Balancing Accuracy and Computational Efficiency for Ternary Gas Hydrate Systems
NASA Astrophysics Data System (ADS)
White, M. D.
2011-12-01
phase transitions. This paper describes and demonstrates a numerical solution scheme for ternary hydrate systems that seeks a balance between accuracy and computational efficiency. The scheme uses a generalized cubic equation of state, functional forms for the hydrate equilibria and cage occupancies, a variable-switching scheme for phase transitions, and kinetic exchange of hydrate formers (i.e., CH4, CO2, and N2) between the mobile phases (i.e., aqueous, liquid CO2, and gas) and the hydrate phase. Accuracy of the scheme will be evaluated by comparing property values and phase equilibria against experimental data. Computational efficiency will be evaluated by comparing the base scheme against variants. The application of interest will be the production of a natural gas hydrate deposit from a geologic formation using the guest-molecule exchange process, in which a mixture of CO2 and N2 is injected into the formation. During the guest-molecule exchange, CO2 and N2 will predominantly replace CH4 in the large and small cages of the sI structure, respectively.
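The "generalized cubic equation of state" mentioned above can be illustrated with a minimal sketch. The snippet below uses the Peng-Robinson form as a stand-in (an assumption; the paper's actual generalized cubic may differ) to compute the gas-phase compressibility factor of CH4 near hydrate-forming conditions, using standard literature critical constants.

```python
import numpy as np

R = 8.314462618  # universal gas constant, J/(mol K)

def pr_compressibility(T, P, Tc, Pc, omega):
    """Largest real root Z of the Peng-Robinson cubic EOS (gas-phase root).

    Illustrative stand-in for a generalized cubic EOS; T, P in K and Pa,
    Tc, Pc, omega are the species' critical constants and acentric factor.
    """
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * (R * Tc)**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    # Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(np.imag(roots)) < 1e-9].real
    return real.max()

# Methane near hydrate-forming conditions: Tc = 190.56 K, Pc = 4.599 MPa
Z = pr_compressibility(T=280.0, P=5.0e6, Tc=190.56, Pc=4.599e6, omega=0.011)
```

At low pressure the same routine returns Z close to 1, the ideal-gas limit, which is a quick sanity check on any cubic-EOS implementation.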
NASA Technical Reports Server (NTRS)
Pulliam, T. H.; Steger, J. L.
1985-01-01
In 1977 and 1978, general-purpose, centrally space-differenced, implicit finite-difference codes in two and three dimensions were introduced. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since their introduction, overall computational efficiency has been improved through a number of algorithmic changes: the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, various ways to reduce inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The objective of the present investigation is to describe these improvements and to quantify their advantages and disadvantages. It is found that, using established and simple procedures, a computer code can be maintained that is competitive with specialized codes.
Clark, Tanner C; Schmidt, Frank H
2013-01-01
Background. Since the introduction of robot-assisted navigation in primary total knee arthroplasty (TKA), there has been little research conducted examining the efficiency and accuracy of the system compared to computer-assisted navigation systems. Objective. To compare the efficiency and accuracy of Praxim robot-assisted navigation (RAN) and Stryker computer-assisted navigation (CAN) in primary TKA. Methods. This was a retrospective study consisting of 52 patients who underwent primary TKA utilizing RAN and 29 patients utilizing CAN. The primary outcome measure was navigation time. Secondary outcome measures included intraoperative final mechanical axis alignment, intraoperative robot-assisted bone cut accuracy, tourniquet time, and hospitalization length. Results. RAN navigation times were, on average, 9.0 minutes shorter compared to CAN after adjustment. The average absolute intraoperative malalignment was 0.5° less in the RAN procedures compared to the CAN procedures after adjustment. Patients in the RAN group tended to be discharged 0.6 days earlier compared to patients in the CAN group after adjustment. Conclusions. Among patients undergoing TKA, there was decreased navigation time, decreased final malalignment, and decreased hospitalization length associated with the use of RAN when compared to CAN independent of age, BMI, and pre-replacement alignment.
NASA Astrophysics Data System (ADS)
Russakoff, Arthur; Li, Yonghui; He, Shenglai; Varga, Kalman
2016-05-01
Time-dependent Density Functional Theory (TDDFT) has become successful because of its balance of economy and accuracy. However, the application of TDDFT to large systems or long time scales remains prohibitively expensive computationally. In this paper, we investigate the numerical stability and accuracy of two subspace propagation methods for solving the time-dependent Kohn-Sham equations with finite and periodic boundary conditions. The bases considered are the Lanczos basis and the adiabatic eigenbasis. The results are compared to a benchmark fourth-order Taylor expansion of the time propagator. Our results show that it is possible to use larger time steps with the subspace methods, leading to computational speedups by a factor of 2-3 over Taylor propagation. Accuracy is found to be maintained for certain energy regimes and small time scales.
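The Lanczos-subspace propagation idea can be sketched on a toy problem. The snippet below assumes a small dense Hermitian matrix in place of the real Kohn-Sham Hamiltonian: one step of exp(-iHΔt)ψ is evaluated exactly within an m-dimensional Krylov subspace, which is the mechanism that permits the larger time steps reported in the abstract.

```python
import numpy as np

def lanczos_step(H, psi, dt, m=8):
    """Advance psi -> exp(-i H dt) psi in an m-dimensional Krylov (Lanczos)
    subspace. Illustrative sketch with a dense Hermitian H; a production
    TDDFT code would apply H matrix-free on a real-space grid."""
    n = len(psi)
    V = np.zeros((m, n), dtype=complex)   # Lanczos vectors (rows)
    alpha = np.zeros(m)                   # tridiagonal diagonal
    beta = np.zeros(m - 1)                # tridiagonal off-diagonal
    V[0] = psi / np.linalg.norm(psi)
    w = H @ V[0]
    alpha[0] = np.real(np.vdot(V[0], w))
    w = w - alpha[0] * V[0]
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        V[j] = w / beta[j - 1]
        w = H @ V[j]
        alpha[j] = np.real(np.vdot(V[j], w))
        w = w - alpha[j] * V[j] - beta[j - 1] * V[j - 1]
    # Exact propagation with the small tridiagonal projection T of H.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    coef = evecs @ (np.exp(-1j * evals * dt) * evecs[0].conj())
    return np.linalg.norm(psi) * (V.T @ coef)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40))
H = (A + A.T) / 2                         # toy Hermitian "Hamiltonian"
psi0 = rng.standard_normal(40) + 0j
psi0 /= np.linalg.norm(psi0)
psi1 = lanczos_step(H, psi0, dt=0.05)
```

Because the subspace propagator is unitary, the norm of the wavefunction is preserved to machine precision even for time steps where a low-order Taylor expansion would already drift.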
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
Ragheb, Hossein; Thacker, Neil A.; Guyader, Jean-Marie; Klein, Stefan; deSouza, Nandita M.; Jackson, Alan
2015-01-01
This study describes post-processing methodologies to reduce the effects of physiological motion in measurements of apparent diffusion coefficient (ADC) in the liver. The aims of the study are to improve the accuracy of ADC measurements in liver disease to support quantitative clinical characterisation and reduce the number of patients required for sequential studies of disease progression and therapeutic effects. Two motion correction methods are compared, one based on non-rigid registration (NRA) using freely available open source algorithms and the other a local-rigid registration (LRA) specifically designed for use with diffusion weighted magnetic resonance (DW-MR) data. Performance of these methods is evaluated using metrics computed from regional ADC histograms on abdominal image slices from healthy volunteers. While the non-rigid registration method has the advantages of being applicable on the whole volume and in a fully automatic fashion, the local-rigid registration method is faster while maintaining the integrity of the biological structures essential for analysis of tissue heterogeneity. Our findings also indicate that the averaging commonly applied to DW-MR images as part of the acquisition protocol should be avoided if possible. PMID:26204105
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting not only for acoustic waves but also for vorticity and entropy waves at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified expansion solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: it involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions, thereby achieving arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is the one that exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased 5.5 to 7 times faster than did that of the minimum-norm solution (the pseudoinverse), and at about the same rate as did that of the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
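The pseudoinverse baseline that the disclosed method is compared against can be sketched in a few lines. The effectiveness matrix, demanded moments, and effector limits below are hypothetical illustrations; the patent's facet-based near-optimal method itself is more involved and is not reproduced here.

```python
import numpy as np

def allocate(B, d_des, u_min, u_max):
    """Minimum-norm (pseudoinverse) control allocation: find effector
    commands u with B @ u = d_des, then clip to physical limits. This is
    one of the baseline solutions the patent's method is compared to."""
    u = np.linalg.pinv(B) @ d_des
    return np.clip(u, u_min, u_max)

# Hypothetical 3 objectives (roll/pitch/yaw moments) and 5 effectors.
B = np.array([[1.0, -1.0, 0.2, 0.0,  0.0],
              [0.5,  0.5, 1.0, 1.0,  0.3],
              [0.1,  0.1, 0.0, 0.4, -1.0]])
d_des = np.array([0.3, 0.5, -0.2])
u = allocate(B, d_des, u_min=-np.ones(5), u_max=np.ones(5))
```

When the demand is attainable without hitting limits, the pseudoinverse reproduces it exactly; its weakness, noted in the abstract, is that clipping at the limits can leave attainable demands unmet, which is what the facet-searching and near-optimal methods address.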
NASA Astrophysics Data System (ADS)
Stockman, Harlan W.; Glass, Robert J.; Cooper, Clay; Rajaram, Harihar
In the presence of buoyancy, multiple diffusion coefficients, and porous media, the dispersion of solutes can be remarkably complex. The lattice-Boltzmann (LB) method is ideal for modeling dispersion in flow through complex geometries; yet, LB models of solute fingers or slugs can suffer from peculiar numerical conditions (e.g., denormal generation) that degrade computational performance by factors of 6 or more. Simple code optimizations recover performance and yield simulation rates up to ~3 million site updates per second on inexpensive, single-CPU systems. Two examples illustrate limits of the methods: (1) Dispersion of solute in a thin duct is often approximated with dispersion between infinite parallel plates. However, Doshi, Daiya and Gill (DDG) showed that for a smooth-walled duct, this approximation is in error by a factor of ~8. But in the presence of wall roughness (found in all real fractures), the DDG phenomenon can be diminished. (2) Double-diffusive convection drives "salt-fingering", a process for mixing of fresh-cold and warm-salty waters in many coastal regions. Fingering experiments are typically performed in Hele-Shaw cells, and can be modeled with the 2D (pseudo-3D) LB method with velocity-proportional drag forces. However, the 2D models cannot capture Taylor-Aris dispersion from the cell walls. We compare 2D and true 3D fingering models against observations from laboratory experiments.
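The "simple code optimizations" for denormal generation can be illustrated directly: clamping vanishingly small distribution-function values to zero before the next update avoids the slow subnormal-arithmetic paths that cause the reported factor-of-6 slowdowns. The threshold below is an assumption, chosen far below any physically meaningful population, so the clamp has no effect on solver accuracy.

```python
import numpy as np

def flush_small(f, eps=1e-30):
    """Zero out vanishingly small lattice-Boltzmann populations in place.
    Subnormal (denormal) floats trigger slow microcode paths on many CPUs;
    clamping values below eps (far under solver accuracy) restores speed."""
    f[np.abs(f) < eps] = 0.0
    return f

# 1e-310 and -2e-320 are subnormal in IEEE-754 double precision.
f = np.array([0.3, 1e-310, -2e-320, 0.05])
f = flush_small(f)
```

In practice the clamp would be applied periodically inside the collide-stream loop; on hardware that supports it, setting flush-to-zero mode in the FPU achieves the same effect with no code change.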
Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees
Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael
2014-01-01
Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has running time that is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210
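For reference, the straightforward quadratic-time SDH computation that the approximate algorithms improve on can be sketched as follows; the bucket width and random point set are arbitrary illustrations, not data from the paper.

```python
import numpy as np

def sdh_naive(points, bucket_width, num_buckets):
    """Straightforward O(N^2) spatial distance histogram: bin every
    pairwise distance into fixed-width buckets (last bucket catches
    any overflow). This is the baseline the paper's algorithms beat."""
    counts = np.zeros(num_buckets, dtype=np.int64)
    for i in range(len(points)):
        d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
        idx = np.minimum((d / bucket_width).astype(np.int64), num_buckets - 1)
        np.add.at(counts, idx, 1)
    return counts

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, size=(200, 3))   # toy particle positions
hist = sdh_naive(pts, bucket_width=2.0, num_buckets=10)
```

Every one of the N(N-1)/2 pairs is counted exactly once, which is why the running time grows quadratically with the number of particles and why approximate algorithms with provable error bounds become attractive at simulation scale.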
Computer guided implantology accuracy and complications.
Bruno, Vincenzo; Badino, Mauro; Riccitiello, Francesco; Spagnuolo, Gianrico; Amato, Massimo
2013-01-01
The computer-based method allows computerized planning of a surgical implantology procedure using computed tomography (CT) of the maxillary bones and prosthesis. The procedure, however, is not error-free unless the operator is well trained and strictly follows the protocol. A 70-year-old edentulous woman requested a lower-jaw implant-supported prosthesis. Computer-guided surgery was planned with immediate loading according to the NobelGuide technique. Prior to surgery, however, new dentures were constructed to adjust the vertical dimension. An interim screwed metal-resin prosthesis was delivered just after the surgery but was removed after only two weeks because of a complication. Finally, a screwed implant bridge was delivered. Computer-guided surgery is a useful procedure when based on accurate 3D CT image data and implant-planning software that minimizes errors.
Efficient Universal Blind Quantum Computation
NASA Astrophysics Data System (ADS)
Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G.
2013-12-01
We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party’s quantum computer without revealing either which computation is performed, or its input and output. The first party’s computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(Jlog2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.
Accuracy and speed in computing the Chebyshev collocation derivative
NASA Technical Reports Server (NTRS)
Don, Wai-Sun; Solomonoff, Alex
1991-01-01
We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. CPU time usage is reported for several different algorithms for computing the derivative by the Chebyshev collocation method, for a wide variety of two-dimensional grid sizes, on both an IBM and a Cray 2 computer. We found that the fastest algorithm on a particular machine depends not only on the grid size but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is faster than the transform-recursion method.
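One well-known remedy for the roundoff problem described above is the "negative sum trick": set each diagonal entry of the differentiation matrix so that every row sums to zero, which forces constants to be differentiated exactly and suppresses the error growth of the naive construction. Whether this matches the authors' exact method is an assumption; the sketch below illustrates the idea.

```python
import numpy as np

def cheb_diff_matrix(n):
    """Chebyshev collocation derivative matrix on the n+1 Gauss-Lobatto
    points x_j = cos(pi j / n), with the negative sum trick: the diagonal
    is chosen so each row sums to zero (derivative of a constant is 0)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)        # collocation points
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c = c * (-1.0) ** np.arange(n + 1)
    X = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (X + np.eye(n + 1))  # off-diagonal entries
    D = D - np.diag(D.sum(axis=1))                  # negative sum trick
    return D, x

D, x = cheb_diff_matrix(32)
# Spectral accuracy check: differentiate sin(x) on [-1, 1].
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
```

With the trick in place, the matrix-vector product differentiates smooth functions to near machine precision even at larger n, consistent with the paper's conclusion that accurately computed entries make the matrix method competitive with the transform-recursion algorithm.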
Thermal radiation view factor: Methods, accuracy and computer-aided procedures
NASA Technical Reports Server (NTRS)
Kadaba, P. V.
1982-01-01
Computer-aided thermal analysis programs that predict whether orbiting equipment will remain within a predetermined acceptable temperature range, in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for determining the view factors. Basic definitions and the standard methods that form the basis for various digital-computer and numerical methods are presented. The physical models and mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. Situations in which accuracy is important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the programs available at several centers, and future choices for efficient use of digital computers, are included in the recommendations.
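A minimal Monte Carlo sketch of one numerical scheme such programs employ: cosine-weighted diffuse rays are cast from random points on an emitting surface, and the fraction that strikes the receiving surface estimates the view factor. The geometry here (two directly opposed unit squares separated by one unit) is a textbook case with a known analytic value near 0.1998; the ray count is an arbitrary choice.

```python
import numpy as np

def view_factor_mc(n=200_000, seed=0):
    """Monte Carlo view factor from a unit square at z=0 to an identical,
    directly opposed unit square at z=1: sample emission points uniformly,
    sample directions with a cosine-weighted (diffuse) distribution, and
    count rays landing inside the receiver."""
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(size=n)
    y0 = rng.uniform(size=n)
    phi = 2.0 * np.pi * rng.uniform(size=n)
    cos_t = np.sqrt(1.0 - rng.uniform(size=n))   # cosine-weighted, in (0, 1]
    sin_t = np.sqrt(1.0 - cos_t**2)
    t = 1.0 / cos_t                              # parameter where ray meets z = 1
    xh = x0 + sin_t * np.cos(phi) * t
    yh = y0 + sin_t * np.sin(phi) * t
    hits = (xh >= 0.0) & (xh <= 1.0) & (yh >= 0.0) & (yh <= 1.0)
    return hits.mean()

F = view_factor_mc()   # analytic value for this geometry is about 0.1998
```

The statistical error shrinks as the inverse square root of the ray count, which is the accuracy/time trade-off the survey weighs against contour-integration and area-integration schemes.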
Accuracy of computer-assisted implant placement with insertion templates
Naziri, Eleni; Schramm, Alexander; Wilde, Frank
2016-01-01
Objectives: The purpose of this study was to assess the accuracy of computer-assisted implant insertion based on computed tomography and template-guided implant placement. Material and methods: A total of 246 implants were placed with the aid of 3D-based transfer templates in 181 consecutive partially edentulous patients. Five groups were formed on the basis of different implant systems, surgical protocols and guide sleeves. After virtual implant planning with the CoDiagnostiX software, surgical guides were fabricated in a dental laboratory. After implant insertion, the actual implant position was registered intraoperatively and transferred to a model cast. Deviations between the preoperative plan and postoperative implant position were measured in a follow-up computed tomography of the patient's model casts and image fusion with the preoperative computed tomography. Results: The median deviation between the preoperative plan and the postoperative implant position was 1.0 mm at the implant shoulder and 1.4 mm at the implant apex. The median angular deviation was 3.6°. There were significantly smaller angular deviations (P<0.001) and significantly lower deviations at the apex (P=0.008) in implants placed for a single-tooth restoration than in those placed at a free-end dental arch. The location of the implant, whether in the upper or lower jaw, did not significantly affect deviations. Increasing implant length had a significant negative influence on deviations from the planned implant position. There was only one significant difference between two of the five implant systems used. Conclusion: The data of this clinical study demonstrate accurate and predictable implant placement when using laboratory-fabricated surgical guides based on computed tomography. PMID:27274440
Fukuda, Ryoichi; Ehara, Masahiro
2014-10-21
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme, based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM), for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations using the PCM Hartree-Fock orbitals and integrals, except for additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. The method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra, and their solvatochromism, of push-pull-type 2,2′-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest theoretically consistent extension of the SAC-CI method to include the PCM environment, and it is therefore useful for theoretical and computational spectroscopy.
NASA Technical Reports Server (NTRS)
Ecer, A.; Akay, H. U.
1981-01-01
The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.
Localization accuracy of sphere fiducials in computed tomography images
NASA Astrophysics Data System (ADS)
Kobler, Jan-Philipp; Díaz Díaz, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias
2014-03-01
In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms as well as two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher compared to MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross correlation for localization, and interpolating the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable considering the optimization of future microstereotactic frame prototypes as well as the operative workflow.
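A simple baseline for sphere-fiducial localization, an intensity-weighted centroid on a synthetic volume, illustrates how a sub-voxel fiducial localization error (FLE) is measured against a known ground-truth center. The blob parameters and threshold below are arbitrary assumptions; the study's template-matching localizers are more sophisticated than this sketch.

```python
import numpy as np

def localize_sphere(img, threshold):
    """Intensity-weighted centroid of voxels above threshold: a simple
    fiducial localization baseline. Returns the estimated center in
    (z, y, x) voxel coordinates."""
    w = np.where(img > threshold, img, 0.0)
    idx = np.indices(img.shape)
    return np.array([(i * w).sum() / w.sum() for i in idx])

# Synthetic CT-like volume: a Gaussian blob at a known sub-voxel center.
z, y, x = np.indices((21, 21, 21))
center = np.array([10.3, 9.7, 10.1])
img = np.exp(-((z - center[0])**2 + (y - center[1])**2 + (x - center[2])**2) / 8.0)

est = localize_sphere(img, threshold=0.1)
fle = np.linalg.norm(est - center)   # fiducial localization error, in voxels
```

Even this crude centroid recovers the center to a small fraction of a voxel on clean synthetic data; the study's point is how much noise, imaging modality, material, and interpolation degrade or improve that figure in real CT volumes.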
Efficient computation of optimal actions.
Todorov, Emanuel
2009-07-14
Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
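The claim that "the problem becomes linear" can be illustrated on a toy discrete problem: in the linearly solvable formulation, the exponentiated cost-to-go z = exp(-v) satisfies a linear fixed-point equation under the passive dynamics, so no maximization over actions is needed. The chain below is a made-up example for illustration, not taken from the paper.

```python
import numpy as np

def desirability(P, q, absorbing, iters=2000):
    """Linearly solvable MDP sketch: the desirability z = exp(-v) obeys the
    linear equation z = exp(-q) * (P @ z) under passive dynamics P and state
    costs q, with z pinned at absorbing (goal) states. Solved here by plain
    fixed-point iteration; returns the optimal cost-to-go v."""
    z = np.ones(len(q))
    G = np.exp(-q)
    for _ in range(iters):
        z = G * (P @ z)
        z[absorbing] = G[absorbing]
    return -np.log(z)

# Toy 5-state chain: passive dynamics is a lazy random walk, state 4 is the
# absorbing goal (zero cost); every other state costs 1 per step.
P = np.array([[0.50, 0.50, 0.00, 0.00, 0.00],
              [0.25, 0.50, 0.25, 0.00, 0.00],
              [0.00, 0.25, 0.50, 0.25, 0.00],
              [0.00, 0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.00, 0.00, 1.00]])
q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
v = desirability(P, q, absorbing=np.array([4]))
```

The cost-to-go v decreases monotonically toward the goal, and the optimal controlled transition probabilities follow directly as P(x,y) z(y) up to normalization, with no exhaustive search over actions.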
Analysis of deformable image registration accuracy using computational modeling
Zhong, Hualiang; Kim, Jinkoo; Chetty, Indrin J.
2010-03-15
Computer-aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: one, a synthesized, low-intensity-gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters, with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single-resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low-gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low-gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms. The results show that parameter
Assessment of optical localizer accuracy for computer aided surgery systems.
Elfring, Robert; de la Fuente, Matías; Radermacher, Klaus
2010-01-01
The technology for localization of surgical tools with respect to the patient's reference coordinate system in three to six degrees of freedom is one of the key components in computer aided surgery. Several tracking methods are available, of which optical tracking is the most widespread in clinical use. Optical tracking technology has proven to be a reliable method for intra-operative position and orientation acquisition in many clinical applications; however, the accuracy of such localizers is still a topic of discussion. In this paper, the accuracy of three optical localizer systems, the NDI Polaris P4, the NDI Polaris Spectra (in active and passive mode) and the Stryker Navigation System II camera, is assessed and compared critically. Static tests revealed that only the Polaris P4 shows significant warm-up behavior, with a significant shift of accuracy being observed within 42 minutes of being switched on. Furthermore, the intrinsic localizer accuracy was determined for single markers as well as for tools using a volumetric measurement protocol on a coordinate measurement machine. To determine the relative distance error within the measurement volume, the Length Measurement Error (LME) was determined at 35 test lengths. As accuracy depends strongly on the marker configuration employed, the error to be expected in typical clinical setups was estimated in a simulation for different tool configurations. The two active localizer systems, the Stryker Navigation System II camera and the Polaris Spectra (active mode), showed the best results, with trueness values (mean +/- standard deviation) of 0.058 +/- 0.033 mm and 0.089 +/- 0.061 mm, respectively. The Polaris Spectra (passive mode) showed a trueness of 0.170 +/- 0.090 mm, and the Polaris P4 showed the lowest trueness at 0.272 +/- 0.394 mm with a higher number of outliers than for the other cameras. The simulation of the different tool configurations in a typical clinical setup revealed that the tracking error can
An Automatic K-Point Grid Generation Scheme for Enhanced Efficiency and Accuracy in DFT Calculations
NASA Astrophysics Data System (ADS)
Mohr, Jennifer A.-F.; Shepherd, James J.; Alavi, Ali
2013-03-01
We seek to create an automatic k-point grid generation scheme for density functional theory (DFT) calculations that improves the efficiency and accuracy of the calculations and is suitable for use in high-throughput computations. Current automated k-point generation schemes often result in calculations with insufficient k-points, which reduces the reliability of the results, or too many k-points, which can significantly increase computational cost. By controlling a wider range of k-point grid densities for the Brillouin zone based upon factors of conductivity and symmetry, a scalable k-point grid generation scheme can lower calculation runtimes and improve the accuracy of energy convergence. Johns Hopkins University
Improving the Accuracy of CT Colonography Interpretation: Computer-Aided Diagnosis
Summers, Ronald M.
2010-01-01
Synopsis Computer-aided polyp detection aims to improve the accuracy of colonography interpretation. The computer searches the colonic wall for polyp-like protrusions and presents a list of suspicious areas to a physician for further analysis. Computer-aided polyp detection has developed rapidly over the past decade and, in the laboratory setting, has sensitivities comparable to those of experts. Computer-aided polyp detection tends to help inexperienced readers more than experienced ones and may also lead to small reductions in specificity. In its currently proposed use as an adjunct to standard image interpretation, computer-aided polyp detection serves as a spellchecker rather than an efficiency enhancer. PMID:20451814
Accuracy of Computer-Generated, Spanish-Language Medicine Labels
Sharif, Iman; Tse, Julia
2011-01-01
OBJECTIVE We evaluated the accuracy of translated, Spanish-language medicine labels among pharmacies in a borough with a large Spanish-speaking population. METHODS A cross-sectional, telephone survey of all pharmacies in the Bronx, New York, was performed. Selected pharmacies were visited to learn about the computer software being used to generate Spanish medicine labels. Outcomes included the proportion of pharmacies providing Spanish medicine labels, frequency of computerized translation, and description of Spanish medicine labels produced. RESULTS Of 316 pharmacies, 286 (91%) participated. Overall, 209 (73%) provided medicine labels in Spanish. Independent pharmacies were significantly more likely to provide Spanish labels than were hospital or chain pharmacies (88% vs 57% vs 32%; P < .0001). Pharmacies that provided Spanish labels most commonly (86%) used computer programs to do so; 11% used lay staff members, and 3% used a professional interpreter. We identified 14 different computer programs used to generate Spanish labels, with 70% of pharmacies using 1 of 3 major programs. We evaluated 76 medicine labels generated by 13 different computer programs. Overall, 32 Spanish labels (43%) included incomplete translations (a mixture of English and Spanish), and 6 additional labels contained misspellings or grammar errors, which resulted in an overall error rate of 50%. CONCLUSIONS Although pharmacies were likely to provide medicine labels translated into Spanish, the quality of the translations was inconsistent and potentially hazardous. Unless regulations and funding support the technological advances needed to ensure the safety of such labeling, we risk perpetuating health disparities for populations with limited English proficiency. PMID:20368321
Stratified computed tomography findings improve diagnostic accuracy for appendicitis
Park, Geon; Lee, Sang Chul; Choi, Byung-Jo; Kim, Say-June
2014-01-01
AIM: To improve the diagnostic accuracy in patients with symptoms and signs of appendicitis, but without confirmative computed tomography (CT) findings. METHODS: We retrospectively reviewed the database of 224 patients who had been operated on for the suspicion of appendicitis, but whose CT findings were negative or equivocal for appendicitis. The patient population was divided into two groups: a pathologically proven appendicitis group (n = 177) and a non-appendicitis group (n = 47). The CT images of these patients were re-evaluated according to the characteristic CT features as described in the literature. The re-evaluations and baseline characteristics of the two groups were compared. RESULTS: The two groups showed significant differences with respect to appendiceal diameter, and the presence of periappendiceal fat stranding and intraluminal air in the appendix. A larger proportion of patients in the appendicitis group showed distended appendices larger than 6.0 mm (66.3% vs 37.0%; P < 0.001), periappendiceal fat stranding (34.1% vs 8.9%; P = 0.001), and the absence of intraluminal air (67.6% vs 48.9%; P = 0.024) compared to the non-appendicitis group. Furthermore, the presence of two or more of these factors increased the odds ratio to 6.8 times higher than baseline (95%CI: 3.013-15.454; P < 0.001). CONCLUSION: Appendiceal diameter and wall thickening, fat stranding, and absence of intraluminal air can be used to increase diagnostic accuracy for appendicitis with equivocal CT findings. PMID:25320531
Computing Efficiency Of Transfer Of Microwave Power
NASA Technical Reports Server (NTRS)
Pinero, L. R.; Acosta, R.
1995-01-01
BEAM computer program enables user to calculate microwave power-transfer efficiency between two circular apertures at arbitrary range. Power-transfer efficiency obtained numerically. Two apertures have generally different sizes and arbitrary taper illuminations. BEAM also analyzes effect of distance and taper illumination on transmission efficiency for two apertures of equal size. Written in FORTRAN.
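BEAM evaluates the transfer integral numerically, which is what makes arbitrary ranges and taper illuminations tractable. For orientation, in the far-field, uniform-illumination limit the power-transfer ratio reduces to the textbook Friis aperture form; the sketch below shows only that limiting case (function name and parameters are illustrative, not part of BEAM):

```python
import math

def friis_aperture_efficiency(d_t, d_r, wavelength, R):
    # Far-field (Friis) power-transfer ratio between two circular
    # apertures of diameters d_t and d_r separated by range R:
    #   P_r / P_t = A_t * A_r / (lambda^2 * R^2)
    a_t = math.pi * (d_t / 2.0) ** 2   # transmitting aperture area
    a_r = math.pi * (d_r / 2.0) ** 2   # receiving aperture area
    return a_t * a_r / (wavelength ** 2 * R ** 2)
```

This approximation breaks down at close range, where a numerical treatment such as BEAM's is required.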
Efficient computation of parameter confidence intervals
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.
1987-01-01
An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
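The likelihood-ratio approach inverts the asymptotic chi-square distribution of the log-likelihood-ratio statistic: the confidence interval is the set of parameter values whose deficit 2(l(theta_hat) - l(theta)) stays below the critical value. A minimal sketch of the mechanics for a one-parameter exponential model (an illustrative toy, not the aircraft-derivative setting of the paper):

```python
import math

def loglik(lam, n, sum_x):
    # Exponential-model log-likelihood for n observations with sum sum_x
    return n * math.log(lam) - lam * sum_x

def lr_interval(n, sum_x, crit=3.841):
    # Likelihood-ratio interval: all lam with
    # 2 * (l(mle) - l(lam)) <= crit  (chi^2_1 95% critical value)
    mle = n / sum_x
    l_max = loglik(mle, n, sum_x)

    def deficit(lam):
        return 2.0 * (l_max - loglik(lam, n, sum_x)) - crit

    def bisect(lo, hi):
        # deficit changes sign exactly once on each side of the MLE
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if deficit(lo) * deficit(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    return bisect(mle * 1e-3, mle), bisect(mle, mle * 1e3)
```

Unlike the Cramer-Rao bound, this interval follows the actual shape of the likelihood surface, so it need not be symmetric about the estimate.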
Value and Accuracy of Multidetector Computed Tomography in Obstructive Jaundice
Mathew, Rishi Philip; Moorkath, Abdunnisar; Basti, Ram Shenoy; Suresh, Hadihally B.
2016-01-01
Summary Background Objective: To find out the role of MDCT in the evaluation of obstructive jaundice with respect to the cause and level of the obstruction, and its accuracy. To identify the advantages of MDCT with respect to other imaging modalities. To correlate MDCT findings with histopathology/surgical findings/Endoscopic Retrograde CholangioPancreatography (ERCP) findings as applicable. Material/Methods This was a prospective study conducted over a period of one year from August 2014 to August 2015. Data were collected from 50 patients with clinically suspected obstructive jaundice. CT findings were correlated with histopathology/surgical findings/ERCP findings as applicable. Results Among the 50 people studied, males and females were equal in number, and the majority belonged to the 41–60 year age group. The major cause for obstructive jaundice was choledocholithiasis. MDCT with reformatting techniques was very accurate in picking a mass as the cause for biliary obstruction and was able to differentiate a benign mass from a malignant one with high accuracy. There was 100% correlation between the CT diagnosis and the final diagnosis regarding the level and type of obstruction. MDCT was able to determine the cause of obstruction with an accuracy of 96%. Conclusions MDCT with good reformatting techniques has excellent accuracy in the evaluation of obstructive jaundice with regard to the level and cause of obstruction. PMID:27429673
Efficient computations of quantum canonical Gibbs state in phase space
NASA Astrophysics Data System (ADS)
Bondar, Denys I.; Campos, Andre G.; Cabrera, Renan; Rabitz, Herschel A.
2016-06-01
The Gibbs canonical state, as a maximum entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long standing problem for computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in the phase space. Furthermore, the algorithms are provided yielding high quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computation framework for nonequilibrium quantum simulations directly in the Wigner representation.
A Computationally Efficient Algorithm for Aerosol Phase Equilibrium
Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.; Wexler, Anthony S.
2004-10-04
Three-dimensional models of atmospheric inorganic aerosols need an accurate yet computationally efficient thermodynamic module that is repeatedly used to compute internal aerosol phase state equilibrium. In this paper, we describe the development and evaluation of a computationally efficient numerical solver called MESA (Multicomponent Equilibrium Solver for Aerosols). The unique formulation of MESA allows iteration of all the equilibrium equations simultaneously while maintaining overall mass conservation and electroneutrality in both the solid and liquid phases. MESA is unconditionally stable, shows robust convergence, and typically requires only 10 to 20 single-level iterations (where all activity coefficients and aerosol water content are updated) per internal aerosol phase equilibrium calculation. Accuracy of MESA is comparable to that of the highly accurate Aerosol Inorganics Model (AIM), which uses a rigorous Gibbs free energy minimization approach. Performance evaluation will be presented for a number of complex multicomponent mixtures commonly found in urban and marine tropospheric aerosols.
Real-time lens distortion correction: speed, accuracy and efficiency
NASA Astrophysics Data System (ADS)
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
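The paper's contribution is tabulating the correction on a polar mesh and letting texture-mapping hardware interpolate it; the underlying per-pixel operation is inverting a radial distortion model. A minimal sketch of that inversion by fixed-point iteration (the two-coefficient polynomial model and the parameter names are illustrative assumptions, not the paper's calibration):

```python
def undistort(xd, yd, k1, k2, cx=0.0, cy=0.0, iters=20):
    # Invert the radial model  xd = x * (1 + k1*r^2 + k2*r^4)
    # (r measured from the distortion centre) by fixed-point iteration.
    # A mesh-based corrector would evaluate this once per mesh vertex
    # and let the GPU interpolate between vertices.
    x, y = xd - cx, yd - cy
    xu, yu = x, y                      # initial guess: no distortion
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / f, y / f
    return xu + cx, yu + cy
```

For mild distortion the iteration contracts quickly, so a small fixed iteration count suffices for sub-pixel accuracy.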
NASA Astrophysics Data System (ADS)
Sibaev, Marat; Crittenden, Deborah L.
2016-08-01
This work describes the benchmarking of a vibrational configuration interaction (VCI) algorithm that combines the favourable computational scaling of VPT2 with the algorithmic robustness of VCI, in which VCI basis states are selected according to the magnitude of their contribution to the VPT2 energy, for the ground state and fundamental excited states. Particularly novel aspects of this work include: expanding the potential to 6th order in normal mode coordinates, using a double-iterative procedure in which configuration selection and VCI wavefunction updates are performed iteratively (micro-iterations) over a range of screening threshold values (macro-iterations), and characterisation of computational resource requirements as a function of molecular size. Computational costs may be further reduced by a priori truncation of the VCI wavefunction according to maximum extent of mode coupling, along with discarding negligible force constants and VCI matrix elements, and formulating the wavefunction in a harmonic oscillator product basis to enable efficient evaluation of VCI matrix elements. Combining these strategies, we define a series of screening procedures that scale as O(Nmode^6)-O(Nmode^9) in run time and O(Nmode^6)-O(Nmode^7) in memory, depending on the desired level of accuracy. Our open-source code is freely available for download from http://www.sourceforge.net/projects/pyvci-vpt2.
Accuracy vs. computational time: translating aortic simulations to the clinic.
Brown, Alistair G; Shi, Yubing; Marzo, Alberto; Staicu, Cristina; Valverde, Isra; Beerbaum, Philipp; Lawford, Patricia V; Hose, D Rodney
2012-02-01
State of the art simulations of aortic haemodynamics feature full fluid-structure interaction (FSI) and coupled 0D boundary conditions. Such analyses require not only significant computational resource but also weeks to months of run time, which compromises the effectiveness of their translation to a clinical workflow. This article employs three computational fluid methodologies, of varying levels of complexity with coupled 0D boundary conditions, to simulate the haemodynamics within a patient-specific aorta. The most comprehensive model is a full FSI simulation. The simplest is a rigid walled incompressible fluid simulation while an alternative middle-ground approach employs a compressible fluid, tuned to elicit a response analogous to the compliance of the aortic wall. The results demonstrate that, in the context of certain clinical questions, the simpler analysis methods may capture the important characteristics of the flow field.
Computationally efficient method to construct scar functions
NASA Astrophysics Data System (ADS)
Revuelta, F.; Vergini, E. G.; Benito, R. M.; Borondo, F.
2012-02-01
The performance of a simple method [E. L. Sibert III, E. Vergini, R. M. Benito, and F. Borondo, New J. Phys. 10, 053016 (2008)] to efficiently compute scar functions along unstable periodic orbits with complicated trajectories in configuration space is discussed, using a classically chaotic two-dimensional quartic oscillator as an illustration.
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Wornom, Stephen F.
1991-01-01
Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Wornom, Stephen F.
1990-01-01
In the present study, two codes which solve the three-dimensional Thin-Layer Navier-Stokes (TLNS) equations are used to compute the steady-state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.
NASA Astrophysics Data System (ADS)
Brzeziński, Dariusz W.; Ostalczyk, Piotr
2016-11-01
The aim of the paper is to gather and survey various formulas for fractional-order derivative and integral calculations according to their Grünwald-Letnikov definition. The paper presents evaluation results for several factors that influence the accuracy and efficiency of the computations: different formulas for calculating the coefficients, different forms of the formula, and different schemes for discretizing the function, including using more points per step of summation. It also hints at how to solve some of the serious programming issues that arise when applying the formulas. The objective is to determine a recipe for increasing the default low computational accuracy and efficiency of this popular method of fractional-order derivative and integral calculation.
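In the Grünwald-Letnikov definition, a derivative of order alpha is a weighted sum of function samples at a uniform step h, with weights given by generalized binomial coefficients; evaluating those coefficients by recurrence rather than from factorials is one of the accuracy/efficiency choices of the kind the paper compares. A minimal sketch, assuming a finite memory of n terms:

```python
def gl_coeffs(alpha, n):
    # Grünwald-Letnikov coefficients via the stable recurrence
    # c_0 = 1,  c_k = c_{k-1} * (1 - (alpha + 1) / k)
    c = [1.0]
    for k in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    return c

def gl_derivative(f, t, alpha, h, n):
    # D^alpha f(t) ≈ h^(-alpha) * sum_k c_k * f(t - k*h)
    c = gl_coeffs(alpha, n)
    return sum(ck * f(t - k * h) for k, ck in enumerate(c)) / h ** alpha
```

For integer alpha the coefficients truncate to the classical finite-difference weights, e.g. alpha = 1 gives (f(t) - f(t-h)) / h.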
Changing computing paradigms towards power efficiency
Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro
2014-01-01
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
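The low/high-precision combination described for linear solvers is classically realized as mixed-precision iterative refinement: factor and solve in the fast, energy-cheap low precision, then correct with residuals accumulated in high precision. A minimal NumPy sketch of the idea (the authors' actual energy-aware solver stack is more elaborate):

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    # Solve A x = b: the expensive solves run in float32, while the
    # residual and the accumulated solution stay in float64.
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # high-precision residual
        d = np.linalg.solve(A32, r.astype(np.float32))  # cheap correction solve
        x = x + d.astype(np.float64)
    return x
```

For well-conditioned systems the refinement converges to full float64 accuracy while the dominant work is done in the cheaper precision.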
Efficient communication in massively parallel computers
Cypher, R.E.
1989-01-01
A fundamental operation in parallel computation is sorting. Sorting is important not only because it is required by many algorithms, but also because it can be used to implement irregular, pointer-based communication. The author studies two algorithms for sorting in massively parallel computers. First, he examines Shellsort. Shellsort is a sorting algorithm that is based on a sequence of parameters called increments. Shellsort can be used to create a parallel sorting device known as a sorting network. Researchers have suggested that if the correct increment sequence is used, an optimal size sorting network can be obtained. All published increment sequences have been monotonically decreasing. He shows that no monotonically decreasing increment sequence will yield an optimal size sorting network. Second, he presents a sorting algorithm called Cubesort. Cubesort is the fastest known sorting algorithm for a variety of parallel computers over a wide range of parameters. He also presents a paradigm for developing parallel algorithms that have efficient communication. The paradigm, called the data reduction paradigm, consists of using a divide-and-conquer strategy. Both the division and combination phases of the divide-and-conquer algorithm may require irregular, pointer-based communication between processors. However, the problem is divided so as to limit the amount of data that must be communicated. As a result the communication can be performed efficiently. He presents data reduction algorithms for the image component labeling problem, the closest pair problem and four versions of the parallel prefix problem.
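Shellsort's behaviour is governed entirely by its increment sequence, which is the object of the negative result above. A minimal sequential sketch using Knuth's decreasing increments (one common choice; the optimal-network question concerns the existence of better, non-monotonic sequences):

```python
def shellsort(a, gaps=None):
    # Shellsort: gapped insertion sort at a decreasing sequence of increments.
    a = list(a)
    n = len(a)
    if gaps is None:
        # Knuth's (3^k - 1)/2 increments: 1, 4, 13, 40, ... reversed
        gaps, g = [], 1
        while g < n:
            gaps.append(g)
            g = 3 * g + 1
        gaps.reverse()
    for gap in gaps:
        for i in range(gap, n):
            v, j = a[i], i
            # insertion sort within the gap-spaced subsequence
            while j >= gap and a[j - gap] > v:
                a[j] = a[j - gap]
                j -= gap
            a[j] = v
    return a
```

In a sorting network, each gap pass corresponds to a layer of comparators, which is why the total network size tracks the increment sequence so directly.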
Convolutional networks for fast, energy-efficient neuromorphic computing
Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.
2016-01-01
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489
Increasing computational efficiency of cochlear models using boundary layers
NASA Astrophysics Data System (ADS)
Alkhairy, Samiya A.; Shera, Christopher A.
2015-12-01
Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution
Accuracy-rate tradeoffs: how do enzymes meet demands of selectivity and catalytic efficiency?
Tawfik, Dan S
2014-08-01
I discuss some physico-chemical and evolutionary aspects of enzyme accuracy (selectivity, specificity) and speed (turnover rate, processivity). Accuracy can be a beneficial side-product of active-sites being refined to proficiently convert a given substrate into one product. However, exclusion of undesirable, non-cognate substrates is also an explicitly evolved trait that may come with a cost. I define two schematic mechanisms. Ground-state discrimination applies to enzymes where selectivity is achieved primarily at the level of substrate binding. Exemplified by DNA methyltransferases and the ribosome, ground-state discrimination imposes strong accuracy-rate tradeoffs. Alternatively, transition-state discrimination, applies to relatively small substrates where substrate binding and chemistry are efficiently coupled, and evokes weaker tradeoffs. Overall, the mechanistic, structural and evolutionary basis of enzymatic accuracy-rate tradeoffs merits deeper understanding.
Efficient computations with the likelihood ratio distribution.
Kruijver, Maarten
2015-01-01
What is the probability that the likelihood ratio exceeds a threshold t, if a specified hypothesis is true? This question is asked, for instance, when performing power calculations for kinship testing, when computing true and false positive rates for familial searching and when computing the power of discrimination of a complex mixture. Answering this question is not straightforward, since there are a huge number of possible genotypic combinations to consider. Different solutions are found in the literature. Several authors estimate the threshold exceedance probability using simulation. Corradi and Ricciardi [1] propose a discrete approximation to the likelihood ratio distribution which yields a lower and upper bound on the probability. Nothnagel et al. [2] use the normal distribution as an approximation to the likelihood ratio distribution. Dørum et al. [3] introduce an algorithm that can be used for exact computation, but this algorithm is computationally intensive, unless the threshold t is very large. We present three new approaches to the problem. Firstly, we show how importance sampling can be used to make the simulation approach significantly more efficient. Importance sampling is a statistical technique that turns out to work well in the current context. Secondly, we present a novel algorithm for computing exceedance probabilities. The algorithm is exact, fast and can handle relatively large problems. Thirdly, we introduce an approach that combines the novel algorithm with the discrete approximation of Corradi and Ricciardi. This last approach can be applied to very large problems and yields a lower and upper bound on the exceedance probability. The use of the different approaches is illustrated with examples from forensic genetics, such as kinship testing, familial searching and mixture interpretation. The algorithms are implemented in an R-package called DNAprofiles, which is freely available from CRAN.
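Importance sampling estimates a small exceedance probability by drawing from a proposal distribution under which the rare event is common, then reweighting each draw by the density ratio. A generic sketch for P(Z > t) with Z ~ N(0,1), using a proposal shifted to the threshold (illustrative only; the paper applies the idea to likelihood-ratio distributions over genotype combinations):

```python
import math
import random

def tail_prob_is(t, n=100_000, seed=1):
    # Estimate P(Z > t), Z ~ N(0,1), by sampling z ~ N(t,1) and
    # reweighting by the likelihood ratio
    #   phi(z) / phi(z - t) = exp(-t*z + t^2/2)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(t, 1.0)          # proposal centred at the threshold
        if z > t:
            total += math.exp(-t * z + 0.5 * t * t)
    return total / n
```

Because roughly half the proposal draws land in the rare region, the estimator's variance is orders of magnitude smaller than naive Monte Carlo for large t.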
Zhang, D.; Rahnema, F.
2013-07-01
The coarse mesh transport method (COMET) is a highly accurate and efficient computational tool which predicts whole-core neutronics behaviors for heterogeneous reactor cores via a pre-computed eigenvalue-dependent response coefficient (function) library. Recently, a high order perturbation method was developed to significantly improve the efficiency of the library generation method. In that work, the method's accuracy and efficiency was tested in a small PWR benchmark problem. This paper extends the application of the perturbation method to include problems typical of the other water reactor cores such as BWR and CANDU bundles. It is found that the response coefficients predicted by the perturbation method for typical BWR bundles agree very well with those directly computed by the Monte Carlo method. The average and maximum relative errors in the surface-to-surface response coefficients are 0.02%-0.05% and 0.06%-0.25%, respectively. For CANDU bundles, the corresponding quantities are 0.01%-0.05% and 0.04% -0.15%. It is concluded that the perturbation method is highly accurate and efficient with a wide range of applicability. (authors)
A more efficient anisotropic mesh adaptation for the computation of Lagrangian coherent structures
NASA Astrophysics Data System (ADS)
Fortin, A.; Briffard, T.; Garon, A.
2015-03-01
The computation of Lagrangian coherent structures is increasingly used in fluid mechanics to determine subtle fluid flow structures. We present in this paper a new adaptive method for the efficient computation of the Finite Time Lyapunov Exponent (FTLE), from which Lagrangian coherent structures can be obtained. This new adaptive method considerably reduces the computational burden without any loss of accuracy in the FTLE field.
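For readers unfamiliar with the FTLE, here is a minimal non-adaptive sketch (not the paper's adaptive method): the flow map is differentiated by finite differences and the FTLE is the exponential growth rate of the largest singular value of that Jacobian. The flow here is a toy linear saddle with an analytic solution, so the exact answer over any horizon is 1.

```python
import math

# Toy saddle flow (dx/dt, dy/dt) = (x, -y); trajectories are analytic, so
# the finite-difference FTLE can be checked against the exact value of 1.
def flow_map(x, y, T):
    """Exact solution of the saddle-flow ODE after time T."""
    return x * math.exp(T), y * math.exp(-T)

def ftle(x, y, T, h=1e-4):
    """FTLE = (1/T) * log(sigma_max(J)), with the flow-map Jacobian J
    approximated by central finite differences of step h."""
    xr, yr = flow_map(x + h, y, T); xl, yl = flow_map(x - h, y, T)
    xu, yu = flow_map(x, y + h, T); xd, yd = flow_map(x, y - h, T)
    a, b = (xr - xl) / (2 * h), (xu - xd) / (2 * h)   # dX/dx, dX/dy
    c, d = (yr - yl) / (2 * h), (yu - yd) / (2 * h)   # dY/dx, dY/dy
    # Largest eigenvalue of the Cauchy-Green tensor C = J^T J, closed form
    p, q, r = a * a + c * c, b * b + d * d, a * b + c * d
    lam = 0.5 * (p + q + math.sqrt((p - q) ** 2 + 4 * r * r))
    return math.log(math.sqrt(lam)) / T
```

In a real computation the flow map comes from numerically integrating particle trajectories on a mesh, which is exactly the cost that adaptive meshing targets.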
A primer on the energy efficiency of computing
Koomey, Jonathan G.
2015-03-30
The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.
Accuracy and Calibration of Computational Approaches for Inpatient Mortality Predictive Modeling.
Nakas, Christos T; Schütz, Narayan; Werners, Marcus; Leichtle, Alexander B
2016-01-01
Electronic Health Record (EHR) data can be a key resource for decision-making support in clinical practice in the "big data" era. The complete database from early 2012 to late 2015 involving hospital admissions to Inselspital Bern, the largest Swiss University Hospital, was used in this study, involving over 100,000 admissions. Age, sex, and initial laboratory test results were the features/variables of interest for each admission, the outcome being inpatient mortality. Computational decision support systems were utilized for the calculation of the risk of inpatient mortality. We assessed the recently proposed Acute Laboratory Risk of Mortality Score (ALaRMS) model, and further built generalized linear models, generalized estimating equations, artificial neural networks, and decision tree systems for the predictive modeling of the risk of inpatient mortality. The Area Under the ROC Curve (AUC) for ALaRMS marginally corresponded to the anticipated accuracy (AUC = 0.858). Penalized logistic regression methodology provided a better result (AUC = 0.872). Decision tree and neural network-based methodology provided even higher predictive performance (up to AUC = 0.912 and 0.906, respectively). Additionally, decision tree-based methods can efficiently handle Electronic Health Record (EHR) data that have a significant amount of missing records (in up to >50% of the studied features) eliminating the need for imputation in order to have complete data. In conclusion, we show that statistical learning methodology can provide superior predictive performance in comparison to existing methods and can also be production ready. Statistical modeling procedures provided unbiased, well-calibrated models that can be efficient decision support tools for predicting inpatient mortality and assigning preventive measures. PMID:27414408
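A minimal sketch of the penalized-logistic-regression-plus-AUC pipeline described above, on synthetic data (feature names, sizes, and coefficients are illustrative stand-ins, not the Inselspital EHR data):

```python
import numpy as np

# Synthetic cohort: 5 standardized features (think age, sex, lab values)
# and a binary inpatient-mortality outcome generated from a logistic model.
rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.normal(size=(n, p))
true_w = np.array([1.0, -0.5, 0.8, 0.0, 0.3])
prob = 1 / (1 + np.exp(-(X @ true_w - 1.0)))
y = rng.random(n) < prob

def fit_ridge_logistic(X, y, lam=1.0, lr=0.1, iters=500):
    """Gradient descent on the L2-penalized logistic log-loss."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        pred = 1 / (1 + np.exp(-(X @ w)))
        grad = X.T @ (pred - y) / len(y) + lam * w / len(y)
        w -= lr * grad
    return w

def auc(scores, labels):
    """AUC via the rank (Mann-Whitney) statistic: the probability that a
    random positive case outranks a random negative case."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

w = fit_ridge_logistic(X, y.astype(float))
score = X @ w
```

The penalty term is what "penalized" refers to in the abstract; it shrinks coefficients and stabilizes the fit when features are many or collinear.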
Optimization of computation efficiency in underwater acoustic navigation system.
Lee, Hua
2016-04-01
This paper presents a technique for the estimation of the relative bearing angle between the unmanned underwater vehicle (UUV) and the base station for homing and docking operations. The key requirements of this project include computational efficiency and estimation accuracy for direct implementation on the UUV electronic hardware, subject to the extreme constraints imposed by the hardware: the physical size and dimensions of the UUV housing, the electric power budget set by the required survey duration and range coverage, and heat dissipation. Subsequent to the design and development of the algorithm, two phases of experiments were conducted to illustrate the feasibility and capability of this technique. The presentation of this paper includes system modeling, mathematical analysis, and results from laboratory experiments and full-scale sea tests. PMID:27106337
Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations
Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra
2014-01-01
An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052
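As a crude point of comparison for the stress magnitudes involved, the textbook thin-walled (Laplace) estimate can be written down directly. This is far simpler than the paper's linear equilibrium model and the numbers below are illustrative, not patient data:

```python
# Thin-walled hoop-stress estimate for a roughly spherical sac:
# sigma = P * r / (2 * t). Illustrative values only.
P = 16000.0    # systolic pressure, Pa (~120 mmHg)
r = 0.025      # aneurysm radius, m (5 cm diameter)
t = 0.0019     # wall thickness, m
sigma = P * r / (2 * t)   # hoop stress, Pa (order 1e5 Pa)
```

Patient-specific models exist precisely because real aneurysm walls are neither thin, spherical, nor uniform, so local stresses can deviate strongly from this estimate.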
Computer-aided high-accuracy testing of reflective surface with reverse Hartmann test.
Wang, Daodang; Zhang, Sen; Wu, Rengmao; Huang, Chih Yu; Cheng, Hsiang-Nan; Liang, Rongguang
2016-08-22
Deflectometry provides a feasible way to test surfaces with a high dynamic range, and calibration is a key issue in the testing. A computer-aided testing method based on the reverse Hartmann test, a fringe-illumination deflectometry, is proposed for high-accuracy testing of reflective surfaces. The virtual "null" testing of surface error is achieved based on ray tracing of the modeled test system. The off-axis configuration of the test system places ultra-high requirements on the calibration of the system geometry. The system modeling error can introduce significant residual systematic error in the testing results, especially in the cases of a convex surface and a small working distance. A calibration method based on computer-aided reverse optimization with iterative ray tracing is proposed for the high-accuracy testing of reflective surfaces. Both computer simulation and experiments have been carried out to demonstrate the feasibility of the proposed measurement method, and good measurement accuracy has been achieved. The proposed method can achieve measurement accuracy comparable to the interferometric method, even with a large system geometry calibration error, providing a feasible way to address the uncertainty in the calibration of system geometry. PMID:27557245
Has the use of computers in radiation therapy improved the accuracy in radiation dose delivery?
NASA Astrophysics Data System (ADS)
Van Dyk, J.; Battista, J.
2014-03-01
Purpose: It is well recognized that computer technology has had a major impact on the practice of radiation oncology. This paper addresses the question as to how these computer advances have specifically impacted the accuracy of radiation dose delivery to the patient. Methods: A review was undertaken of all the key steps in the radiation treatment process ranging from machine calibration to patient treatment verification and irradiation. Using a semi-quantitative scale, each stage in the process was analysed from the point of view of gains in treatment accuracy. Results: Our critical review indicated that computerization related to digital medical imaging (ranging from target volume localization, to treatment planning, to image-guided treatment) has had the most significant impact on the accuracy of radiation treatment. Conversely, the premature adoption of intensity-modulated radiation therapy has actually degraded the accuracy of dose delivery compared to 3-D conformal radiation therapy. While computational power has improved dose calibration accuracy through Monte Carlo simulations of dosimeter response parameters, the overall impact in terms of percent improvement is relatively small compared to the improvements accrued from 3-D/4-D imaging. Conclusions: As a result of computer applications, we are better able to see and track the internal anatomy of the patient before, during and after treatment. This has yielded the most significant enhancement to the knowledge of "in vivo" dose distributions in the patient. Furthermore, a much richer set of 3-D/4-D co-registered dose-image data is thus becoming available for retrospective analysis of radiobiological and clinical responses.
A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning
NASA Astrophysics Data System (ADS)
Roth, John; Tummala, Murali; McEachen, John
2016-09-01
This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
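The role of the timing adjust as a cost reducer can be illustrated with a toy fingerprint localiser (invented grid, path-loss model, and timing-advance bin size; this is not the paper's scheme): the timing-advance value restricts the nearest-fingerprint search to a range ring around the serving base station, so far fewer candidates need to be scored.

```python
import math
import random

# Toy fingerprint database on a survey grid; RSS follows a log-distance
# path-loss model with survey noise. All parameters are illustrative.
random.seed(2)
BS = (0.0, 0.0)        # serving base-station position (assumed)
TA_BIN = 50.0          # metres per timing-advance step (assumed)

def rss(d):
    """Toy log-distance path loss, dBm."""
    return -40.0 - 30.0 * math.log10(max(d, 1.0))

grid = [(x, y) for x in range(0, 500, 25) for y in range(0, 500, 25)]
db = {p: rss(math.dist(p, BS)) + random.gauss(0, 1) for p in grid}

def locate(obs_rss, obs_ta, use_ta=True):
    """Return (best-matching grid point, number of candidates scored),
    optionally pre-filtering candidates by the timing-advance ring."""
    cands = grid
    if use_ta:
        lo, hi = obs_ta * TA_BIN, (obs_ta + 1) * TA_BIN
        cands = [p for p in grid if lo - 25 <= math.dist(p, BS) <= hi + 25]
    return min(cands, key=lambda p: abs(db[p] - obs_rss)), len(cands)

true_pos = (200, 150)
ta = int(math.dist(true_pos, BS) // TA_BIN)
est, n_checked = locate(rss(math.dist(true_pos, BS)), ta)
```

With `use_ta=False` the same query scans the full database; the pre-filter trades nothing away here because the true position always lies inside its own timing-advance ring.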
Efficient Computational Screening of Organic Polymer Photovoltaics.
Kanal, Ilana Y; Owens, Steven G; Bechtel, Jonathon S; Hutchison, Geoffrey R
2013-05-16
There has been increasing interest in rational, computationally driven design methods for materials, including organic photovoltaics (OPVs). Our approach focuses on a screening "pipeline", using a genetic algorithm for first stage screening and multiple filtering stages for further refinement. An important step forward is to expand our diversity of candidate compounds, including both synthetic and property-based measures of diversity. For example, top monomer pairs from our screening are all donor-donor (D-D) combinations, in contrast with the typical donor-acceptor (D-A) motif used in organic photovoltaics. We also find a strong "sequence effect", in which the average HOMO-LUMO gap of tetramers changes by ∼0.2 eV as a function of monomer sequence (e.g., ABBA versus BAAB); this has rarely been explored in conjugated polymers. Beyond such optoelectronic optimization, we discuss other properties needed for high-efficiency organic solar cells, and applications of screening methods to other areas, including non-fullerene n-type materials, tandem cells, and improving charge and exciton transport. PMID:26282968
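A first-stage genetic-algorithm screen of the kind described above can be sketched as follows. The "gap" here is a made-up, order-dependent surrogate over six fictitious monomers, not a chemical model; it only serves to show selection, crossover, and mutation over tetramer sequences:

```python
import random

# Surrogate "HOMO-LUMO gap": a fixed table over adjacent monomer pairs,
# so the total depends on monomer sequence (e.g. ABBA vs BAAB). Invented.
random.seed(3)
MONOMERS = "ABCDEF"
PAIR_GAP = {(a, b): 1.0 + 0.3 * ((ord(a) * 7 + ord(b) * 13) % 10) / 10
            for a in MONOMERS for b in MONOMERS}

def gap(seq):
    """Sequence-dependent surrogate gap: sum over adjacent monomer pairs."""
    return sum(PAIR_GAP[(x, y)] for x, y in zip(seq, seq[1:]))

def evolve(pop_size=40, gens=30):
    """Elitist GA minimising the surrogate gap over tetramer sequences."""
    pop = ["".join(random.choices(MONOMERS, k=4)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=gap)                      # best first
        elite = pop[: pop_size // 4]           # elitism: best quarter survives
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 4)       # one-point crossover
            child = list(a[:cut] + b[cut:])
            if random.random() < 0.2:          # point mutation
                child[random.randrange(4)] = random.choice(MONOMERS)
            children.append("".join(child))
        pop = elite + children
    return min(pop, key=gap)

best = evolve()
```

In a real pipeline the fitness call would be an electronic-structure calculation, which is why the GA stage is followed by progressively more expensive filters.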
Diagnostic Accuracy of Digital Screening Mammography with and without Computer-aided Detection
Lehman, Constance D.; Wellman, Robert D.; Buist, Diana S.M.; Kerlikowske, Karla; Tosteson, Anna N. A.; Miglioretti, Diana L.
2016-01-01
Importance After the Food and Drug Administration (FDA) approved computer-aided detection (CAD) for mammography in 1998, and Centers for Medicare and Medicaid Services (CMS) provided increased payment in 2002, CAD technology disseminated rapidly. Despite sparse evidence that CAD improves the accuracy of mammographic interpretation, and despite costing over $400 million a year, CAD is currently used for the majority of screening mammograms in the U.S. Objective To measure performance of digital screening mammography with and without computer-aided detection in U.S. community practice. Design, Setting and Participants We compared the accuracy of digital screening mammography interpreted with (N=495,818) vs. without (N=129,807) computer-aided detection from 2003 through 2009 in 323,973 women. Mammograms were interpreted by 271 radiologists from 66 facilities in the Breast Cancer Surveillance Consortium. Linkage with tumor registries identified 3,159 breast cancers in 323,973 women within one year of the screening. Main Outcomes and Measures Mammography performance (sensitivity, specificity, and screen detected and interval cancers per 1,000 women) was modeled using logistic regression with radiologist-specific random effects to account for correlation among examinations interpreted by the same radiologist, adjusting for patient age, race/ethnicity, time since prior mammogram, exam year, and registry. Conditional logistic regression was used to compare performance among 107 radiologists who interpreted mammograms both with and without computer-aided detection. Results Screening performance was not improved with computer-aided detection on any metric assessed. Mammography sensitivity was 85.3% (95% confidence interval [CI]=83.6–86.9) with and 87.3% (95% CI 84.5–89.7) without computer-aided detection. Specificity was 91.6% (95% CI=91.0–92.2) with and 91.4% (95% CI=90.6–92.0) without computer-aided detection. There was no difference in cancer detection rate (4
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is 1) developing highly accurate parallel numerical algorithms, 2) conducting preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporating newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high order accuracy in numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm
One high-accuracy camera calibration algorithm based on computer vision images
NASA Astrophysics Data System (ADS)
Wang, Ying; Huang, Jianming; Wei, Xiangquan
2015-12-01
Camera calibration is the first step of computer vision and one of the most active research fields nowadays. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. A high-accuracy camera calibration algorithm is therefore proposed based on images of planar or tridimensional targets. Using the algorithm, the internal parameters of the camera are calibrated from an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is obviously improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The algorithm proposed by the article can satisfy the needs of computer vision and provide a reference for precise measurement of relative position and attitude.
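To make the idea of calibrating internal parameters concrete, here is a deliberately simplified pinhole sketch under much stronger assumptions than any of the algorithms named above: a fronto-parallel planar target at known depth Z, known principal point, zero skew, and no distortion. Under those assumptions the focal lengths follow from one-dimensional least squares on u - cx = fx * X/Z and v - cy = fy * Y/Z:

```python
import numpy as np

# Synthetic planar-target observations; all parameter values are invented.
rng = np.random.default_rng(4)
fx_true, fy_true, cx, cy, Z = 800.0, 820.0, 320.0, 240.0, 2.0

X = rng.uniform(-0.5, 0.5, 50)     # target-point coordinates, metres
Y = rng.uniform(-0.5, 0.5, 50)
u = cx + fx_true * X / Z + rng.normal(0, 0.2, 50)   # noisy pixel obs.
v = cy + fy_true * Y / Z + rng.normal(0, 0.2, 50)

# One-dimensional least squares for each focal length
fx_est = np.sum((u - cx) * (X / Z)) / np.sum((X / Z) ** 2)
fy_est = np.sum((v - cy) * (Y / Z)) / np.sum((Y / Z) ** 2)
```

Real calibration methods jointly estimate the principal point, distortion, and the target pose, which is where the nonlinear optimization and the accuracy differences discussed in the abstract come in.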
High-accuracy computation of Delta V magnitude probability densities - Preliminary remarks
NASA Technical Reports Server (NTRS)
Chadwick, C.
1986-01-01
This paper describes an algorithm for the high accuracy computation of some statistical quantities of the magnitude of a random trajectory correction maneuver (TCM). The trajectory correction velocity increment Delta V is assumed to be a three-component random vector with each component being a normally distributed random scalar having a possibly nonzero mean. Knowledge of the statistical properties of the magnitude of a random TCM is important in the planning and execution of maneuver strategies for deep-space missions such as Galileo. The current algorithm involves the numerical integration of a set of differential equations. This approach allows the computation of density functions for specific Delta V magnitude distributions to high accuracy without first having to generate large numbers of random samples. Possible applications of the algorithm to maneuver planning, planetary quarantine evaluation, and guidance success probability calculations are described.
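The zero-mean, equal-variance special case of this problem has a closed form (the magnitude is Maxwell-distributed), which makes a convenient sanity check for any numerical scheme; the general nonzero-mean case is exactly where an ODE-integration approach like the paper's is needed. A Monte Carlo cross-check of the special case:

```python
import math
import random

# |Delta V| for V ~ N(0, sigma^2 I_3) follows the Maxwell distribution,
# whose CDF is known in closed form. Compare against plain Monte Carlo.
random.seed(5)

def maxwell_cdf(r, s):
    """P(|V| <= r) for V ~ N(0, s^2 I_3)."""
    x = r / s
    return math.erf(x / math.sqrt(2)) - math.sqrt(2 / math.pi) * x * math.exp(-x * x / 2)

def mc_cdf(r, s, n=100000):
    """Monte Carlo estimate of the same probability."""
    hits = 0
    for _ in range(n):
        v = [random.gauss(0, s) for _ in range(3)]
        if math.sqrt(sum(c * c for c in v)) <= r:
            hits += 1
    return hits / n
```

With nonzero component means the magnitude instead follows a noncentral (generalized Rice-type) distribution with no convenient elementary CDF, which motivates computing the density numerically rather than by sampling.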
Solving the Heterogeneous VHTR Core with Efficient Grid Computing
NASA Astrophysics Data System (ADS)
Connolly, Kevin John; Rahnema, Farzad
2014-06-01
This paper uses the coarse mesh transport method COMET to solve the eigenvalue and pin fission density distribution of the Very High Temperature Reactor (VHTR). It does this using the Boltzmann transport equation without such low-order approximations as diffusion, and it does not simplify the reactor core problem through homogenization techniques. This method is chosen as it makes highly efficient use of grid computing resources: it conducts a series of calculations at the block level using Monte Carlo to model the explicit geometry within the core without approximation, and compiles a compendium of data with the solution set. From there, it is able to solve the desired core configuration on a single processor in a fraction of the time necessary for whole-core deterministic or stochastic transport calculations. Thus, the method supplies a solution which has the accuracy of a whole-core Monte Carlo solution via the computing power available to the user. The core solved herein, the VHTR, was chosen due to its complexity. With a high level of detailed heterogeneity present from the core level to the pin level, and with asymmetric blocks and control material present outside of the fueled region of the core, this reactor geometry creates problems for methods which rely on homogenization or diffusion methods. Even transport methods find it challenging to solve. As it is desirable to reduce the number of assumptions necessary for a whole core calculation, this choice of reactor and solution method combination is an appropriate choice for a demonstration on an efficient use of grid computing.
ERIC Educational Resources Information Center
Dunn, Peter
2008-01-01
Quality encompasses a very broad range of ideas in learning materials, yet the accuracy of the content is often overlooked as a measure of quality. Various aspects of accuracy are briefly considered, and the issue of computational accuracy is then considered further. When learning materials are produced containing the results of mathematical…
The Comparison of Accuracy Scores on the Paper and Pencil Testing vs. Computer-Based Testing
ERIC Educational Resources Information Center
Retnawati, Heri
2015-01-01
This study aimed to compare the accuracy of the test scores as results of Test of English Proficiency (TOEP) based on paper and pencil test (PPT) versus computer-based test (CBT). Using the participants' responses to the PPT documented from 2008-2010 and data of CBT TOEP documented in 2013-2014 on the sets of 1A, 2A, and 3A for the Listening and…
Using additive manufacturing in accuracy evaluation of reconstructions from computed tomography.
Smith, Erin J; Anstey, Joseph A; Venne, Gabriel; Ellis, Randy E
2013-05-01
Bone models derived from patient imaging and fabricated using additive manufacturing technology have many potential uses including surgical planning, training, and research. This study evaluated the accuracy of bone surface reconstruction of two diarthrodial joints, the hip and shoulder, from computed tomography. Image segmentation of the tomographic series was used to develop a three-dimensional virtual model, which was fabricated using fused deposition modelling. Laser scanning was used to compare cadaver bones, printed models, and intermediate segmentations. The overall bone reconstruction process had a reproducibility of 0.3 ± 0.4 mm. Production of the model had an accuracy of 0.1 ± 0.1 mm, while the segmentation had an accuracy of 0.3 ± 0.4 mm, indicating that segmentation accuracy was the key factor in reconstruction. Generally, the shape of the articular surfaces was reproduced accurately, with poorer accuracy near the periphery of the articular surfaces, particularly in regions with periosteum covering and where osteophytes were apparent.
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.; Ryan, C. E.
1980-01-01
The transmit beam and radiation efficiency for 10 metersquare subarray panels were quantified. Measurement performance potential of far field elevated and ground reflection ranges and near field technique were evaluated. The state-of-the-art of critical components and/or unique facilities required was identified. Relative cost, complexity and performance tradeoffs were performed for techniques capable of achieving accuracy objectives. It is considered that because of the large electrical size of the SPS subarray panels and the requirement for high accuracy measurements, specialized measurement facilities are required. Most critical measurement error sources have been identified for both conventional far field and near field techniques. Although the adopted error budget requires advances in state-of-the-art of microwave instrumentation, the requirements appear feasible based on extrapolation from today's technology. Additional performance and cost tradeoffs need to be completed before the choice of the preferred measurement technique is finalized.
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.; Ryan, C. E.
1980-01-01
The relatively large apertures to be used in SPS, small half-power beamwidths, and the desire to accurately quantify antenna performance dictate the requirement for specialized measurement techniques. Objectives include the following: (1) For 10-meter square subarray panels, quantify considerations for measuring power in the transmit beam and radiation efficiency to + or - 1 percent (+ or - 0.04 dB) accuracy. (2) Evaluate measurement performance potential of far-field elevated and ground reflection ranges and near-field techniques. (3) Identify the state-of-the-art of critical components and/or unique facilities required. (4) Perform relative cost, complexity and performance tradeoffs for techniques capable of achieving accuracy objectives. The precision required by the techniques discussed below is not obtained by current methods, which are capable of + or - 10 percent (+ or - dB) performance. In virtually every area associated with these planned measurements, advances in state-of-the-art are required.
Efficient quantum computing using coherent photon conversion.
Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A
2011-10-12
Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting
Bolstad, Erin S. D.; Anderson, Amy C.
2008-01-01
Representing receptors as ensembles of protein conformations during docking is a powerful method to approximate protein flexibility and increase the accuracy of the resulting ranked list of compounds. Unfortunately, docking compounds against a large number of ensemble members can increase computational cost and time investment. In this manuscript, we present an efficient method to evaluate and select the most contributive ensemble members prior to docking for targets with a conserved core of residues that bind a ligand moiety. We observed that ensemble members that preserve the geometry of the active site core are most likely to place ligands in the active site with a conserved orientation, generally rank ligands correctly and increase interactions with the receptor. A relative distance approach is used to quantify the preservation of the three-dimensional interatomic distances of the conserved ligand-binding atoms and prune large ensembles quickly. In this study, we investigate dihydrofolate reductase as an example of a protein with a conserved core; however, this method for accurately selecting relevant ensemble members a priori can be applied to any system with a conserved ligand-binding core, including HIV-1 protease, kinases and acetylcholinesterase. Representing a drug target as a pruned ensemble during in silico screening should increase the accuracy and efficiency of high throughput analyses of lead analogs. PMID:18781587
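The relative-distance criterion can be sketched directly (synthetic coordinates standing in for conserved core atoms; these are not the authors' structures): each ensemble member is scored by how well it preserves the reference pairwise distances among the core atoms, and the worst-scoring members are pruned before docking.

```python
import math

# Synthetic "conserved core": four atom positions in a reference structure.
reference = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0),
             (1.9, 3.3, 0.0), (1.9, 1.1, 3.1)]

def pairwise(coords):
    """All interatomic distances among the core atoms."""
    return [math.dist(coords[i], coords[j])
            for i in range(len(coords)) for j in range(i + 1, len(coords))]

def core_deviation(member, ref=reference):
    """RMS deviation of the core interatomic distances from the reference."""
    dm, dr = pairwise(member), pairwise(ref)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(dm, dr)) / len(dr))

def prune(ensemble, keep=2):
    """Keep the members whose core geometry best matches the reference."""
    return sorted(ensemble, key=core_deviation)[:keep]
```

Because only interatomic distances are compared, a rigidly translated or rotated member scores a deviation of zero, as it should; only members whose core geometry is actually distorted get pruned.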
A computationally efficient Multicomponent Equilibrium Solver for Aerosols (MESA)
NASA Astrophysics Data System (ADS)
Zaveri, Rahul A.; Easter, Richard C.; Peters, Leonard K.
2005-12-01
Development and application of a new Multicomponent Equilibrium Solver for Aerosols (MESA) is described for systems containing H+, NH4+, Na+, Ca2+, SO42-, HSO4-, NO3-, and Cl- ions. The equilibrium solution is obtained by integrating a set of pseudo-transient ordinary differential equations describing the precipitation and dissolution reactions for all the possible salts to steady state. A comprehensive temperature dependent mutual deliquescence relative humidity (MDRH) parameterization is developed for all the possible salt mixtures, thereby eliminating the need for a rigorous numerical solution when ambient RH is less than MDRH(T). The solver is unconditionally stable, mass conserving, and shows robust convergence. Performance of MESA was evaluated against the Web-based AIM Model III, which served as a benchmark for accuracy, and the EQUISOLV II solver for speed. Important differences in the convergence and thermodynamic errors in MESA and EQUISOLV II are discussed. The average ratios of speeds of MESA over EQUISOLV II ranged between 1.4 and 5.8, with minimum and maximum ratios of 0.6 and 17, respectively. Because MESA directly diagnoses MDRH, it is significantly more efficient when RH < MDRH. MESA's superior performance is partially due to its "hard-wired" code for the present system as opposed to EQUISOLV II, which has a more generalized structure for solving any number and type of reactions at temperatures down to 190 K. These considerations suggest that MESA is highly attractive for use in 3-D aerosol/air-quality models for lower tropospheric applications (T > 240 K) in which both accuracy and computational efficiency are critical.
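The pseudo-transient approach described above can be illustrated for a single hypothetical salt AB(s) ⇌ A⁺ + B⁻: integrate a dissolution/precipitation rate equation forward in fictitious time until it stops changing, at which point either the solid is exhausted or the ion product equals the solubility product. The rate constant, step size, and function name are assumptions for this sketch, not MESA's actual numerics.

```python
def equilibrate_salt(total, ksp, k=1.0, dt=0.01, tol=1e-10, max_steps=200000):
    """Pseudo-transient relaxation for a single salt AB(s) <-> A+ + B-.
    `total` is the conserved amount of AB (solid + dissolved); the dissolved
    ion concentration is c = total - solid for each ion."""
    solid = total                              # start fully precipitated
    for _ in range(max_steps):
        c = total - solid
        rate = k * (c * c - ksp)               # >0: supersaturated -> precipitate
        if rate < 0 and solid <= 0:            # nothing left to dissolve
            rate = 0.0
        new_solid = min(max(solid + dt * rate, 0.0), total)  # mass conserving
        if abs(new_solid - solid) < tol:       # steady state reached
            return new_solid, total - new_solid
        solid = new_solid
    return solid, total - solid
```

At steady state with excess solid, c² = Ksp; with insufficient mass, the solid dissolves completely, mirroring the two regimes a rigorous equilibrium solver must distinguish.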
Quality and accuracy of cone beam computed tomography gated by active breathing control
Thompson, Bria P.; Hugo, Geoffrey D.
2008-12-15
The purpose of this study was to evaluate the quality and accuracy of cone beam computed tomography (CBCT) gated by active breathing control (ABC), which may be useful for image guidance in the presence of respiration. Comparisons were made between conventional ABC-CBCT (stop and go), fast ABC-CBCT (a method to speed up the acquisition by slowing the gantry instead of stopping during free breathing), and free breathing respiration correlated CBCT. Image quality was assessed in phantom. Accuracy of reconstructed voxel intensity, uniformity, and root mean square error were evaluated. Registration accuracy (bony and soft tissue) was quantified with both an anthropomorphic and a quality assurance phantom. Gantry angle accuracy was measured with respect to gantry speed modulation. Conventional ABC-CBCT scan time ranged from 2.3 to 5.8 min. Fast ABC-CBCT scan time ranged from 1.4 to 1.8 min, and respiratory correlated CBCT scans took 2.1 min to complete. Voxel intensity value for ABC gated scans was accurate relative to a normal clinical scan with all projections. Uniformity and root mean square error performance degraded as the number of projections used in the reconstruction of the fast ABC-CBCT scans decreased (shortest breath hold, longest free breathing segment). Registration accuracy for small, large, and rotational corrections was within 1 mm and 1°. Gantry angle accuracy was within 1° for all scans. For high-contrast targets, performance for image-guidance purposes was similar for fast and conventional ABC-CBCT scans and respiration correlated CBCT.
Zambrano, Eduardo; Šulc, Miroslav; Vaníček, Jiří
2013-08-07
Time-resolved electronic spectra can be obtained as the Fourier transform of a special type of time correlation function known as fidelity amplitude, which, in turn, can be evaluated approximately and efficiently with the dephasing representation. Here we improve both the accuracy of this approximation—with an amplitude correction derived from the phase-space propagator—and its efficiency—with an improved cellular scheme employing inverse Weierstrass transform and optimal scaling of the cell size. We demonstrate the advantages of the new methodology by computing dispersed time-resolved stimulated emission spectra in the harmonic potential, pyrazine, and the NCO molecule. In contrast, we show that in strongly chaotic systems such as the quartic oscillator the original dephasing representation is more appropriate than either the cellular or prefactor-corrected methods.
Accuracy of treatment planning based on stereolithography in computer assisted surgery.
Schicho, Kurt; Figl, Michael; Seemann, Rudolf; Ewers, Rolf; Lambrecht, J Thomas; Wagner, Arne; Watzinger, Franz; Baumann, Arnulf; Kainberger, Franz; Fruehwald, Julia; Klug, Clemens
2006-09-01
Three-dimensional stereolithographic models (SL models), made of solid acrylic resin derived from computed-tomography (CT) data, are an established tool for preoperative treatment planning in numerous fields of medicine. An innovative approach, combining stereolithography with computer-assisted point-to-point navigation, can support the precise surgical realization of a plan that has been defined on an SL model preoperatively. The essential prerequisites for the application of such an approach are: (1) The accuracy of the SL models (including accuracy of the CT scan and correspondence of the model with the patient's anatomy) and (2) the registration method used for the transfer of the plan from the SL model to the patient (i.e., whether the applied registration markers can be added to the SL model corresponding to the markers at the patient with an accuracy that keeps the "cumulative error" at the end of the chain of errors, in the order of the accuracy of contemporary navigation systems). In this study, we focus on these two topics: By applying image-matching techniques, we fuse the original CT data of the patient with the corresponding CT data of the scanned SL model, and measure the deviations of defined parameters (e.g., distances between anatomical points). To evaluate the registration method used for the planning transfer, we apply a point-merge algorithm, using four marker points that should be located at exactly corresponding positions at the patient and at connective bars that are added to the surface of the SL model. Again, deviations at defined anatomical structures are measured and analyzed statistically. Our results prove sufficient correspondence of the two data sets and accuracy of the registration method for routine clinical application. The evaluation of the SL model accuracy revealed an arithmetic mean of the relative deviations from 0.8% to 5.4%, with an overall mean deviation of 2.2%. Mean deviations of the investigated anatomical structures
Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis.
Litjens, Geert; Sánchez, Clara I; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen
2016-01-01
Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce 'deep learning' as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that 'deep learning' holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging. PMID:27212078
A computational approach for prediction of donor splice sites with improved accuracy.
Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R; Wahi, S D
2016-09-01
Identification of splice sites is important due to their key role in predicting the exon-intron structure of protein coding genes. Though several approaches have been developed for the prediction of splice sites, further improvement in the prediction accuracy will help predict gene structure more accurately. This paper presents a computational approach for prediction of donor splice sites with higher accuracy. In this approach, true and false splice sites were first encoded into numeric vectors and then used as input in artificial neural network (ANN), support vector machine (SVM) and random forest (RF) for prediction. ANN and SVM were found to perform equally and better than RF, while tested on HS3D and NN269 datasets. Further, the performance of ANN, SVM and RF were analyzed by using an independent test set of 50 genes and found that the prediction accuracy of ANN was higher than that of SVM and RF. All the predictors achieved higher accuracy while compared with the existing methods like NNsplice, MEM, MDD, WMM, MM1, FSPLICE, GeneID and ASSP, using the independent test set. We have also developed an online prediction server (PreDOSS) available at http://cabgrid.res.in:8080/predoss, for prediction of donor splice sites using the proposed approach. PMID:27302911
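Before any classifier (ANN, SVM or RF) can score candidate donor sites, the sequence windows must be turned into numeric vectors. The abstract does not specify the encoding; one common choice, shown here purely as an illustrative assumption, is a one-hot indicator per base, with candidates restricted to the canonical GT dinucleotide of GT-AG introns.

```python
BASES = "ACGT"

def encode_window(seq):
    """Map a DNA window to a flat numeric vector: 4 indicator values per base."""
    vec = []
    for base in seq.upper():
        vec.extend(1.0 if base == b else 0.0 for b in BASES)
    return vec

def is_candidate_donor(seq, intron_start):
    """True donor sites in GT-AG introns begin with the GT dinucleotide;
    positions failing this test need not be scored at all."""
    return seq[intron_start:intron_start + 2].upper() == "GT"
```

The resulting fixed-length vectors (4 x window length features) are what would be fed to the predictors; both true sites and GT-containing false sites get encoded the same way so the classifier must learn the surrounding context.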
Diagnostic accuracy of computed tomography in detecting adrenal metastasis from primary lung cancer
Allard, P.
1988-01-01
The main study objective was to estimate the diagnostic accuracy of computed tomography (CT) for detection of adrenal metastases from primary lung cancer. A secondary study objective was to measure intra-reader and inter-reader agreement in interpretation of adrenal CT. Results of CT film review were compared with the autopsy findings of the adrenal glands. A five-level CT reading scale was used to assess the effect of various positivity criteria. The diagnostic accuracy of CT for detection of adrenal metastases was characterized by a tradeoff between specificity and sensitivity. At various positivity criteria, high specificity is traded against low sensitivity. The inability of CT to detect many metastatic adrenals was related to frequent metastatic spread without morphologic changes of the gland.
Tensor scale: An analytic approach with efficient computation and applications☆
Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura
2015-01-01
Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as “tensor scale” using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing. Also, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. Also, an efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented and the accuracy of results is experimentally examined. Also, a matrix representation of tensor scale is derived facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented and the performance of their results is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert’s structure tensor based diffusive filtering algorithms. Also, the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148
Accuracy of measurements of mandibular anatomy in cone beam computed tomography images
Ludlow, John B.; Laster, William Stewart; See, Meit; Bailey, L’Tanya J.; Hershey, H. Garland
2013-01-01
Objectives Cone beam computed tomography (CBCT) images of ideally positioned and systematically mispositioned dry skulls were measured using two-dimensional and three-dimensional software measurement techniques. Image measurements were compared with caliper measurements of the skulls. Study design Cone beam computed tomography volumes of 28 skulls in ideal, shifted, and rotated positions were assessed by measuring distances between anatomic points and reference wires by using panoramic reconstructions (two-dimensional) and direct measurements from axial slices (three-dimensional). Differences between caliper measurements on skulls and software measurements in images were assessed with paired t tests and analysis of variance (ANOVA). Results Accuracy of measurement was not significantly affected by alterations in skull position or measurement of right or left sides. For easily visualized orthodontic wires, measurement accuracy was expressed by average errors less than 1.2% for two-dimensional measurement techniques and less than 0.6% for three-dimensional measurement techniques. Anatomic measurements were significantly more variable regardless of measurement technique. Conclusions Both two-dimensional and three-dimensional techniques provide acceptably accurate measurement of mandibular anatomy. Cone beam computed tomography measurement was not significantly influenced by variation in skull orientation during image acquisition. PMID:17395068
Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods.
Ogilvie, Huw A; Heled, Joseph; Xie, Dong; Drummond, Alexei J
2016-05-01
Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913
Efficient Computation Of Confidence Intervals Of Parameters
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.
1992-01-01
Study focuses on obtaining efficient algorithm for estimation of confidence intervals of ML estimates. Four algorithms selected to solve associated constrained optimization problem. Hybrid algorithms, following search and gradient approaches, prove best.
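The constrained-optimization problem behind such confidence intervals can be made concrete with a simpler stand-in: a likelihood-ratio interval for a binomial proportion, found by bisection rather than the hybrid search/gradient algorithms the abstract compares. Everything here (the binomial example, function names, the bisection solver) is an illustrative assumption, not the study's algorithm.

```python
import math

def binom_loglik(p, k, n):
    """Log-likelihood of proportion p after k successes in n trials (0 < k < n)."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def lr_confidence_interval(k, n, chi2_crit=3.841):
    """Likelihood-ratio CI: the set of p whose log-likelihood lies within
    chi2_crit/2 of the maximum (3.841 is the 95% chi-square critical value)."""
    p_hat = k / n
    target = binom_loglik(p_hat, k, n) - chi2_crit / 2.0

    def solve(lo, hi):
        # Bisection for binom_loglik == target; f(lo) < target < f(hi).
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if binom_loglik(mid, k, n) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    lower = solve(1e-9, p_hat)        # likelihood rises toward p_hat from below
    upper = solve(1.0 - 1e-9, p_hat)  # and from above
    return lower, upper
```

For ML estimates with many parameters the same idea becomes a constrained optimization (profile the likelihood over nuisance parameters at each trial boundary value), which is where efficient search and gradient algorithms pay off.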
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
Jin, Wen-Ying; Zhao, Xiu-Juan; Chen, Hong
2016-01-01
Background: Multislice computed tomography (MSCT) coronary angiography (CAG) is a noninvasive technique with a reported high diagnostic accuracy for coronary artery disease (CAD). Women, more frequently than men, are known to develop atypical angina symptoms. The purpose of this study was to investigate whether the diagnostic accuracy of MSCT in women with atypical presentation differs from that in men. Methods: We enrolled 396 in-hospital patients (141 women and 255 men) with suspected or proven CAD who successively underwent both MSCT and invasive CAG. CAD was defined as any coronary stenosis of ≥50% on conventional invasive CAG, which was used as the reference standard. The patients were divided into typical and atypical groups based on their symptoms of angina pectoris. The diagnostic accuracy of MSCT, including its sensitivity, specificity, negative predictive value, and positive predictive value (PPV), was calculated to determine the usefulness of MSCT in assessing stenoses. The diagnostic performance of MSCT was also assessed by constructing receiver operating characteristic (ROC) curves. Results: The PPV (91% vs. 97%, χ2 = 5.705, P < 0.05) and diagnostic accuracy (87% vs. 93%, χ2 = 5.093, P < 0.05) of MSCT in detecting CAD were lower in women than in men. Atypical presentation was an independent influencing factor on the diagnostic accuracy of MSCT in women (odds ratio = 4.94, 95% confidence interval: 1.16–20.92, Wald = 4.69, P < 0.05). Compared with those in the atypical group, women with typical angina pectoris had higher PPV (98% vs. 74%, χ2 = 17.283, P < 0.001), diagnostic accuracy (93% vs. 72%, χ2 = 9.571, P < 0.001), and area under the ROC curve (0.91 vs. 0.64, Z = 2.690, P < 0.01) in MSCT diagnosis. Conclusions: Although MSCT is a reliable diagnostic modality for the exclusion of significant coronary artery stenoses in all patients, gender and atypical symptoms might have some influence on its diagnostic accuracy. PMID:27625091
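The accuracy measures compared in the study above all derive from a 2x2 confusion table against the reference standard. A minimal sketch (the function name and the example counts are assumptions, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy measures from a 2x2 confusion table,
    with disease status defined by the reference standard (here, invasive CAG)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the study group, which is one reason subgroup comparisons (women vs. men, typical vs. atypical) can diverge.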
Accuracy of computer-aided template-guided oral implant placement: a prospective clinical study
2014-01-01
Purpose The aim of the present study was to evaluate the in vivo accuracy of flapless, computer-aided implant placement by comparing the three-dimensional (3D) position of planned and placed implants through an analysis of linear and angular deviations. Methods Implant position was virtually planned using 3D planning software based on the functional and aesthetic requirements of the final restorations. Computer-aided design/computer-assisted manufacture technology was used to transfer the virtual plan to the surgical environment. The 3D position of the planned and placed implants, in terms of the linear deviations of the implant head and apex and the angular deviations of the implant axis, was compared by overlapping the pre- and postoperative computed tomography scans using dedicated software. Results The comparison of 14 implants showed a mean linear deviation of the implant head of 0.56 mm (standard deviation [SD], 0.23), a mean linear deviation of the implant apex of 0.64 mm (SD, 0.29), and a mean angular deviation of the long axis of 2.42° (SD, 1.02). Conclusions In the present study, computer-aided flapless implant surgery seemed to provide several advantages to the clinicians as compared to the standard procedure; however, linear and angular deviations are to be expected. Therefore, accurate presurgical planning taking into account anatomical limitations and prosthetic demands is mandatory to ensure a predictable treatment, without incurring possible intra- and postoperative complications. PMID:25177520
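Once planned and placed implants are expressed in a common coordinate frame (after scan overlap), the reported deviations reduce to two elementary computations, sketched below with assumed function names:

```python
import math

def linear_deviation(planned, placed):
    """Euclidean distance (e.g. in mm) between corresponding 3-D points,
    evaluated separately for the implant head and apex."""
    return math.dist(planned, placed)

def angular_deviation(axis_planned, axis_placed):
    """Angle in degrees between the planned and placed implant long axes."""
    dot = sum(a * b for a, b in zip(axis_planned, axis_placed))
    norm = math.hypot(*axis_planned) * math.hypot(*axis_placed)
    # clamp guards against floating-point drift just outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

Head and apex deviations are reported separately because an angular error of the axis magnifies the linear error toward the apex.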
Efficient Parallel Engineering Computing on Linux Workstations
NASA Technical Reports Server (NTRS)
Lou, John Z.
2010-01-01
A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
Wong, Kent; Erdelyi, Bela; Schulte, Reinhard; Bashkirov, Vladimir; Coutrakon, George; Sadrozinski, Hartmut; Penfold, Scott; Rosenfeld, Anatoly
2009-03-10
Maintaining a high degree of spatial resolution in proton computed tomography (pCT) is a challenge due to the statistical nature of the proton path through the object. Recent work has focused on the formulation of the most likely path (MLP) of protons through a homogeneous water object and the accuracy of this approach has been tested experimentally with a homogeneous PMMA phantom. Inhomogeneities inside the phantom, consisting of, for example, air and bone will lead to unavoidable inaccuracies of this approach. The purpose of this ongoing work is to characterize systematic errors that are introduced by regions of bone and air density and how this affects the accuracy of proton CT in surrounding voxels both in terms of spatial and density reconstruction accuracy. Phantoms containing tissue-equivalent inhomogeneities have been designed and proton transport through them has been simulated with the GEANT 4.9.0 Monte Carlo tool kit. Various iterative reconstruction techniques, including the classical fully sequential algebraic reconstruction technique (ART) and block-iterative techniques, are currently being tested, and we will select the most accurate method for this study.
NASA Astrophysics Data System (ADS)
Tan, Sirui; Huang, Lianjie
2014-05-01
For modelling large-scale 3-D scalar-wave propagation, the finite-difference (FD) method with high-order accuracy in space but second-order accuracy in time is widely used because of its relatively low requirements of computer memory. We develop a novel staggered-grid (SG) FD method with high-order accuracy not only in space, but also in time, for solving 2- and 3-D scalar-wave equations. We determine the coefficients of the FD operator in the joint time-space domain to achieve high-order accuracy in time while preserving high-order accuracy in space. Our new FD scheme is based on a stencil that contains a few more grid points than the standard stencil. It is 2M-th-order accurate in space and fourth-order accurate in time when using the same 2M grid points along each axis and wavefields at one time step as the standard SGFD method. We validate the accuracy and efficiency of our new FD scheme using dispersion analysis and numerical modelling of scalar-wave propagation in 2- and 3-D complex models with a wide range of velocity contrasts. For media with a velocity contrast up to five, our new FD scheme is approximately two times more computationally efficient than the standard SGFD scheme with almost the same computer-memory requirement as the latter. Further numerical experiments demonstrate that our new FD scheme loses its advantages over the standard SGFD scheme if the velocity contrast is 10. However, for most large-scale geophysical applications, the velocity contrasts often range approximately from 1 to 3. Our new method is thus particularly useful for large-scale 3-D scalar-wave modelling and full-waveform inversion.
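The baseline the paper improves on can be shown in one dimension: the standard scheme for the scalar-wave equation u_tt = c² u_xx that is fourth-order in space but only second-order in time. This sketch is the conventional reference scheme, not the paper's joint time-space method, which instead tunes the spatial coefficients so the overall scheme becomes fourth-order accurate in time as well.

```python
def fd_step(u_prev, u_curr, c, dt, dx):
    """One time step of the standard 1-D scalar-wave FD scheme:
    second-order leapfrog in time, fourth-order central differences in space."""
    # fourth-order coefficients for u_xx: (-1/12, 4/3, -5/2, 4/3, -1/12) / dx^2
    r2 = (c * dt / dx) ** 2
    n = len(u_curr)
    u_next = list(u_curr)          # boundary points are left unchanged here
    for i in range(2, n - 2):
        lap = (-u_curr[i - 2] + 16.0 * u_curr[i - 1] - 30.0 * u_curr[i]
               + 16.0 * u_curr[i + 1] - u_curr[i + 2]) / 12.0
        u_next[i] = 2.0 * u_curr[i] - u_prev[i] + r2 * lap
    return u_next
```

The second-order temporal truncation error of this update is exactly what forces small time steps in large 3-D runs; absorbing higher-order time corrections into the spatial stencil, as the paper does, relaxes that constraint without enlarging memory use.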
NASA Technical Reports Server (NTRS)
Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.
2002-01-01
BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation, and therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously in bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader to the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader. The use
Efficient Computation Of Manipulator Inertia Matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1991-01-01
Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.
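The composite-rigid-body idea behind fast inertia-matrix algorithms can be illustrated in a deliberately reduced setting. The sketch below (all names illustrative, not the authors' algorithm) uses a serial chain of prismatic joints sliding along a single axis, so each link's spatial inertia collapses to a scalar mass and the tip-to-base accumulation of composite inertia becomes a running sum:

```python
def inertia_matrix(masses):
    """Joint-space inertia matrix via composite rigid bodies, for a
    chain of collinear prismatic joints.

    Working inward from the chain tip, the composite mass of links
    j..n-1 is accumulated once; entry M[i][j] (i <= j) is simply the
    composite mass supported beyond joint j.
    """
    n = len(masses)
    M = [[0.0] * n for _ in range(n)]
    composite = 0.0
    for j in range(n - 1, -1, -1):      # sweep tip -> base
        composite += masses[j]          # composite rigid body j..n-1
        for i in range(j + 1):          # fill column j up to the diagonal
            M[i][j] = composite
            M[j][i] = composite         # the inertia matrix is symmetric
    return M
```

In the general spatial case the scalar masses become 6x6 spatial inertia matrices and the columns are propagated across joints, but the inward accumulation of composite inertia is the same.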
Ippolito, Davide; Drago, Silvia Girolama; Franzesi, Cammillo Talei; Fior, Davide; Sironi, Sandro
2016-01-01
AIM: To assess the diagnostic accuracy of multidetector-row computed tomography (MDCT), as compared with conventional magnetic resonance imaging (MRI), in identifying mesorectal fascia (MRF) invasion in rectal cancer patients. METHODS: Ninety-one patients with biopsy-proven rectal adenocarcinoma referred for thoracic and abdominal CT staging were enrolled in this study. The contrast-enhanced MDCT scans were performed on a 256-row scanner (iCT, Philips) with the following acquisition parameters: tube voltage 120 kV, tube current 150-300 mAs. Imaging data were reviewed as axial and as multiplanar reconstruction (MPR) images along the rectal tumor axis. The MRI study, performed at 1.5 T with a dedicated phased-array multicoil, included multiplanar T2 and axial T1 sequences and diffusion-weighted images (DWI). Axial and MPR CT images were independently compared to MRI, and MRF involvement was determined. The diagnostic accuracy of both modalities was compared and statistically analyzed. RESULTS: According to MRI, the MRF was involved in 51 patients and not involved in 40 patients. DWI allowed recognition of the tumor as a focal mass with high signal intensity on high b-value images, compared with the signal of the normal adjacent rectal wall or with the lower tissue signal intensity background. The number of patients correctly staged by the native axial CT images was 71 of 91 (41 with involved MRF; 30 with uninvolved MRF), while with MPR 80 patients were correctly staged (45 with involved MRF; 35 with uninvolved MRF). Local tumor staging suggested by MDCT agreed with that of MRI, yielding for axial CT images a sensitivity and specificity of 80.4% and 75%, positive predictive value (PPV) of 80.4%, negative predictive value (NPV) of 75%, and accuracy of 78%; with MPR, sensitivity and specificity increased to 88% and 87.5%, PPV was 90%, NPV 85.36%, and accuracy 88%. MPR images showed higher diagnostic accuracy, in terms of MRF involvement, than native axial images.
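The reported figures follow directly from the stated confusion counts (51 involved / 40 uninvolved MRF; 41 and 30 correctly staged on axial images). A quick sketch of the standard definitions reproduces them:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard diagnostic accuracy metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
    }

# Axial CT reading: 41/51 involved and 30/40 uninvolved MRF correctly staged
axial = diagnostic_metrics(tp=41, fn=10, fp=10, tn=30)
```

Running this gives a sensitivity of 41/51 ≈ 80.4% and an accuracy of 71/91 ≈ 78%, matching the abstract.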
Student accuracy and evaluation of a computer-based audience response system.
Holmes, Robert G; Blalock, John S; Parker, Merle H; Haywood, Van B
2006-12-01
We have incorporated an audience response system into our curriculum to increase student interaction in the teaching process. Classroom Performance System (CPS) is a computer-based audience response system that allows students to answer questions posed to the entire class by entering responses on a keypad. The responses are tallied and displayed on the classroom screen for all students to see. The purpose of our study was to determine student accuracy using the system with three different methods of administering questions. A secondary purpose was to assess students' perceptions about using the system. Our hypothesis for the study was that there should be no difference in volunteer accuracy or questionnaire responses to the three methods of gathering responses. Sixty-two dental students volunteered to participate. Using three methods (projected on a screen, verbal, and written), volunteers were given "responses" to enter into the system using CPS wireless remote answering devices. In the projected and verbal formats, the teacher managed the assessment by controlling the pace of input. In the written format, students were given responses on paper to input into the system at their own pace. At the end of the sessions, volunteers completed an anonymous questionnaire regarding their experiences with the system. The accuracy of responses was similar in the teacher-managed assessments (projected and verbal formats). There was a statistically significant difference in the accuracy of responses in the student-managed assessment (p < 0.000001). Questionnaire responses also showed that students preferred teacher-managed assessments. The hypothesis was disproved. The overall response to this audience response system and its methods of gathering information was very positive.
Evaluation of the Accuracy of Computer-Guided Mandibular Fracture Reduction.
el-Gengehi, Mostafa; Seif, Sameh A
2015-07-01
The aim of the current study was to evaluate the accuracy of computer-guided mandibular fracture reduction. A total of 24 patients with fractured mandibles were included in the current study. A preoperative cone beam computed tomography (CBCT) scan was performed on all of the patients. Based on the CBCT, three-dimensional reconstruction and virtual reduction of the mandibular fracture segments were done, and a virtual bone-borne surgical guide was designed and exported as a Standard Tessellation Language file. A physical guide was then fabricated using a three-dimensional printing machine. Open reduction and internal fixation was done for all of the patients, and the fracture segments were anatomically reduced with the aid of the custom-fabricated surgical guide. Postoperative CBCT was performed after 7 days, the results of which were compared with the virtually reduced preoperative mandibular models. Comparison of the values of lingula-sagittal plane, inferior border-sagittal plane, and anteroposterior measurements revealed no statistically significant differences between the virtual and the clinically reduced CBCT models. Based on the results of the current study, computer-based surgical guides aid in obtaining accurate anatomical reduction of displaced mandibular fracture segments. Moreover, the computer-based surgical guides were found to be beneficial in reducing fractures of completely and partially edentulous mandibles.
NASA Astrophysics Data System (ADS)
Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David
2008-03-01
In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (Az value) as a performance index, we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. We then developed and tested a hybrid region growth algorithm that combined the topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the growth region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between the two areas segmented automatically and manually to less than +/-15% for all testing regions, and the testing Az value increases from 0.63 to 0.90. The results indicate that CAD performance heavily depends on the accuracy of mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.
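The stopping rule of the hybrid algorithm, terminating at the first maximum of the CAD likelihood score, can be sketched as follows (a simplified stand-in for the actual implementation, operating on a precomputed score sequence rather than live region growth):

```python
def grow_until_first_max(cad_scores):
    """Return the growth iteration at which region growth stops: the
    first point where the CAD likelihood score stops increasing
    (the first local maximum of the score sequence)."""
    best = None                          # (step, score) of running maximum
    for step, score in enumerate(cad_scores):
        if best is not None and score < best[1]:
            return best[0]               # previous step was the first maximum
        if best is None or score > best[1]:
            best = (step, score)
    return best[0]                       # scores never decreased
```

For a score sequence like [0.2, 0.5, 0.7, 0.6, 0.8] the growth stops at iteration 2, ignoring the later, larger peak, which is exactly the "first maximum" behavior the abstract describes.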
Accuracy and efficiency of detection dogs: a powerful new tool for koala conservation and management
Cristescu, Romane H.; Foley, Emily; Markula, Anna; Jackson, Gary; Jones, Darryl; Frère, Céline
2015-01-01
Accurate data on presence/absence and spatial distribution for fauna species is key to their conservation. Collecting such data, however, can be time consuming, laborious and costly, in particular for fauna species characterised by low densities, large home ranges, cryptic or elusive behaviour. For such species, including koalas (Phascolarctos cinereus), indicators of species presence can be a useful shortcut: faecal pellets (scats), for instance, are widely used. Scat surveys are not without their difficulties and often contain a high false negative rate. We used experimental and field-based trials to investigate the accuracy and efficiency of the first dog specifically trained for koala scats. The detection dog consistently out-performed human-only teams. Off-leash, the dog detection rate was 100%. The dog was also 19 times more efficient than current scat survey methods and 153% more accurate (the dog found koala scats where the human-only team did not). This clearly demonstrates that the use of detection dogs decreases false negatives and survey time, thus allowing for a significant improvement in the quality and quantity of data collection. Given these unequivocal results, we argue that to improve koala conservation, detection dog surveys for koala scats could in the future replace human-only teams. PMID:25666691
Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis
Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen
2016-01-01
Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging. PMID:27212078
Skyline View: Efficient Distributed Subspace Skyline Computation
NASA Astrophysics Data System (ADS)
Kim, Jinhan; Lee, Jongwuk; Hwang, Seung-Won
Skyline queries have gained much attention as an alternative query semantics with pros (e.g., low query formulation overhead) and cons (e.g., lack of control over result size). To overcome the cons, subspace skyline queries have recently been studied, where users iteratively specify relevant feature subspaces on the search space. However, existing works mainly focus on centralized databases. This paper aims to extend subspace skyline computation to distributed environments such as the Web, where the most important issue is to minimize the cost of accessing vertically distributed objects. Toward this goal, we exploit prior skylines whose subspaces overlap the given subspace. In particular, we develop algorithms for three scenarios: when the subspace of prior skylines is a superspace of the given subspace, a subspace of it, or otherwise. Our experimental results validate that our proposed algorithm shows significantly better performance than the state-of-the-art algorithms.
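The core primitive in any skyline computation is the dominance test restricted to the chosen subspace. A minimal, naive sketch (illustrative only; the paper's distributed algorithms exist precisely to avoid this all-pairs cost over vertically distributed data):

```python
def dominates(a, b, dims):
    """True if point a dominates point b on the given feature subspace:
    a is no worse in every dimension and strictly better in at least one
    (lower values preferred)."""
    return (all(a[d] <= b[d] for d in dims)
            and any(a[d] < b[d] for d in dims))

def subspace_skyline(points, dims):
    """Naive O(n^2) skyline restricted to a subspace of dimensions:
    keep every point not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p, dims) for q in points)]
```

Note how the skyline changes with the subspace: a point dominated on one feature subspace may be a skyline point on another, which is why prior skylines with overlapping subspaces are useful but not directly reusable.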
NASA Technical Reports Server (NTRS)
White, C. W.
1981-01-01
The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.
A computable expression of closure to efficient causation.
Mossio, Matteo; Longo, Giuseppe; Stewart, John
2009-04-01
In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some crucial obstacles to computability.
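As a purely illustrative aside (not the authors' construction), the kind of self-referential closure expressible in lambda-calculus is commonly demonstrated with a fixed-point combinator; here the call-by-value Z combinator, embedded in Python's lambda fragment, lets a function refer to itself without ever being named:

```python
# Call-by-value fixed-point (Z) combinator, written purely with lambdas.
# It produces a fixed point of any functional f: Z(f) == f(Z(f)).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Example: factorial defined via self-reference supplied by Z,
# with no recursive name binding anywhere.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
```

That such circular, self-producing definitions are directly expressible in a paradigmatic model of computation is the point the abstract makes about simulating closure to efficient causation.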
Duality quantum computer and the efficient quantum simulations
NASA Astrophysics Data System (ADS)
Wei, Shi-Jie; Long, Gui-Lu
2016-03-01
Duality quantum computing is a new mode of a quantum computer to simulate a moving quantum computer passing through a multi-slit. It exploits the particle wave duality property for computing. A quantum computer with n qubits and a qudit simulates a moving quantum computer with n qubits passing through a d-slit. Duality quantum computing can realize an arbitrary sum of unitaries and therefore a general quantum operator, which is called a generalized quantum gate. All linear bounded operators can be realized by the generalized quantum gates, and unitary operators are just the extreme points of the set of generalized quantum gates. Duality quantum computing provides flexibility and a clear physical picture in designing quantum algorithms, and serves as a powerful bridge between quantum and classical algorithms. In this paper, after a brief review of the theory of duality quantum computing, we will concentrate on the applications of duality quantum computing in simulations of Hamiltonian systems. We will show that duality quantum computing can efficiently simulate quantum systems by providing descriptions of the recent efficient quantum simulation algorithm of Childs and Wiebe (Quantum Inf Comput 12(11-12):901-924, 2012) for the fast simulation of quantum systems with a sparse Hamiltonian, and the quantum simulation algorithm by Berry et al. (Phys Rev Lett 114:090502, 2015), which provides exponential improvement in precision for simulating systems with a sparse Hamiltonian.
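The "generalized quantum gate" as a linear combination of unitaries can be checked numerically on a classical machine. The sketch below (a classical illustration, not a quantum implementation) builds the non-unitary gate A = (I + X)/2, a projector onto the |+> state, from two unitaries:

```python
import math

def mat_vec(m, v):
    """Apply a 2x2 matrix (nested lists) to a 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

I = [[1.0, 0.0], [0.0, 1.0]]     # identity gate (unitary)
X = [[0.0, 1.0], [1.0, 0.0]]     # Pauli-X gate (unitary)

# Generalized gate: an equal-weight sum of the two unitaries.
# (I + X)/2 is not unitary -- it is the projector onto |+>.
A = [[0.5 * (I[r][c] + X[r][c]) for c in range(2)] for r in range(2)]

out = mat_vec(A, [1.0, 0.0])                     # A|0> = (|0> + |1>)/2
norm = math.sqrt(sum(x * x for x in out))
state = [x / norm for x in out]                  # renormalize: the |+> state
```

The renormalization step mirrors the fact that a physical realization of such a sum of unitaries succeeds only with some probability (here, via the ancillary qudit and "slits" of the duality picture).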
NASA Astrophysics Data System (ADS)
Lam, Walter Y. H.; Ngan, Henry Y. T.; Wat, Peter Y. P.; Luk, Henry W. K.; Goto, Tazuko K.; Pow, Edmond H. N.
2015-02-01
Medical radiography is the use of radiation to "see through" a human body without breaching its integrity (surface). With computed tomography (CT)/cone beam computed tomography (CBCT), three-dimensional (3D) imaging can be produced. These images not only facilitate disease diagnosis but also enable computer-aided surgical planning/navigation. In dentistry, the common method for transferring the virtual surgical planning to the patient (reality) is a surgical stent, either with a preloaded plan (static), like a channel, or with real-time surgical navigation (dynamic) after registration with fiducial markers (RF). This paper describes using the corner of a cube as a radiopaque fiducial marker on an acrylic (plastic) stent; this RF allows robust calibration and registration of Cartesian (x, y, z) coordinates for linking the patient (reality) and the imaging (virtuality), so that the surgical planning can be transferred in either a static or a dynamic way. The accuracy of computer-aided implant surgery was measured with reference to these coordinates. In our preliminary model surgery, a dental implant was planned virtually and placed with a preloaded surgical guide. The deviation of the placed implant apex from the planning was x=+0.56 mm [more right], y=-0.05 mm [deeper], z=-0.26 mm [more lingual], which was within the clinical 2 mm safety range. For comparison with the virtual planning, the physically placed implant was CT/CBCT scanned, and errors may be introduced in this step. The difference of the actual implant apex from the virtual apex was x=0.00 mm, y=+0.21 mm [shallower], z=-1.35 mm [more lingual], and this should be borne in mind when interpreting the results.
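The coordinate-based accuracy check reduces to a componentwise difference and a Euclidean norm. A minimal sketch using the reported apex deviation (function names illustrative):

```python
import math

def deviation(planned, placed):
    """Componentwise and Euclidean deviation between a virtually planned
    implant apex and the physically placed apex (coordinates in mm)."""
    delta = tuple(b - a for a, b in zip(planned, placed))
    return delta, math.sqrt(sum(d * d for d in delta))

# Reported apex deviation from the model surgery (mm), planned at origin
delta, dist = deviation((0.0, 0.0, 0.0), (0.56, -0.05, -0.26))
within_safety = dist <= 2.0        # the clinical 2 mm safety range
```

The Euclidean deviation here is about 0.62 mm, comfortably inside the stated 2 mm margin, though the abstract's per-axis signs carry clinical meaning (right/deep/lingual) that the scalar norm discards.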
Texture functions in image analysis: A computationally efficient solution
NASA Technical Reports Server (NTRS)
Cox, S. C.; Rose, J. F.
1983-01-01
A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
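The key point, computing co-occurrence-based texture statistics from running sums rather than a stored co-occurrence matrix, can be sketched as follows (an illustrative reconstruction, not the authors' code): contrast and correlation are accumulated directly from offset pixel pairs using only sums, sums of squares, and cross products.

```python
def cooccurrence_stats(image, dx=1, dy=0):
    """Texture statistics over pixel co-occurrences at offset (dx, dy),
    computed directly from the image without materializing a
    co-occurrence matrix. Only running sums, sums of squares and
    cross products are kept."""
    n = s_i = s_j = ss_i = ss_j = cp = contrast = 0
    rows, cols = len(image), len(image[0])
    for r in range(rows - dy):
        for c in range(cols - dx):
            i, j = image[r][c], image[r + dy][c + dx]   # pixel pair
            n += 1
            s_i += i;  s_j += j
            ss_i += i * i;  ss_j += j * j
            cp += i * j
            contrast += (i - j) ** 2
    mean_i, mean_j = s_i / n, s_j / n
    var_i = ss_i / n - mean_i ** 2
    var_j = ss_j / n - mean_j ** 2
    denom = (var_i * var_j) ** 0.5 or 1.0     # guard: flat image
    corr = (cp / n - mean_i * mean_j) / denom
    return {"contrast": contrast / n, "correlation": corr}
```

For a G-level image this avoids the O(G^2) storage of the full matrix entirely, which is the efficiency the abstract claims; statistics defined on matrix entries themselves (e.g., entropy) would still need the counts.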
Computationally efficient Bayesian inference for inverse problems.
Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.
2007-10-01
Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
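The surrogate-acceleration idea can be caricatured in a few lines (a toy lookup surrogate and random-walk Metropolis, not the paper's stochastic spectral formulation): the expensive forward model is evaluated offline on a coarse grid, and the sampler only ever touches the cheap surrogate.

```python
import math
import random

def forward(m):                         # stand-in "expensive" forward model
    return m ** 3

# Offline stage: evaluate the forward model once on a coarse grid.
grid = [i * 0.1 for i in range(-30, 31)]
table = {round(g, 1): forward(g) for g in grid}

def surrogate(m):                       # cheap nearest-grid-point surrogate
    return table[round(min(max(m, -3.0), 3.0), 1)]

def log_post(m, datum, sigma=0.1):      # Gaussian likelihood, flat prior
    r = datum - surrogate(m)
    return -0.5 * (r / sigma) ** 2

def metropolis(datum, steps=4000, seed=2):
    """Random-walk Metropolis on the surrogate posterior."""
    random.seed(seed)
    m, lp = 0.0, log_post(0.0, datum)
    samples = []
    for _ in range(steps):
        cand = m + random.gauss(0.0, 0.3)
        lp_c = log_post(cand, datum)
        if math.log(random.random() + 1e-300) < lp_c - lp:
            m, lp = cand, lp_c
        samples.append(m)
    return samples
```

Every MCMC step costs a dictionary lookup instead of a forward solve; the paper's spectral surrogate plays the same role with far better accuracy than this nearest-neighbour table.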
Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
Waitzman, A A; Posnick, J C; Armstrong, D C; Pron, G E
1992-03-01
Computed tomography (CT) is a useful modality for the management of craniofacial anomalies. A study was undertaken to assess whether CT measurements of the upper craniofacial skeleton accurately represent the bony region imaged. Measurements taken directly from five dry skulls (approximate ages: adults, over 18 years; child, 4 years; infant, 6 months) were compared to those from axial CT scans of these skulls. Excellent agreement was found between the direct (dry skull) and indirect (CT) measurements. The effect of head tilt on the accuracy of these measurements was investigated. The error was within clinically acceptable limits (less than 5 percent) if the angle was no more than +/- 4 degrees from baseline (0 degrees). Objective standardized information gained from CT should complement the subjective clinical data usually collected for the treatment of craniofacial deformities. PMID:1571344
To address accuracy and precision using methods from analytical chemistry and computational physics.
Kozmutza, Cornelia; Picó, Yolanda
2009-04-01
In this work the pesticides were determined by liquid chromatography-mass spectrometry (LC-MS). In the present study the occurrence of imidacloprid in 343 samples of oranges, tangerines, date plum, and watermelons from the Valencian Community (Spain) has been investigated. Nine additional pesticides were chosen as they have been recommended for orchard treatment together with imidacloprid. Mulliken population analysis has been applied to present the charge distribution in imidacloprid. Partitioned energy terms and the virial ratios have been calculated for certain molecules entering into interaction. A new technique based on the comparison of the decomposed total-energy terms at various configurations is demonstrated in this work. The interaction ability could be established correctly in the studied case. An attempt is also made in this work to address accuracy and precision. These quantities are well known in experimental measurements. If a precise theoretical description is achieved for the contributing monomers and also for the interacting complex structure, some properties of the latter system can be predicted to quite good accuracy. Based on simple hypothetical considerations, we estimate the impact of applying computations on reducing the amount of analytical work.
Ortuño, Francisco M.; Valenzuela, Olga; Pomares, Hector; Rojas, Fernando; Florido, Javier P.; Urquiza, Jose M.
2013-01-01
Multiple sequence alignments (MSAs) have become one of the most studied approaches in bioinformatics to perform other outstanding tasks such as structure prediction, biological function analysis or next-generation sequencing. However, current MSA algorithms do not always provide consistent solutions, since alignments become increasingly difficult when dealing with low similarity sequences. As widely known, these algorithms directly depend on specific features of the sequences, causing relevant influence on the alignment accuracy. Many MSA tools have been recently designed but it is not possible to know in advance which one is the most suitable for a particular set of sequences. In this work, we analyze some of the most used algorithms presented in the bibliography and their dependences on several features. A novel intelligent algorithm based on least square support vector machine is then developed to predict how accurate each alignment could be, depending on its analyzed features. This algorithm is performed with a dataset of 2180 MSAs. The proposed system first estimates the accuracy of possible alignments. The most promising methodologies are then selected in order to align each set of sequences. Since only one selected algorithm is run, the computational time is not excessively increased. PMID:23066102
Computationally efficient finite element evaluation of natural patellofemoral mechanics.
Fitzpatrick, Clare K; Baldwin, Mark A; Rullkoetter, Paul J
2010-12-01
Finite element methods have been applied to evaluate in vivo joint behavior, new devices, and surgical techniques but have typically been applied to a small or single subject cohort. Anatomic variability necessitates the use of many subject-specific models or probabilistic methods in order to adequately evaluate a device or procedure for a population. However, a fully deformable finite element model can be computationally expensive, prohibiting large multisubject or probabilistic analyses. The aim of this study was to develop a group of subject-specific models of the patellofemoral joint and evaluate trade-offs in analysis time and accuracy with fully deformable and rigid body articular cartilage representations. Finite element models of eight subjects were used to tune a pressure-overclosure relationship during a simulated deep flexion cycle. Patellofemoral kinematics and contact mechanics were evaluated and compared between a fully deformable and a rigid body analysis. Additional eight subjects were used to determine the validity of the rigid body pressure-overclosure relationship as a subject-independent parameter. There was good agreement in predicted kinematics and contact mechanics between deformable and rigid analyses for both the tuned and test groups. Root mean square differences in kinematics were less than 0.5 deg and 0.2 mm for both groups throughout flexion. Differences in contact area and peak and average contact pressures averaged 5.4%, 9.6%, and 3.8%, respectively, for the tuned group and 6.9%, 13.1%, and 6.4%, respectively, for the test group, with no significant differences between the two groups. There was a 95% reduction in computational time with the rigid body analysis as compared with the deformable analysis. The tuned pressure-overclosure relationship derived from the patellofemoral analysis was also applied to tibiofemoral (TF) articular cartilage in a group of eight subjects. Differences in contact area and peak and average contact
Earthquake detection through computationally efficient similarity search.
Yoon, Clara E; O'Reilly, Ossian; Bergen, Karianne J; Beroza, Gregory C
2015-12-01
Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection-identification of seismic events in continuous data-is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact "fingerprints" of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
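The fingerprint-and-bucket strategy that lets FAST avoid all-pairs comparison can be sketched at toy scale (illustrative only, not the FAST code): waveform windows are reduced to compact binary fingerprints, fingerprints are bucketed by hashed bands, and only windows sharing a bucket are ever compared.

```python
from collections import defaultdict

def fingerprint(window):
    """Compact binary fingerprint: the sign pattern of successive
    differences of a waveform window (a crude discriminative feature)."""
    return tuple(1 if window[i + 1] > window[i] else 0
                 for i in range(len(window) - 1))

def find_similar_pairs(windows, band=3):
    """LSH-style candidate generation: bucket fingerprints by bands,
    then verify full-fingerprint equality only within buckets."""
    fps = [fingerprint(w) for w in windows]
    buckets = defaultdict(list)
    for idx, fp in enumerate(fps):
        for start in range(0, len(fp) - band + 1, band):
            buckets[(start, fp[start:start + band])].append(idx)
    pairs = set()
    for ids in buckets.values():
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                if fps[ids[a]] == fps[ids[b]]:    # verification step
                    pairs.add((ids[a], ids[b]))
    return pairs
```

The cost scales with the number of colliding candidates rather than the number of window pairs, which is the source of FAST's speedup over autocorrelation; the real system uses spectrogram-derived fingerprints and min-hash bands rather than this sign pattern.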
NASA Astrophysics Data System (ADS)
Thomson, C. J.
2005-10-01
Several observations are made concerning the numerical implementation of wide-angle one-way wave equations, using for illustration scalar waves obeying the Helmholtz equation in two space dimensions. This simple case permits clear identification of a sequence of physically motivated approximations of use when the mathematically exact pseudo-differential operator (PSDO) one-way method is applied. As intuition suggests, these approximations largely depend on the medium gradients in the direction transverse to the main propagation direction. A key point is that narrow-angle approximations are to be avoided in the interests of accuracy. Another key consideration stems from the fact that the so-called `standard-ordering' PSDO indicates how lateral interpolation of the velocity structure can significantly reduce computational costs associated with the Fourier or plane-wave synthesis lying at the heart of the calculations. A third important point is that the PSDO theory shows what approximations are necessary in order to generate an exponential one-way propagator for the laterally varying case, representing the intuitive extension of classical integral-transform solutions for a laterally homogeneous medium. This exponential propagator permits larger forward stepsizes. Numerical comparisons with Helmholtz (i.e. full) wave-equation finite-difference solutions are presented for various canonical problems. These include propagation along an interfacial gradient, the effects of a compact inclusion and the formation of extended transmitted and backscattered wave trains by model roughness. The ideas extend to the 3-D, generally anisotropic case and to multiple scattering by invariant embedding. It is concluded that the method is very competitive, striking a new balance between simplifying approximations and computational labour. Complicated wave-scattering effects are retained without the need for expensive global solutions, providing a robust and flexible modelling tool.
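The homogeneous-medium limit of such a one-way propagator can be illustrated with a split-step scheme: exact phase advance in the transverse Fourier domain, followed by a phase screen for the local deviation of the wavenumber from a reference value. This is a simplified sketch of the general idea, not Thomson's PSDO implementation:

```python
import numpy as np

def one_way_step(u, dx, kx, k_local, k0):
    """One split step: exact homogeneous propagation in the transverse Fourier
    domain (evanescent components decay), then a phase screen for the local
    deviation of the wavenumber from the reference k0."""
    kz = np.sqrt((k0**2 - kx**2).astype(complex))   # vertical wavenumber
    u = np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dx))
    return u * np.exp(1j * (k_local - k0) * dx)

# Sanity check in a homogeneous medium: a normally incident plane wave should
# simply accumulate the phase exp(i * k0 * dx).
n, L, k0, dx = 256, 1000.0, 0.05, 10.0
kx = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
u0 = np.ones(n, dtype=complex)
u1 = one_way_step(u0, dx, kx, np.full(n, k0), k0)
```

The exponential propagator discussed in the abstract generalizes this idea to laterally varying media, which is what permits the larger forward step sizes.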
Computer-aided analysis of star shot films for high-accuracy radiation therapy treatment units
NASA Astrophysics Data System (ADS)
Depuydt, Tom; Penne, Rudi; Verellen, Dirk; Hrbacek, Jan; Lang, Stephanie; Leysen, Katrien; Vandevondel, Iwein; Poels, Kenneth; Reynders, Truus; Gevaert, Thierry; Duchateau, Michael; Tournel, Koen; Boussaer, Marlies; Cosentino, Dorian; Garibaldi, Cristina; Solberg, Timothy; De Ridder, Mark
2012-05-01
As mechanical stability of radiation therapy treatment devices has gone beyond sub-millimeter levels, there is a rising demand for simple yet highly accurate measurement techniques to support the routine quality control of these devices. A combination of using high-resolution radiosensitive film and computer-aided analysis could provide an answer. One generally known technique is the acquisition of star shot films to determine the mechanical stability of rotations of gantries and the therapeutic beam. With computer-aided analysis, mechanical performance can be quantified as a radiation isocenter radius size. In this work, computer-aided analysis of star shot film is further refined by applying an analytical solution for the smallest intersecting circle problem, in contrast to the gradient optimization approaches used until today. An algorithm is presented and subjected to a performance test using two different types of radiosensitive film, the Kodak EDR2 radiographic film and the ISP EBT2 radiochromic film. Artificial star shots with a priori known radiation isocenter size are used to determine the systematic errors introduced by the digitization of the film and the computer analysis. The estimated uncertainty on the isocenter size measurement with the presented technique was 0.04 mm (2σ) and 0.06 mm (2σ) for radiographic and radiochromic films, respectively. As an application of the technique, a study was conducted to compare the mechanical stability of O-ring gantry systems with C-arm-based gantries. In total ten systems of five different institutions were included in this study and star shots were acquired for gantry, collimator, ring, couch rotations and gantry wobble. It was not possible to draw general conclusions about differences in mechanical performance between O-ring and C-arm gantry systems, mainly due to differences in the beam-MLC alignment procedure accuracy. Nevertheless, the best performing O-ring system in this study, a BrainLab/MHI Vero system
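The smallest-intersecting-circle idea can be stated as a minimax problem: find the center minimizing the maximum perpendicular distance to the star-shot beam lines; that maximum distance is the isocenter radius. The sketch below uses a simple shrinking compass search rather than the paper's analytical solution, and the three beam lines are hypothetical:

```python
import numpy as np

def line_distance(c, p, d):
    """Perpendicular distance from point c to the 2-D line through p with unit direction d."""
    v = c - p
    return abs(d[0] * v[1] - d[1] * v[0])

def isocenter(lines, span=10.0, tol=1e-6):
    """Center and radius of the smallest circle intersecting every line, found
    by minimizing the maximum point-to-line distance with a shrinking compass search."""
    radius = lambda c: max(line_distance(c, p, d) for p, d in lines)
    c = np.zeros(2)
    best = radius(c)
    moves = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    while span > tol:
        improved = False
        for m in moves:
            cand = c + span * m
            r = radius(cand)
            if r < best - 1e-12:
                c, best, improved = cand, r, True
        if not improved:
            span *= 0.5
    return c, best

# Three hypothetical beam axes: two horizontal lines (y = 1, y = 3) and one
# vertical line (x = 2); the smallest intersecting circle has radius 1.
lines = [(np.array([0.0, 1.0]), np.array([1.0, 0.0])),
         (np.array([2.0, 0.0]), np.array([0.0, 1.0])),
         (np.array([0.0, 3.0]), np.array([1.0, 0.0]))]
center, r_iso = isocenter(lines)
```
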
Efficiently modeling neural networks on massively parallel computers
Farber, R.M.
1992-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper will describe the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors. We can efficiently model very large neural networks which have many neurons and interconnects and our mapping can be extended to arbitrarily large networks by merging the memory space of separate processors with fast adjacent-processor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendible to recurrent networks.
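The communication pattern described, purely local computation plus a single global summation, can be sketched for one feed-forward layer. Here the input dimension is partitioned across hypothetical processors and the only "communication" is the final reduction; this is an illustration of the mapping idea, not the Los Alamos compiler:

```python
import numpy as np

def forward_layer_partitioned(x_parts, w_parts, bias):
    """Feed-forward layer with the input dimension partitioned across
    'processors': each holds a slice of x and of W, computes a partial
    product locally, and a single global summation combines them."""
    partial = [w.T @ x for x, w in zip(x_parts, w_parts)]  # local, no communication
    z = np.sum(partial, axis=0) + bias                     # the only global reduction
    return np.tanh(z)

rng = np.random.default_rng(1)
x = rng.normal(size=8)
W = rng.normal(size=(8, 3))
b = np.zeros(3)

# Split across 4 hypothetical processors and compare with the serial result.
out_parallel = forward_layer_partitioned(np.split(x, 4), np.split(W, 4), b)
out_serial = np.tanh(W.T @ x + b)
```
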
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
Hatano, Aya; Ueno, Taiji; Kitagami, Shinji; Kawaguchi, Jun
2015-01-01
Verbal overshadowing refers to a phenomenon whereby verbalization of non-verbal stimuli (e.g., facial features) during the maintenance phase (after the target information is no longer available from the sensory inputs) impairs subsequent non-verbal recognition accuracy. Two primary mechanisms have been proposed for verbal overshadowing, namely the recoding interference hypothesis, and the transfer-inappropriate processing shift. The former assumes that verbalization renders non-verbal representations less accurate. In contrast, the latter assumes that verbalization shifts processing operations to a verbal mode and increases the chance of failing to return to non-verbal, face-specific processing operations (i.e., intact, yet inaccessible non-verbal representations). To date, certain psychological phenomena have been advocated as inconsistent with the recoding-interference hypothesis. These include a decline in non-verbal memory performance following verbalization of non-target faces, and occasional failures to detect a significant correlation between the accuracy of verbal descriptions and the non-verbal memory performance. Contrary to these arguments against the recoding interference hypothesis, however, the present computational model instantiated core processing principles of the recoding interference hypothesis to simulate face recognition, and nonetheless successfully reproduced these behavioral phenomena, as well as the standard verbal overshadowing. These results demonstrate the plausibility of the recoding interference hypothesis to account for verbal overshadowing, and suggest there is no need to implement separable mechanisms (e.g., operation-specific representations, different processing principles, etc.). In addition, detailed inspections of the internal processing of the model clarified how verbalization rendered internal representations less accurate and how such representations led to reduced recognition accuracy, thereby offering a computationally
Efficiency and Accuracy of Bernese Periacetabular Osteotomy for Adult Hip Dysplasia
Luo, Dian‐zhong; Xiao, Kai; Cheng, Hui
2015-01-01
Bernese periacetabular osteotomy (PAO) has several advantages in treating adolescent and adult acetabular dysplasia. The authors introduce the details and steps of performing PAO, with an attached video and schematic diagram demonstrating an efficient and accurate PAO. The patient is an 18‐year‐old girl complaining of hip pain on the left side for 6 months. Physical examination shows normal gait and range of motion (ROM) of the left hip. Pelvic anteroposterior X‐ray shows acetabular dysplasia on the left side and the postoperative result on the right. She was very satisfied with the PAO performed on the right hip one year earlier, so we recommended PAO again for the left hip dysplasia. The key points of PAO comprise four cuts: the ischial cut, pubic cut, acetabular roof cut, and quadrilateral bone cut, and all four cuts should be accomplished accurately. The acetabular fragment should then be rotated to the ideal position, with a lateral CE angle (LCE) > 25°, a Tönnis acetabular angle of 0°, an anterior CE angle (ACE) > 20°, good joint congruence, and the hip center medialized slightly. Lastly, the acetabular fragment is fixed with proper nails and instruments. The patient is very satisfied with the surgery, with no hip pain and normal gait, ROM, and Harris hip score (HHS). In summary, PAO is a relatively new and efficient procedure for adult hip dysplasia that requires accurate technique. Cadaveric practice and familiarity with the local anatomy can help the surgeon overcome the learning curve quickly. PMID:26791326
Geng, Wei; Liu, Changying; Su, Yucheng; Li, Jun; Zhou, Yanmin
2015-01-01
Purpose: To evaluate the clinical outcomes of implants placed using different types of computer-aided design/computer-aided manufacturing (CAD/CAM) surgical guides, including partially guided and totally guided templates, and determine the accuracy of these guides. Materials and Methods: In total, 111 implants were placed in 24 patients using CAD/CAM surgical guides. After implant insertion, the positions and angulations of the placed implants relative to those of the planned ones were determined using special software that matched pre- and postoperative computed tomography (CT) images, and deviations were calculated and compared between the different guides and templates. Results: The mean angular deviations were 1.72° ± 1.67° and 2.71° ± 2.58°, the mean deviations in position at the neck were 0.27 ± 0.24 and 0.69 ± 0.66 mm, the mean deviations in position at the apex were 0.37 ± 0.35 and 0.94 ± 0.75 mm, and the mean depth deviations were 0.32 ± 0.32 and 0.51 ± 0.48 mm with tooth- and mucosa-supported stereolithographic guides, respectively (P < .05 for all). The mean distance deviations when partially guided (29 implants) and totally guided templates (30 implants) were used were 0.54 ± 0.50 mm and 0.89 ± 0.78 mm, respectively, at the neck and 1.10 ± 0.85 mm and 0.81 ± 0.64 mm, respectively, at the apex, with corresponding mean angular deviations of 2.56 ± 2.23° and 2.90 ± 3.0° (P > .05 for all). Conclusions: Tooth-supported surgical guides may be more accurate than mucosa-supported guides, while both partially and totally guided templates can simplify surgery and aid in optimal implant placement. PMID:26309497
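The reported deviation measures can be reproduced from planned and placed implant positions. A minimal sketch with hypothetical coordinates in mm; the angular deviation is the 3-D angle between the neck-to-apex axes:

```python
import numpy as np

def implant_deviations(planned_neck, planned_apex, placed_neck, placed_apex):
    """Positional deviations at the neck and apex (mm) plus the 3-D angle
    (degrees) between the planned and placed implant axes."""
    a = np.asarray(planned_apex) - np.asarray(planned_neck)   # planned axis
    b = np.asarray(placed_apex) - np.asarray(placed_neck)     # placed axis
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return {"neck_mm": float(np.linalg.norm(np.subtract(placed_neck, planned_neck))),
            "apex_mm": float(np.linalg.norm(np.subtract(placed_apex, planned_apex))),
            "angle_deg": float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))}

# Hypothetical case: 0.3 mm lateral shift at the neck plus a tilt of the apex.
d = implant_deviations([0, 0, 0], [0, 0, 10], [0.3, 0, 0], [0.3, 1.0, 9.9])
```
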
An efficient method for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.
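The composite rigid-body idea, accumulating the inertia of all outboard links inward, reduces to a particularly transparent form for a serial chain of prismatic joints sharing one axis, where the spatial inertia collapses to a scalar mass. The toy sketch below illustrates that structure only; it is not the full spatial-algebra method of the paper:

```python
import numpy as np

def inertia_matrix_prismatic_chain(masses):
    """Composite rigid-body idea in its simplest setting: for a serial chain of
    prismatic joints on one common axis, the composite mass of links i..n is
    accumulated outward-in, and M[i, j] is the composite mass past max(i, j)."""
    n = len(masses)
    composite = np.cumsum(masses[::-1])[::-1]   # composite[i] = sum(masses[i:])
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = composite[max(i, j)]
    return M

# Kinetic energy check: T = 1/2 sum_k m_k (sum_{i<=k} qdot_i)^2 gives exactly
# this matrix, so M is symmetric and each joint sees its outboard composite mass.
M = inertia_matrix_prismatic_chain([1.0, 2.0, 3.0])
```
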
The accuracy of computational fluid dynamics analysis of the passive drag of a male swimmer.
Bixler, Barry; Pease, David; Fairhurst, Fiona
2007-01-01
The aim of this study was to build an accurate computer-based model to study the water flow and drag force characteristics around and acting upon the human body while in a submerged streamlined position. Comparisons of total drag force were performed between an actual swimmer, a virtual computational fluid dynamics (CFD) model of the swimmer, and an actual mannequin based on the virtual model. Drag forces were determined for velocities between 1.5 m/s and 2.25 m/s (representative of the velocities demonstrated in elite competition). The drag forces calculated from the virtual model using CFD were found to be within 4% of the experimentally determined values for the mannequin. The mannequin drag was found to be 18% less than the drag of the swimmer at each velocity examined. This study has determined the accuracy of using CFD for the analysis of the hydrodynamics of swimming and has allowed for the improved understanding of the relative contributions of various forms of drag to the total drag force experienced by submerged swimmers.
NASA Astrophysics Data System (ADS)
McGah, Patrick; Levitt, Michael; Barbour, Michael; Mourad, Pierre; Kim, Louis; Aliseda, Alberto
2013-11-01
We study the hemodynamic conditions in patients with cerebral aneurysms through endovascular measurements and computational fluid dynamics. Ten unruptured cerebral aneurysms were clinically assessed by three dimensional rotational angiography and an endovascular guidewire with dual Doppler ultrasound transducer and piezoresistive pressure sensor at multiple peri-aneurysmal locations. These measurements are used to define boundary conditions for flow simulations at and near the aneurysms. The additional in vivo measurements, which were not prescribed in the simulation, are used to assess the accuracy of the simulated flow velocity and pressure. We also performed simulations with stereotypical literature-derived boundary conditions. Simulated velocities using patient-specific boundary conditions showed good agreement with the guidewire measurements, with no systematic bias and a random scatter of about 25%. Simulated velocities using the literature-derived values showed a systematic over-prediction in velocity by 30% with a random scatter of about 40%. Computational hemodynamics using endovascularly-derived patient-specific boundary conditions have the potential to improve treatment predictions as they provide more accurate and precise results of the aneurysmal hemodynamics. Supported by an R03 grant from NIH/NINDS
Iafolla, Marco AJ; Dong, Guang Qiang; McMillen, David R
2008-01-01
Background Simulating the major molecular events inside an Escherichia coli cell can lead to a very large number of reactions that compose its overall behaviour. Not only should the model be accurate, but it is imperative for the experimenter to create an efficient model to obtain the results in a timely fashion. Here, we show that for many parameter regimes, the effect of the host cell genome on the transcription of a gene from a plasmid-borne promoter is negligible, allowing one to simulate the system more efficiently by removing the computational load associated with representing the presence of the rest of the genome. The key parameter is the on-rate of RNAP binding to the promoter (k_on), and we compare the total number of transcripts produced from a plasmid vector, as a function of this rate constant, for two versions of our gene expression model, one incorporating the host cell genome and one excluding it. By sweeping parameters, we identify the k_on range for which the difference between the genome and no-genome models drops below 5%, over a wide range of doubling times, mRNA degradation rates, plasmid copy numbers, and gene lengths. Results We assess the effect of simulating the presence of the genome over a four-dimensional parameter space, considering: 24 min <= bacterial doubling time <= 100 min; 10 <= plasmid copy number <= 1000; 2 min <= mRNA half-life <= 14 min; and 10 bp <= gene length <= 10000 bp. A simple MATLAB user interface generates an interpolated k_on threshold for any point in this range; this rate can be compared to the ones used in other transcription studies to assess the need for including the genome. Conclusion Exclusion of the genome is shown to yield less than 5% difference in transcript numbers over wide ranges of values, and computational speed is improved by two to 24 times by excluding explicit representation of the genome. PMID:18789148
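The genome/no-genome comparison can be caricatured with a small Gillespie simulation: free RNAP binds a single plasmid promoter at rate k_on, and the genome, when present, acts as a competing nonspecific sink. All rate values and copy numbers below are illustrative assumptions, not the paper's parameters:

```python
import random

def transcripts(k_on, with_genome, t_end=200.0, seed=0):
    """Minimal Gillespie sketch (not the paper's full model): free RNAP binds a
    single-copy plasmid promoter, fires a transcript on release, and optionally
    competes with a nonspecific genome sink."""
    rng = random.Random(seed)
    rnap, bound, sunk, mrna, t = 30, 0, 0, 0, 0.0
    k_fire, k_sink, k_free = 0.5, 0.02, 0.1   # assumed illustrative rates
    while t < t_end:
        rates = [k_on * rnap * (bound == 0),   # RNAP binds the free promoter
                 k_fire * bound,               # transcript made, RNAP released
                 k_sink * rnap * with_genome,  # nonspecific genome binding
                 k_free * sunk]                # release from the genome
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)
        r = rng.uniform(0, total)
        if r < rates[0]:
            rnap, bound = rnap - 1, 1
        elif r < rates[0] + rates[1]:
            bound, rnap, mrna = 0, rnap + 1, mrna + 1
        elif r < rates[0] + rates[1] + rates[2]:
            rnap, sunk = rnap - 1, sunk + 1
        else:
            sunk, rnap = sunk - 1, rnap + 1
    return mrna

no_genome = transcripts(0.01, with_genome=0)
genome = transcripts(0.01, with_genome=1)
```

Sweeping k_on and comparing the two transcript counts mimics, in miniature, the threshold analysis described in the abstract.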
Accuracy of Cone Beam Computed Tomography for Detection of Bone Loss
Goodarzi Pour, Daryoush; Soleimani Shayesteh, Yadollah
2015-01-01
Objectives: Bone assessment is essential for diagnosis, treatment planning and prediction of prognosis of periodontal diseases. However, two-dimensional radiographic techniques have multiple limitations, mainly addressed by the introduction of three-dimensional imaging techniques such as cone beam computed tomography (CBCT). This study aimed to assess the accuracy of CBCT for detection of marginal bone loss in patients receiving dental implants. Materials and Methods: A study of diagnostic test accuracy was designed and 38 teeth from candidates for dental implant treatment were selected. On CBCT scans, the amount of bone resorption in the buccal, lingual/palatal, mesial and distal surfaces was determined by measuring the distance from the cementoenamel junction to the alveolar crest (normal group: 0–1.5mm, mild bone loss: 1.6–3mm, moderate bone loss: 3.1–4.5mm and severe bone loss: >4.5mm). During the surgical phase, bone loss was measured at the same sites using a periodontal probe. The values were then compared by McNemar’s test. Results: In the buccal, lingual/palatal, mesial and distal surfaces, no significant difference was observed between the values obtained using CBCT and the surgical method. The correlation between CBCT and surgical method was mainly based on the estimation of the degree of bone resorption. CBCT was capable of showing various levels of resorption in all surfaces with high sensitivity, specificity, positive predictive value and negative predictive value compared to the surgical method. Conclusion: CBCT enables accurate measurement of bone loss comparable to surgical exploration and can be used for diagnosis of bone defects in periodontal diseases in clinical settings. PMID:26877741
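The severity bands used in the study map directly onto a small classification helper (distances in mm from the cementoenamel junction to the alveolar crest):

```python
def classify_bone_loss(cej_to_crest_mm):
    """Severity categories from the study: normal 0-1.5 mm, mild 1.6-3 mm,
    moderate 3.1-4.5 mm, severe > 4.5 mm."""
    if cej_to_crest_mm <= 1.5:
        return "normal"
    if cej_to_crest_mm <= 3.0:
        return "mild"
    if cej_to_crest_mm <= 4.5:
        return "moderate"
    return "severe"
```

Applying the same helper to the CBCT measurement and to the intraoperative probe measurement at each surface yields the paired categories compared by McNemar's test.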
Choi, Yoo Jin; Kim, Kyung Su; Suh, Gil Joon; Kwon, Woon Yong
2016-01-01
Objective This study compared the diagnostic accuracy of computed tomography (CT) angiography in patients with various severities of gastrointestinal hemorrhage (GIH). Methods We retrospectively enrolled adult patients (n=262) with GIH who had undergone CT angiography from January 2012 to December 2013. Age, sex, comorbidities, presenting symptoms, initial vital signs, laboratory results, transfusion volume, emergency department disposition, and hospital mortality were abstracted from patient records. CT angiography findings were reviewed and compared to reference standards consisting of endoscopy, conventional angiography, bleeding scan, capsule endoscopy, and surgery, either alone or in combination. Clinical severity was stratified according to the number of packed red blood cell units transfused during the first two days: the first quartile was categorized as mild severity, while the second and third quartiles were categorized as moderate severity. The fourth quartile was categorized as severe. Results Patients were categorized into the mild (n=75, 28.6%), moderate (n=139, 53.1%), and severe (n=48, 18.3%) groups. The mean number of transfused packed red blood cell units was 0, 3, and 9.6 in the mild, moderate, and severe groups, respectively. The overall sensitivity, specificity, positive predictive value, and negative predictive value of CT angiography were 73.8%, 94.0%, 97.3%, and 55.3%, respectively. The area under the receiver operating characteristics curve for the diagnostic performance of CT angiography was 0.780, 0.841, and 0.930 in the mild, moderate, and severe groups, respectively, which significantly differed among groups (P=0.006). Conclusion The diagnostic accuracy of CT angiography is better in patients with more severe GIH. PMID:27752620
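The reported measures follow from a standard 2x2 contingency table. A sketch with hypothetical counts (chosen here to give 90% sensitivity and 94% specificity, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 contingency table."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

# Hypothetical counts: 90 of 100 bleeds detected, 94 of 100 non-bleeds negative.
m = diagnostic_metrics(tp=90, fp=6, tn=94, fn=10)
```
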
Li, Chao-Jui; Syue, Yuan-Jhen; Tsai, Tsung-Cheng; Wu, Kuan-Han; Lee, Chien-Hung; Lin, Yan-Ren
2016-02-01
The ability of emergency physicians (EPs) to continue within the specialty has been called into question due to high stress in emergency departments (EDs). The purpose of this study was to investigate the impact of EP seniority on clinical performance. A retrospective, 1-year cohort study was conducted across 3 EDs in the largest health-care system in Taiwan. Participants included 44,383 adult nontrauma patients who presented to the EDs. Physicians were categorized as junior, intermediate, and senior EPs according to ≤5, 6 to 10, and >10 years of ED work experience. The door-to-order and door-to-disposition times were used to evaluate EP efficiency. Emergency department resource use indicators included diagnostic investigations of electrocardiography, plain film radiography, laboratory tests, and computed tomography scans. Discharge and mortality rates were used as patient outcomes. Disposition accuracy was evaluated by ED revisit rate. Senior EPs were found to have longer door-to-order (11.3, 12.4 minutes) and door-to-disposition (2, 1.7 hours) times than nonsenior EPs for urgent and nonurgent patients (junior: 9.4, 10.2 minutes and 1.7, 1.5 hours; intermediate: 9.5, 10.7 minutes and 1.7, 1.5 hours). Senior EPs tended to order fewer electrocardiograms, radiographs, and computed tomography scans in nonurgent patients. Adjusting for age, sex, disease acuity, and medical setting, patients treated by junior and intermediate EPs had higher mortality in the ED (adjusted odds ratios, 1.5 and 1.6, respectively). Compared with EPs with ≤10 years of work experience, senior EPs take more time for order prescription and patient disposition, use fewer diagnostic investigations, particularly for nonurgent patients, and are associated with a lower ED mortality rate.
Balancing accuracy, robustness, and efficiency in simulations of coupled magma/mantle dynamics
NASA Astrophysics Data System (ADS)
Katz, R. F.
2011-12-01
Magmatism plays a central role in many Earth-science problems, and is particularly important for the chemical evolution of the mantle. The standard theory for coupled magma/mantle dynamics is fundamentally multi-physical, comprising mass and force balance for two phases, plus conservation of energy and composition in a two-component (minimum) thermochemical system. The tight coupling of these various aspects of the physics makes obtaining numerical solutions a significant challenge. Previous authors have advanced by making drastic simplifications, but these have limited applicability. Here I discuss progress, enabled by advanced numerical software libraries, in obtaining numerical solutions to the full system of governing equations. The goals in developing the code are as usual: accuracy of solutions, robustness of the simulation to non-linearities, and efficiency of code execution. I use the cutting-edge example of magma genesis and migration in a heterogeneous mantle to elucidate these issues. I describe the approximations employed and their consequences, as a means to frame the question of where and how to make improvements. I conclude that the capabilities needed to advance multi-physics simulation are, in part, distinct from those of problems with weaker coupling, or fewer coupled equations. Chief among these distinct requirements is the need to dynamically adjust the solution algorithm to maintain robustness in the face of coupled nonlinearities that would otherwise inhibit convergence. This may mean introducing Picard iteration rather than full coupling, switching between semi-implicit and explicit time-stepping, or adaptively increasing the strength of preconditioners. All of these can be accomplished by the user with, for example, PETSc. Formalising this adaptivity should be a goal for future development of software packages that seek to enable multi-physics simulation.
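The Picard iteration mentioned as a robustness fallback can be sketched on a toy nonlinear system A(x) x = b: freeze the solution-dependent coefficients, solve the resulting linear system, and repeat. The coefficient matrix below is an arbitrary illustrative choice, not the magma/mantle equations:

```python
import numpy as np

def picard_solve(coeff, b, x0, tol=1e-10, max_iter=200):
    """Picard (fixed-point) iteration for A(x) x = b: lag the nonlinearity,
    solve the frozen linear system, repeat. Trades Newton's fast local
    convergence for robustness far from the solution."""
    x = x0.copy()
    for _ in range(max_iter):
        A = coeff(x)                      # linearize by lagging the nonlinearity
        x_new = np.linalg.solve(A, b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy nonlinear, weakly coupled system: diagonally dominant, so the
# fixed-point map is a contraction and Picard converges.
coeff = lambda x: np.array([[2 + 0.1 * x[0]**2, 0.1],
                            [0.1, 2 + 0.1 * x[1]**2]])
b = np.array([1.0, 1.0])
x = picard_solve(coeff, b, np.zeros(2))
residual = np.linalg.norm(coeff(x) @ x - b)
```

Switching between this kind of lagged iteration and a fully coupled Newton step, as the abstract suggests, is one way to adapt the solver when nonlinearities inhibit convergence.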
Probabilistic structural analysis algorithm development for computational efficiency
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1991-01-01
The PSAM (Probabilistic Structural Analysis Methods) program is developing a probabilistic structural risk assessment capability for the SSME components. An advanced probabilistic structural analysis software system, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), is being developed as part of the PSAM effort to accurately simulate stochastic structures operating under severe random loading conditions. One of the challenges in developing the NESSUS system is the development of probabilistic algorithms that provide both efficiency and accuracy. The main probability algorithms developed and implemented in the NESSUS system are efficient, but approximate in nature. Over the last six years, the algorithms have been improved significantly.
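A baseline against which such approximate probability algorithms are judged is plain Monte Carlo sampling of a limit-state function. The sketch below estimates P[g(X) < 0] for a linear limit state with a known exact answer; it illustrates the efficiency/accuracy trade-off only and is not one of the NESSUS algorithms:

```python
import numpy as np

def pf_monte_carlo(g, n_samples=200_000, seed=0):
    """Estimate the failure probability P[g(X) < 0] for independent
    standard-normal inputs by direct Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, 2))
    return float(np.mean(g(x) < 0.0))

# Linear limit state with reliability index beta = 3, so the exact failure
# probability is Phi(-3), about 1.35e-3.
g = lambda x: 3.0 - (x[:, 0] + x[:, 1]) / np.sqrt(2.0)
pf = pf_monte_carlo(g)
```

The cost of resolving such small probabilities by sampling is what motivates the faster approximate methods the abstract refers to.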
Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.
2011-09-28
This study evaluated different upscaling methods for predicting thermal conductivity in loaded nuclear waste form, a heterogeneous material system, and compared their efficiency and accuracy. Thermal conductivity of the loaded waste form is an important input to the waste form Integrated Performance and Safety Code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivities of the different components, is critical in predicting the life and performance of the waste form during storage: how heat generated during storage is dissipated depends directly on thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance, and aging performance. Several methods, including the Taylor model, the Sachs model, the self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, predictions from the finite element method (FEM) were used as the reference for determining the accuracy of the different upscaling models. Micrographs at different waste loadings were used in the prediction of thermal conductivity. The results demonstrate that, in terms of efficiency, the boundary models (Taylor and Sachs) outperform the self-consistent model, the statistical upscaling method, and FEM. Balancing computational resources against accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste forms.
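The boundary models are cheap because they reduce to simple volume-weighted means. A minimal sketch, assuming a Taylor-type (uniform-field, arithmetic-mean, Voigt) upper bound and a Sachs-type (uniform-flux, harmonic-mean, Reuss) lower bound; the two-phase conductivities and volume fractions below are hypothetical, not the study's data:

```python
def taylor_bound(k, f):
    """Upper (Taylor/Voigt-type) bound: volume-weighted arithmetic mean."""
    return sum(ki * fi for ki, fi in zip(k, f))

def sachs_bound(k, f):
    """Lower (Sachs/Reuss-type) bound: volume-weighted harmonic mean."""
    return 1.0 / sum(fi / ki for ki, fi in zip(k, f))

# Hypothetical two-phase waste form: a glass matrix (1.0 W/m-K) with a
# 30% volume fraction of a more conductive ceramic phase (5.0 W/m-K)
k_upper = taylor_bound([1.0, 5.0], [0.7, 0.3])  # 2.2
k_lower = sachs_bound([1.0, 5.0], [0.7, 0.3])   # ~1.316
```

The true effective conductivity of any microstructure with these constituents must fall between the two bounds, which is why FEM or statistical upscaling is needed when a tighter estimate is required.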
[Accuracy and precision in the evaluation of computer assisted surgical systems. A definition].
Strauss, G; Hofer, M; Korb, W; Trantakis, C; Winkler, D; Burgert, O; Schulz, T; Dietz, A; Meixensberger, J; Koulechov, K
2006-02-01
Accuracy represents the outstanding criterion for navigation systems. Surgeons have noticed a great discrepancy between the values from the literature and system specifications on the one hand, and intraoperative accuracy on the other. A unitary understanding of the term accuracy does not exist in clinical practice. Furthermore, an incorrect equating of the terms precision and accuracy can be found in the literature. On top of this, clinical accuracy differs from mechanical (technical) accuracy. From a clinical point of view, we had to deal with remarkably many different terms all describing accuracy. This study has the goals of: 1. Defining "accuracy" and related terms, 2. Differentiating between "precision" and "accuracy", 3. Deriving the term "surgical accuracy", 4. Recommending use of the term "surgical accuracy" for a navigation system. To a great extent, definitions were applied from the International Organization for Standardization (ISO) and the norms of the Deutsches Institut für Normung e.V. (DIN, the German Institute for Standardization). For defining surgical accuracy, the terms reference value, expectation, accuracy and precision are of major interest. Surgical accuracy should indicate the maximum value for the deviation between test results and the reference value (true value), A(max), and additionally indicate the precision P(surg). As a basis for measurements, a standardized technical model was used. Coordinates of the model were acquired by CT. To obtain statistically and clinically relevant results for head surgery, 50 measurements at distances of 50, 75, 100 and 150 mm from the centre of the registration geometry are adequate. In the future, we recommend labeling a system's overall performance with the following specifications: maximum accuracy deviation A(max), precision P and information on the measurement method. This could be displayed on a seal of quality.
Positive Wigner Functions Render Classical Simulation of Quantum Computation Efficient
NASA Astrophysics Data System (ADS)
Mari, A.; Eisert, J.
2012-12-01
We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
A scheme for efficient quantum computation with linear optics
NASA Astrophysics Data System (ADS)
Knill, E.; Laflamme, R.; Milburn, G. J.
2001-01-01
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
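The linear-in-system-size cost comes from the modal form: in normal mode coordinates each mode contributes an independent second-order term, so the open-loop frequency response is a sum over modes rather than a full matrix solve per frequency point. A sketch under that assumption (the mode data are hypothetical, and the paper's exact closed-loop formulation is not reproduced here):

```python
def modal_frf(omega, modes):
    """Frequency response at frequency `omega` (rad/s) as a sum over modes.

    `modes` is a list of (wn, zeta, phi_in, phi_out) tuples: natural
    frequency, damping ratio, and input/output mode-shape coefficients.
    Cost is linear in the number of modes, vs. cubic for a full solve.
    """
    return sum(
        (phi_out * phi_in) / complex(wn**2 - omega**2, 2.0 * zeta * wn * omega)
        for (wn, zeta, phi_in, phi_out) in modes
    )

# Hypothetical two-mode structure: (wn, zeta, phi_in, phi_out)
modes = [(10.0, 0.02, 1.0, 1.0), (25.0, 0.01, 0.5, -0.5)]
H = modal_frf(9.9, modes)  # near the first resonance, |H| is large
```

Sweeping `omega` over a frequency grid costs O(n_modes) per point, which is the scaling advantage the abstract reports.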
I/O-Efficient Scientific Computation Using TPIE
NASA Technical Reports Server (NTRS)
Vengroff, Darren Erik; Vitter, Jeffrey Scott
1996-01-01
In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
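The paradigm TPIE supports can be illustrated by the classic external-memory sort: read memory-sized runs, sort each run, spill it to scratch storage, then stream-merge the runs in a single pass. This sketch uses Python's standard library rather than TPIE's C++ interface, so it only mirrors the idea, not the system:

```python
import heapq
import os
import tempfile

def external_sort(infile, outfile, chunk_lines=100_000):
    """I/O-efficient sort sketch: sort memory-sized runs, then stream-merge."""
    runs = []
    with open(infile) as f:
        while True:
            # Read at most `chunk_lines` lines: one memory-sized run
            chunk = [line for _, line in zip(range(chunk_lines), f)]
            if not chunk:
                break
            chunk.sort()
            tmp = tempfile.NamedTemporaryFile("w+", delete=False)
            tmp.writelines(chunk)
            tmp.seek(0)
            runs.append(tmp)
    with open(outfile, "w") as out:
        out.writelines(heapq.merge(*runs))  # single streaming merge pass
    for r in runs:
        r.close()
        os.unlink(r.name)
```

Each input line is read and written a constant number of times, which is the property that keeps the I/O overhead on the same order as CPU time.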
Madani, Zahrasadat; Moudi, Ehsan; Bijani, Ali; Mahmoudi, Elham
2016-01-01
Introduction: The aim of this study was to compare the diagnostic value of cone-beam computed tomography (CBCT) and periapical (PA) radiography in detecting internal root resorption. Methods and Materials: Eighty single-rooted human teeth with visible pulps in PA radiography were split mesiodistally along the coronal plane. Internal resorption-like lesions were created in three areas (cervical, middle and apical) in the labial wall of the canals in different diameters. PA radiography and CBCT images were taken from each tooth. Two observers examined the radiographs and CBCT images to evaluate the presence of resorption cavities. The data were statistically analyzed and the degree of agreement was calculated using Cohen's kappa (k) values. Results: The mean±SD agreement coefficient (kappa) between the two observers for the CBCT images was 0.681±0.047. The coefficients for the direct, mesial and distal PA radiography were 0.405±0.059, 0.421±0.060 and 0.432±0.056, respectively (P=0.001). The differences in the diagnostic accuracy of resorption of different sizes were statistically significant (P<0.05); however, PA radiography and CBCT had no statistically significant differences in the detection of internal resorption lesions in the cervical, middle and apical regions. Conclusion: Although CBCT had higher sensitivity, specificity, positive predictive value and negative predictive value than conventional radiography, the difference was not significant. PMID:26843878
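Cohen's kappa, used above to quantify interobserver agreement, corrects the observed agreement for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch of the calculation:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from the marginals.
    """
    n = len(ratings_a)
    cats = set(ratings_a) | set(ratings_b)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_e = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in cats
    )
    return (p_o - p_e) / (1.0 - p_e)
```

Perfect agreement gives kappa = 1, while agreement no better than chance gives kappa = 0; values around 0.68, as reported for CBCT here, are conventionally read as substantial agreement.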
Geha, Hassem; Sankar, Vidya; Teixeira, Fabricio B.; McMahan, Clyde Alex; Noujeim, Marcel
2015-01-01
Purpose: The purpose of this study was to evaluate and compare the efficacy of cone-beam computed tomography (CBCT) and digital intraoral radiography in diagnosing simulated small external root resorption cavities. Materials and Methods: Cavities were drilled in 159 roots using a small spherical bur at different root levels and on all surfaces. The teeth were imaged both with intraoral digital radiography using image plates and with CBCT. Two sets of intraoral images were acquired per tooth: orthogonal (PA), the conventional periapical radiograph, and mesioangulated (SET). Four readers were asked to rate their confidence level in detecting and locating the lesions. Receiver operating characteristic (ROC) analysis was performed to assess the accuracy of each modality in detecting the presence of lesions, the affected surface, and the affected level. Analysis of variance was used to compare the results and kappa analysis was used to evaluate interobserver agreement. Results: A significant difference in the area under the ROC curves was found among the three modalities (P=0.0002), with CBCT (0.81) having a significantly higher value than PA (0.71) or SET (0.71). PA was slightly more accurate than SET, but the difference was not statistically significant. CBCT was also superior in locating the affected surface and level. Conclusion: CBCT has already proven its superiority in detecting multiple dental conditions, and this study shows it to likewise be superior in detecting and locating incipient external root resorption. PMID:26389057
Banodkar, Akshaya Bhupesh; Gaikwad, Rajesh Prabhakar; Gunjikar, Tanay Udayrao; Lobo, Tanya Arthur
2015-01-01
Aims: The aim of the present study was to evaluate the accuracy of cone beam computed tomography (CBCT) measurements of alveolar bone defects caused by periodontal disease, by comparing them with actual surgical measurements, the gold standard. Materials and Methods: One hundred periodontal bone defects in fifteen patients suffering from periodontitis and scheduled for flap surgery were included in the study. On the day of surgery, prior to anesthesia, a CBCT of the quadrant to be operated on was taken. After reflection of the flap, clinical measurements of the periodontal defects were made using a reamer and a digital vernier caliper. The measurements taken during surgery were then compared to the CBCT measurements and subjected to statistical analysis using Pearson's correlation test. Results: Overall, there was a very high correlation of 0.988 between the surgical and CBCT measurements. By defect type, the correlation was higher for horizontal defects than for vertical defects. Conclusions: CBCT is highly accurate in the measurement of periodontal defects and proves to be a very useful tool in periodontal diagnosis and treatment assessment. PMID:26229268
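The Pearson correlation reported above measures the linear association between the paired surgical and CBCT measurements, r = Σ(x−x̄)(y−ȳ) / √(Σ(x−x̄)² Σ(y−ȳ)²). A minimal sketch of the computation:

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

A value of r near 1, like the 0.988 reported here, indicates the two measurement methods move together almost perfectly linearly; note that high correlation alone does not rule out a constant offset between them.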
Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny
2016-01-01
A systematic review of the literature was made to assess the accuracy of cone beam computed tomography (CBCT) as a tool for measuring alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possibly missed articles. Only articles that met the selection criteria were included and critiqued. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement errors ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Within the limitations of the number and strength of the available studies, we conclude that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, with no agreement between the studies regarding the direction of the deviation (over- or underestimation). However, we should emphasize that the evidence for these data is not strong. PMID:27563194
ERIC Educational Resources Information Center
White, Aubrey Randall; Carney, Edward; Reichle, Joe
2010-01-01
Purpose: The current investigation compared directed scanning and group-item scanning among typically developing 4-year-old children. Of specific interest were their accuracy, selection speed, and efficiency of cursor movement in selecting colored line drawn symbols representing object vocabulary. Method: Twelve 4-year-olds made selections in both…
Efficient Turing-Universal Computation with DNA Polymers
NASA Astrophysics Data System (ADS)
Qian, Lulu; Soloveichik, David; Winfree, Erik
Bennett's proposed chemical Turing machine is one of the most important thought experiments in the study of the thermodynamics of computation. Yet the sophistication of molecular engineering required to physically construct Bennett's hypothetical polymer substrate and enzymes has deterred experimental implementations. Here we propose a chemical implementation of stack machines - a Turing-universal model of computation similar to Turing machines - using DNA strand displacement cascades as the underlying chemical primitive. More specifically, the mechanism described herein is the addition and removal of monomers from the end of a DNA polymer, controlled by strand displacement logic. We capture the motivating feature of Bennett's scheme: that physical reversibility corresponds to logically reversible computation, and arbitrarily little energy per computation step is required. Further, as a method of embedding logic control into chemical and biological systems, polymer-based chemical computation is significantly more efficient than geometry-free chemical reaction networks.
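Abstracting away the chemistry, a stack machine is a program operating on push/pop primitives; the monomer addition and removal the authors implement with strand displacement correspond to PUSH and POP at the polymer's end. A schematic interpreter (the instruction set below is an illustrative assumption, not the paper's chemical encoding):

```python
def run(program, stack=None):
    """Minimal stack-machine interpreter. Instruction set (assumed here):
    ("PUSH", s) and ("POP",) mirror adding/removing a monomer at the
    polymer's end; ("JZ", addr) jumps when the stack is empty;
    ("JMP", addr) jumps unconditionally; ("HALT",) stops."""
    stack = list(stack or [])
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "PUSH":
            stack.append(op[1])
        elif op[0] == "POP":
            stack.pop()
        elif op[0] == "JZ" and not stack:
            pc = op[1]
            continue
        elif op[0] == "JMP":
            pc = op[1]
            continue
        elif op[0] == "HALT":
            break
        pc += 1
    return stack
```

Note that each PUSH/POP pair is logically reversible, which is the property Bennett's scheme exploits to drive computation with arbitrarily little energy per step.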
Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation
NASA Astrophysics Data System (ADS)
Broadbent, Anne
2016-08-01
In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement suffices (in the size of the computation), as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.
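A Popescu-Rohrlich box takes one input bit from each party (x from Alice, y from Bob) and returns output bits (a, b) that are individually uniformly random but satisfy a XOR b = x AND y. The sketch below samples that joint distribution centrally; a real PR box is nonsignaling, a property no such central simulation captures, so this is only a sampler for the correlations:

```python
import random

def pr_box(x, y, rng=random):
    """One use of a Popescu-Rohrlich box on input bits x, y in {0, 1}.

    Returns (a, b) with a XOR b == x AND y and each output marginally
    a fair coin. Simulated centrally: b is derived from a and both
    inputs, which only works because we hold both sides at once.
    """
    a = rng.randrange(2)  # Alice's output: a fair coin
    b = a ^ (x & y)       # Bob's output enforces the PR condition
    return a, b
```

These correlations win the CHSH game with certainty, exceeding the quantum bound, which is what makes them strong enough to collapse communication requirements as the abstract describes.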
NASA Technical Reports Server (NTRS)
Walston, W. H., Jr.
1986-01-01
The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.
A Computationally Efficient Multicomponent Equilibrium Solver for Aerosols (MESA)
Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.
2005-12-23
deliquescence points as well as mass growth factors for the sulfate-rich systems. The MESA-MTEM configuration required only 5 to 10 single-level iterations to obtain the equilibrium solution for ~44% of the 328 multiphase problems solved in the 16 test cases at RH values ranging between 20% and 90%, while ~85% of the problems solved required less than 20 iterations. Based on the accuracy and computational efficiency considerations, the MESA-MTEM configuration is attractive for use in 3-D aerosol/air quality models.
An overview of energy efficiency techniques in cluster computing systems
Valentini, Giorgio Luigi; Lassonde, Walter; Khan, Samee Ullah; Min-Allah, Nasro; Madani, Sajjad A.; Li, Juan; Zhang, Limin; Wang, Lizhe; Ghani, Nasir; Kolodziej, Joanna; Li, Hongxiang; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal
2011-09-10
Two major constraints demand more consideration for energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems will reduce energy consumption and excess heat, lower operational costs, and improve system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, we focus in this survey on the characteristics of two main power management technologies: (a) static power management (SPM) systems that utilize low-power components to save energy, and (b) dynamic power management (DPM) systems that utilize software and power-scalable components to optimize energy consumption. We present the current state of the art in both SPM and DPM techniques, citing representative examples. The survey concludes with a brief discussion and some speculations about possible future directions that could be explored to improve energy efficiency in cluster computing.
Computationally Efficient Clustering of Audio-Visual Meeting Data
NASA Astrophysics Data System (ADS)
Hung, Hayley; Friedland, Gerald; Yeo, Chuohao
This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
Fast and Computationally Efficient Boundary Detection Technique for Medical Images
NASA Astrophysics Data System (ADS)
Das, Arpita; Goswami, Partha; Sen, Susanta
2011-03-01
Detection of edges is a fundamental procedure in image processing. Many edge detection algorithms have been developed based on computation of the intensity gradient. In medical images, boundaries of objects are vague because of gradual changes in intensity, so a need exists for a computationally efficient and accurate edge detection approach. We present such an algorithm using a modified global threshold technique. In our work, the boundaries are highlighted from the background by selecting a threshold (T) that separates object and background. Where an object-to-background transition (or vice versa) occurs in the image, the pixel intensity either rises to a value greater than or equal to T (background-to-object transition) or falls below T (object-to-background transition). We mark these transition regions as object boundary and enhance the corresponding intensity. The value of T may be specified heuristically or by a specific algorithm. The conventional global threshold algorithm computes the value of T automatically, but that approach is not computationally efficient and requires a large amount of memory. In this study, we propose a parameter for which the computation of T is very easy and fast. We also show that a fixed-size memory [256 × 4 bytes] is enough for this algorithm.
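The transition-marking rule described above can be sketched directly: a pixel is a boundary point when it and its neighbour fall on opposite sides of T. The horizontal scan and the image data below are illustrative; the paper's fast parameter for choosing T is not reproduced here.

```python
def mark_boundaries(image, T):
    """Mark object boundaries where a scan line crosses the threshold T.

    A pixel is flagged when it and its left neighbour lie on opposite
    sides of T, i.e. at an object-to-background or background-to-object
    transition. `image` is a 2-D list of intensities."""
    rows, cols = len(image), len(image[0])
    boundary = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(1, cols):
            left_obj = image[r][c - 1] >= T
            here_obj = image[r][c] >= T
            if left_obj != here_obj:
                boundary[r][c] = 1
    return boundary
```

A single pass touches each pixel once, and the only auxiliary storage beyond the output mask is the threshold itself, consistent with the small fixed memory footprint the abstract claims.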
Kotlarchyk, M; Chen, S H; Asano, S
1979-07-15
Quasi-elastic light scattering has become an established technique for rapid and quantitative characterization of the average motility pattern of motile bacteria in suspension. Essentially all interpretations of the measured light scattering intensities and spectra so far are based on the Rayleigh-Gans-Debye (RGD) approximation. Since the sizes of the bacteria of interest are generally larger than the wavelength of light used in the measurement, the justification for the use of the RGD approximation is uncertain. In this paper we formulate a method by which both the scattering intensity and the quasi-elastic light scattering spectra can be calculated from a rigorous scattering theory. For a specific application we study the bacterium Escherichia coli (about 1 μm in size) by using numerical solutions of the scattering field amplitudes from a prolate spheroid, which is known to simulate the optical properties of the bacterium well. We have computed (1) polarized scattered light intensity vs. scattering angle for a randomly oriented bacterial population; (2) polarized scattered field correlation functions both for a freely diffusing bacterium and for a bacterium undergoing straight-line motion in random directions with a Maxwellian speed distribution; and (3) the corresponding depolarized scattered intensity and field correlation functions. In each case the sensitivity of the result to variations in the index of refraction and size of the bacterium is investigated. The conclusion is that within a reasonable range of parameters applicable to E. coli, the accuracy of the RGD approximation is good to within 10% at all angles for properties (1) and (2), and the depolarized contributions in (3) are generally very small. PMID:20212685
Meertens, R; Brealey, S; Nightingale, J; McCoubrie, P
2013-04-01
Computed tomography colonography (CTC) is the primary radiological test for the detection of colorectal tumours and precancerous polyps. Radiographer reporting of CTC examinations could help to improve the provision of this expanding service. We undertook a systematic review to assess the accuracy with which radiographers can provide formal written reports on intraluminal disease entities in CTC examinations compared to a reference standard. Data sources searched included online databases, peer-reviewed journals, grey literature, and reference and citation tracking. Eligible studies were assessed for bias, and data were extracted on study characteristics. Pooled estimates of sensitivities and specificities and chi-square tests of heterogeneity were calculated. Eight studies were eligible for inclusion, with some risk of bias. Pooled estimates from three studies showed that the per-patient sensitivity and specificity of reporting radiographers were 76% (95% CI: 70-80%) and 74% (95% CI: 67-80%), respectively. From seven studies, per-lesion sensitivity for the detection of lesions >5 and >10 mm was 68% (95% CI: 65-71%) and 75% (95% CI: 72-79%), respectively. Pooled sensitivity for the detection of lesions >5 mm was 57% (95% CI: 52-61%) in studies in which radiographers had read 50 or fewer training cases, versus 78% (95% CI: 74-81%) with more than 50 cases. The current evidence does not support radiographers in a role involving the single formal written reporting of CTC examinations. Radiographers' performance, however, did appear to improve significantly with the number of cases read. Therefore, with adequate training and experience, there may be a potential role for radiographers in the reporting of CTC examinations. PMID:23312673
Ying, Michael; Cheng, Sammy C H; Ahuja, Anil T
2016-08-01
Ultrasound is useful in assessing cervical lymphadenopathy. Advancement of computer science technology allows accurate and reliable assessment of medical images. The aim of the study described here was to evaluate the diagnostic accuracy of computer-aided assessment of the intranodal vascularity index (VI) in differentiating the various common causes of cervical lymphadenopathy. Power Doppler sonograms of 347 patients (155 with metastasis, 23 with lymphoma, 44 with tuberculous lymphadenitis, 125 reactive) with palpable cervical lymph nodes were reviewed. Ultrasound images of cervical nodes were evaluated, and the intranodal VI was quantified using a customized computer program. The diagnostic accuracy of using the intranodal VI to distinguish different disease groups was evaluated and compared. Metastatic and lymphomatous lymph nodes tend to be more vascular than tuberculous and reactive lymph nodes. The intranodal VI had the highest diagnostic accuracy in distinguishing metastatic and tuberculous nodes with a sensitivity of 80%, specificity of 73%, positive predictive value of 91%, negative predictive value of 51% and overall accuracy of 68% when a cutoff VI of 22% was used. Computer-aided assessment provides an objective and quantitative way to evaluate intranodal vascularity. The intranodal VI is a useful parameter in distinguishing certain causes of cervical lymphadenopathy and is particularly useful in differentiating metastatic and tuberculous lymph nodes. However, it has limited value in distinguishing lymphomatous nodes from metastatic and reactive nodes.
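A common way to quantify intranodal vascularity from a power Doppler sonogram is the fraction of pixels inside the node that carry a flow signal; the exact definition used by the study's customized program is not given, so the sketch below is an assumption for illustration only:

```python
def vascularity_index(node_mask, doppler_mask):
    """Illustrative intranodal vascularity index (VI), in percent.

    Defined here (an assumption, not the study's documented formula) as
    the percentage of pixels inside the node that show a power Doppler
    flow signal. Both arguments are 2-D 0/1 masks of the same shape.
    """
    node = sum(sum(row) for row in node_mask)
    flow = sum(
        sum(n and d for n, d in zip(nrow, drow))
        for nrow, drow in zip(node_mask, doppler_mask)
    )
    return 100.0 * flow / node
```

With such a definition, classifying a node as metastatic vs. tuberculous reduces to comparing the computed VI against a cutoff (22% in the study).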
Banks, H Thomas; Hu, Shuhua; Joyner, Michele; Broido, Anna; Canter, Brandi; Gayvert, Kaitlyn; Link, Kathryn
2012-07-01
In this paper, we investigate three particular algorithms: the stochastic simulation algorithm (SSA) and explicit and implicit tau-leaping algorithms. To compare these methods, we used them to analyze two infection models: a vancomycin-resistant enterococcus (VRE) infection model at the population level, and a human immunodeficiency virus (HIV) within-host infection model. While the first has a low species count and few transitions, the second is more complex, with a comparable number of species involved. The relative efficiency of each algorithm is determined based on computational time and the degree of precision required. The numerical results suggest that all three algorithms have similar computational efficiency for the simpler VRE model, where the SSA is the best choice due to its simplicity and accuracy. In addition, we found that with the larger and more complex HIV model, implementations and modifications of the tau-leaping methods are preferred.
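The SSA recommended for the simpler model is Gillespie's direct method: draw an exponential waiting time from the total propensity, then pick a reaction with probability proportional to its propensity. A minimal sketch for a birth-death process (the model and rates are illustrative, not the VRE or HIV model from the paper):

```python
import math
import random

def ssa_birth_death(birth, death, x0, t_end, rng=random):
    """Gillespie SSA for a birth-death process.

    Births occur at constant rate `birth`; deaths at rate `death * x`.
    Returns the population at time `t_end`."""
    t, x = 0.0, x0
    while t < t_end:
        a1, a2 = birth, death * x   # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        # Exponential waiting time; 1 - U keeps the log argument in (0, 1]
        t += -math.log(1.0 - rng.random()) / a0
        if t >= t_end:
            break
        x += 1 if rng.random() * a0 < a1 else -1
    return x
```

Because every reaction event is simulated individually, the cost grows with the event count, which is why tau-leaping (taking many events per step) wins on the larger, stiffer model.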
An efficient method for computing the QTAIM topology of a scalar field: the electron density case.
Rodríguez, Juan I
2013-03-30
An efficient method for computing the quantum theory of atoms in molecules (QTAIM) topology of the electron density (or another scalar field) is presented. A modified Newton-Raphson algorithm was implemented for finding the critical points (CPs) of the electron density. Bond paths were constructed with the second-order Runge-Kutta method. Vectorization of the present algorithm makes it scale linearly with the system size. The parallel efficiency decreases with the number of processors (from 70% to 50%) with an average of 54%. The accuracy and performance of the method are demonstrated by computing the QTAIM topology of the electron density of a series of representative molecules. Our results show that our algorithm might allow QTAIM analysis to be applied to large systems (carbon nanotubes, polymers, fullerenes) considered unreachable until now.
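The core of a CP search is a Newton-Raphson iteration on the gradient of the scalar field. A hedged sketch using a single Gaussian as a stand-in "density"; the paper's algorithm is a modified Newton-Raphson on the molecular electron density, which this toy example does not reproduce:

```python
import numpy as np

def find_critical_point(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson search for a point where the gradient of a scalar
    field vanishes (a critical point in the QTAIM sense)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)   # Newton step toward grad = 0
    return x

# Toy "density": one Gaussian centred at (1, -0.5); its maximum is the CP.
c = np.array([1.0, -0.5])
rho = lambda x: np.exp(-np.sum((x - c) ** 2))
grad = lambda x: -2.0 * (x - c) * rho(x)
def hess(x):
    d = x - c
    return (4.0 * np.outer(d, d) - 2.0 * np.eye(2)) * rho(x)

print(find_critical_point(grad, hess, x0=[0.8, -0.3]))   # ~ [1.0, -0.5]
```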
Evaluating Behavioral Self-Monitoring with Accuracy Training for Changing Computer Work Postures
ERIC Educational Resources Information Center
Gravina, Nicole E.; Loewy, Shannon; Rice, Anna; Austin, John
2013-01-01
The primary purpose of this study was to replicate and extend a study by Gravina, Austin, Schroedter, and Loewy (2008). A similar self-monitoring procedure, with the addition of self-monitoring accuracy training, was implemented to increase the percentage of observations in which participants worked in neutral postures. The accuracy training…
A Comparison of the Efficiency and Accuracy of BILOG and LOGIST.
ERIC Educational Resources Information Center
Yen, Wendy M.
1987-01-01
Comparisons are made between BILOG version 2.2 and LOGIST 5.0 version 2.5 in estimating the item parameters, traits, item characteristic functions, and test characteristic functions for the three-parameter logistic model. Speed and accuracy are reported for a number of 10-, 20-, and 40-item tests. (Author/GDC)
NASA Astrophysics Data System (ADS)
Summers, Jason E.; Takahashi, Kengo; Shimizu, Yasushi; Yamakawa, Takashi
2001-05-01
When based on geometrical acoustics, computational models used for auralization of auditorium sound fields are physically inaccurate at low frequencies. To increase accuracy while keeping computation tractable, hybrid methods using computational wave acoustics at low frequencies have been proposed and implemented in small enclosures such as simplified models of car cabins [Granier et al., J. Audio Eng. Soc. 44, 835-849 (1996)]. The present work extends such an approach to an actual 2400-m3 auditorium using the boundary-element method for frequencies below 100 Hz. The effect of including wave-acoustics at low frequencies is assessed by comparing the predictions of the hybrid model with those of the geometrical-acoustics model and comparing both with measurements. Conventional room-acoustical metrics are used together with new methods based on two-dimensional distance measures applied to time-frequency representations of impulse responses. Despite in situ measurements of boundary impedance, uncertainties in input parameters limit the accuracy of the computed results at low frequencies. However, aural perception ultimately defines the required accuracy of computational models. An algorithmic method for making such evaluations is proposed based on correlating listening-test results with distance measures between time-frequency representations derived from auditory models of the ear-brain system. Preliminary results are presented.
Weyand, Sabine; Chau, Tom
2015-01-01
Brain–computer interfaces (BCIs) provide individuals with a means of interacting with a computer using only neural activity. To date, the majority of near-infrared spectroscopy (NIRS) BCIs have used prescribed tasks to achieve binary control. The goals of this study were to evaluate the possibility of using a personalized approach to establish control of a two-, three-, four-, and five-class NIRS–BCI, and to explore how various user characteristics correlate to accuracy. Ten able-bodied participants were recruited for five data collection sessions. Participants performed six mental tasks and a personalized approach was used to select each individual’s best discriminating subset of tasks. The average offline cross-validation accuracies achieved were 78, 61, 47, and 37% for the two-, three-, four-, and five-class problems, respectively. Most notably, all participants exceeded an accuracy of 70% for the two-class problem, and two participants exceeded an accuracy of 70% for the three-class problem. Additionally, accuracy was found to be strongly positively correlated (Pearson’s) with perceived ease of session (ρ = 0.653), ease of concentration (ρ = 0.634), and enjoyment (ρ = 0.550), but strongly negatively correlated with verbal IQ (ρ = −0.749). PMID:26483657
DEM generation from digital photographs using computer vision: Accuracy and application
NASA Astrophysics Data System (ADS)
James, M. R.; Robson, S.
2012-12-01
Data for detailed digital elevation models (DEMs) are usually collected by expensive laser-based techniques, or by photogrammetric methods that require expertise and specialist software. However, recent advances in computer vision research now permit 3D models to be automatically derived from unordered collections of photographs, and offer the potential for significantly cheaper and quicker DEM production. Here, we review the advantages and limitations of this approach and, using imagery of the summit craters of Piton de la Fournaise, compare the precisions obtained with those from formal close range photogrammetry. The surface reconstruction process is based on a combination of structure-from-motion and multi-view stereo algorithms (SfM-MVS). Using multiple photographs of a scene taken from different positions with a consumer-grade camera, dense point clouds (millions of points) can be derived. Processing is carried out by automated 'reconstruction pipeline' software downloadable from the internet. Unlike traditional photogrammetric approaches, the initial reconstruction process does not require the identification of any control points or initial camera calibration and is carried out with little or no operator intervention. However, such reconstructions are initially un-scaled and un-oriented so additional software has been developed to permit georeferencing. Although this step requires the presence of some control points or features within the scene, it does not have the relatively strict image acquisition and control requirements of traditional photogrammetry. For accuracy, and to allow error analysis, georeferencing observations are made within the image set, rather than requiring feature matching within the point cloud. Application of SfM-MVS is demonstrated using images taken from a microlight aircraft over the summit of Piton de la Fournaise volcano (courtesy of B. van Wyk de Vries). 133 images, collected with a Canon EOS D60 and 20 mm fixed focus lens, were
Efficient quantum circuits for one-way quantum computing.
Tanamoto, Tetsufumi; Liu, Yu-Xi; Hu, Xuedong; Nori, Franco
2009-03-13
While Ising-type interactions are ideal for implementing controlled phase flip gates in one-way quantum computing, natural interactions between solid-state qubits are most often described by either the XY or the Heisenberg models. We show an efficient way of generating cluster states directly using either the imaginary SWAP (iSWAP) gate for the XY model, or the sqrt[SWAP] gate for the Heisenberg model. Our approach thus makes one-way quantum computing more feasible for solid-state devices.
The efficient computation of Fourier transforms on the symmetric group
NASA Astrophysics Data System (ADS)
Maslen, D. K.
1998-07-01
This paper introduces new techniques for the efficient computation of Fourier transforms on symmetric groups and their homogeneous spaces. We replace the matrix multiplications in Clausen's algorithm with sums indexed by combinatorial objects that generalize Young tableaux, and write the result in a form similar to Horner's rule. The algorithm we obtain computes the Fourier transform of a function on the symmetric group S_n in no more than (3/4)n(n - 1)|S_n| multiplications and the same number of additions. Analysis of our algorithm leads to several combinatorial problems that generalize path counting. We prove corresponding results for inverse transforms and transforms on homogeneous spaces.
Darvishi, Sam; Ridding, Michael C; Abbott, Derek; Baumert, Mathias
2013-01-01
Recently, the application of restorative brain-computer interfaces (BCIs) has received significant interest in many BCI labs. However, there are a number of challenges that need to be tackled to achieve efficient performance of such systems. For instance, any restorative BCI needs an optimum trade-off between time window length, classification accuracy and classifier update rate. In this study, we have investigated possible solutions to these problems by using a dataset provided by the University of Graz, Austria. We have used a continuous wavelet transform and the Student t-test for feature extraction and a support vector machine (SVM) for classification. We find that improved results, for restorative BCIs for rehabilitation, may be achieved by using a 750 ms time window with an average classification accuracy of 67% that updates every 32 ms.
A Compute-Efficient Bitmap Compression Index for Database Applications
Wu, Kesheng; Shoshani, Arie
2006-01-01
FastBit: A Compute-Efficient Bitmap Compression Index for Database Applications The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is highly efficient for performing search and retrieval operations on large datasets. The WAH technique is optimized for computational efficiency. The WAH-based bitmap indexing software, called FastBit, is particularly appropriate to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially large operational speedup. Experimental results show performance improvements by an average factor of 10 over bitmap technology used by industry, as well as increased efficiencies in constructing compressed bitmaps. FastBit can be used as a stand-alone index, or integrated into a database system. When integrated into a database system, this technique may be particularly useful for real-time business analysis applications. Additional FastBit applications may include efficient real-time exploration of scientific models, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization. FastBit was proven theoretically to be time-optimal because it provides a search time proportional to the number of elements selected by the index.
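The word-aligned idea can be sketched compactly: bits are processed in word-sized groups, and uniform groups collapse into run-length "fill" words while mixed groups stay literal. A simplified illustration only; FastBit's actual WAH encoding packs these into 32-bit machine words with header bits:

```python
WORD = 31  # payload bits per (simulated) 32-bit machine word

def wah_compress(bits):
    """Compress a 0/1 list into words: ('lit', group) holds up to 31 literal
    bits; ('fill', b, n) holds n consecutive all-b groups of 31 bits."""
    words = []
    for i in range(0, len(bits), WORD):
        group = bits[i:i + WORD]
        if len(group) == WORD and len(set(group)) == 1:   # all-0 or all-1
            b = group[0]
            if words and words[-1][0] == 'fill' and words[-1][1] == b:
                words[-1] = ('fill', b, words[-1][2] + 1)  # extend the run
            else:
                words.append(('fill', b, 1))
        else:
            words.append(('lit', group))
    return words

def wah_decompress(words):
    bits = []
    for w in words:
        if w[0] == 'fill':
            bits.extend([w[1]] * (WORD * w[2]))
        else:
            bits.extend(w[1])
    return bits

bitmap = [0] * 200 + [1] * 62 + [0, 1, 0, 1]
assert wah_decompress(wah_compress(bitmap)) == bitmap   # exact round trip
print(len(wah_compress(bitmap)), "words for", len(bitmap), "bits")
```

Long runs of identical bits, common in bitmap indexes over sorted or low-cardinality columns, are exactly what the fill words exploit.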
NASA Astrophysics Data System (ADS)
Camacho, Miguel; Boix, Rafael R.; Medina, Francisco
2016-06-01
The authors present a computationally efficient technique for the analysis of extraordinary transmission through both infinite and truncated periodic arrays of slots in perfect conductor screens of negligible thickness. An integral equation is obtained for the tangential electric field in the slots both in the infinite case and in the truncated case. The unknown functions are expressed as linear combinations of known basis functions, and the unknown weight coefficients are determined by means of Galerkin's method. The coefficients of Galerkin's matrix are obtained in the spatial domain in terms of double finite integrals containing the Green's functions (which, in the infinite case, is efficiently computed by means of Ewald's method) times cross-correlations between both the basis functions and their divergences. The computation in the spatial domain is an efficient alternative to the direct computation in the spectral domain since this latter approach involves the determination of either slowly convergent double infinite summations (infinite case) or slowly convergent double infinite integrals (truncated case). The results obtained are validated by means of commercial software, and it is found that the integral equation technique presented in this paper is at least two orders of magnitude faster than commercial software for a similar accuracy. It is also shown that the phenomena related to periodicity such as extraordinary transmission and Wood's anomaly start to appear in the truncated case for arrays with more than 100 (10 ×10 ) slots.
Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.
2013-01-01
Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. Accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic and computational-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
Parallel-META: efficient metagenomic data analysis based on high-performance computation
2012-01-01
Background Metagenomics directly sequences and analyses genome information from microbial communities. There are usually hundreds of genomes from different microbial species in the same community, and the main computational tasks for metagenomic data analyses include taxonomical and functional component examination of all genomes in the microbial community. Metagenomic data analysis is both data- and computation-intensive, requiring extensive computational power. Most current metagenomic data analysis software was designed to be used on a single computer or a single computer cluster, which cannot keep pace with the computational requirements of the fast-increasing number of large metagenomic projects. Therefore, advanced computational methods and pipelines have to be developed to cope with the need for efficient analyses. Result In this paper, we propose Parallel-META, a GPU- and multi-core-CPU-based open-source pipeline for metagenomic data analysis, which enables the efficient and parallel analysis of multiple metagenomic datasets and the visualization of the results for multiple samples. In Parallel-META, the similarity-based database search is parallelized based on GPU computing and multi-core CPU computing optimization. Experiments have shown that Parallel-META achieves a speed-up of at least 15 times over the traditional metagenomic data analysis method, with the same accuracy of results (http://www.computationalbioenergy.org/parallel-meta.html). Conclusion The parallel processing of current metagenomic data is very promising: with a speed-up of 15 times and above, binning is no longer a very time-consuming process. Therefore, deeper analysis of metagenomic data, such as the comparison of different samples, becomes feasible in the pipeline, and some of these functionalities have been included in the Parallel-META pipeline. PMID:23046922
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
Finding a balance between accuracy and computational effort for modeling biomineralization
NASA Astrophysics Data System (ADS)
Hommel, Johannes; Ebigbo, Anozie; Gerlach, Robin; Cunningham, Alfred B.; Helmig, Rainer; Class, Holger
2016-04-01
One of the key issues of underground gas storage is the long-term security of the storage site. Amongst the different storage mechanisms, cap-rock integrity is crucial for preventing leakage of the stored gas due to buoyancy into shallower aquifers or, ultimately, the atmosphere. This leakage would reduce the efficiency of underground gas storage and pose a threat to the environment. Ureolysis-driven, microbially induced calcite precipitation (MICP) is one of the technologies in the focus of current research aiming at mitigation of potential leakage by sealing high-permeability zones in cap rocks. Previously, a numerical model, capable of simulating two-phase multi-component reactive transport, including the most important processes necessary to describe MICP, was developed and validated against experiments in Ebigbo et al. [2012]. The microbial ureolysis kinetics implemented in the model was improved based on new experimental findings, and the model was recalibrated using improved experimental data in Hommel et al. [2015]. This increased the ability of the model to predict laboratory experiments while simplifying some of the reaction rates. However, the complexity of the model is still high, which leads to high computation times even for relatively small domains. The high computation time prohibits the use of the model for the design of field-scale applications of MICP. Various approaches to reduce the computational time are possible, e.g. using optimized numerical schemes or simplified engineering models. Optimized numerical schemes have the advantage of conserving the detailed equations, as they save computation time by an improved solution strategy. Simplified models are more of an engineering approach, since they neglect processes of minor impact and focus on the processes which have the most influence on the model results. This also allows for investigating the influence of a certain process on the overall MICP, which increases the insights into the interactions
Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1997-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
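The speedup comes from exploiting the structure of the modal system matrix. A sketch contrasting a traditional full-matrix solve of H(jw) = C (jwI - A)^(-1) B per frequency with a diagonal-exploiting version; the toy plant below is an assumed diagonal modal form, not the paper's 703-mode model or its actual formulation:

```python
import numpy as np

def freq_response_dense(A, B, C, omegas):
    """Traditional full-matrix computation: one dense solve per frequency."""
    n = A.shape[0]
    return [C @ np.linalg.solve(1j * w * np.eye(n) - A, B) for w in omegas]

def freq_response_modal(a_diag, B, C, omegas):
    """Exploit a diagonal (modal) system matrix: the resolvent is a simple
    elementwise divide, so cost per frequency point is linear in system size."""
    return [C @ (B / (1j * w - a_diag)[:, None]) for w in omegas]

# Toy decoupled-mode plant (assumed diagonal form with damped complex poles).
rng = np.random.default_rng(0)
n = 6
a_diag = -0.05 + 1j * rng.uniform(1.0, 10.0, n)   # illustrative modal poles
A = np.diag(a_diag)
B = rng.standard_normal((n, 2))                    # 2 inputs
C = rng.standard_normal((1, n))                    # 1 output
omegas = np.linspace(0.1, 12.0, 5)

dense = freq_response_dense(A, B, C, omegas)
modal = freq_response_modal(a_diag, B, C, omegas)
assert all(np.allclose(d, m) for d, m in zip(dense, modal))
```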
Efficient Computational Techniques for Electromagnetic Propagation and Scattering.
NASA Astrophysics Data System (ADS)
Wagner, Robert Louis
Electromagnetic propagation and scattering problems are important in many application areas such as communications, high-speed circuitry, medical imaging, geophysical remote sensing, nondestructive testing, and radar. This thesis develops several new techniques for the efficient computer solution of such problems. Most of this thesis deals with the efficient solution of electromagnetic scattering problems formulated as surface integral equations. A standard method of moments (MOM) formulation is used to reduce the problem to the solution of a dense N × N matrix equation, where N is the number of surface current unknowns. An iterative solution technique is used, requiring the computation of many matrix-vector multiplications. Techniques developed for this problem include the ray-propagation fast multipole algorithm (RPFMA), which is a simple, non-nested, physically intuitive technique based on the fast multipole method (FMM). The RPFMA is implemented for two-dimensional surface integral equations, and reduces the cost of a matrix-vector multiplication from O(N^2) to O(N^(4/3)). The use of wavelets is also studied for the solution of two-dimensional surface integral equations. It is shown that the use of wavelets as basis functions produces a MOM matrix with substantial sparsity. However, unlike the RPFMA, the use of a wavelet basis does not reduce the computational complexity of the problem. In other words, the sparse MOM matrix in the wavelet basis still has O(N^2) significant entries. The fast multipole method-fast Fourier transform (FMM-FFT) method is developed to compute the scattering of an electromagnetic wave from a two-dimensional rough surface. The resulting algorithm computes a matrix-vector multiply in O(N log N) operations. This algorithm is shown to be more efficient than another O(N log N) algorithm, the multi-level fast multipole algorithm (MLFMA), for surfaces of small height. For surfaces with larger roughness, the MLFMA is found to be more
Efficient computation of partial expected value of sample information using Bayesian approximation.
Brennan, Alan; Kharroubi, Samer A
2007-01-01
We describe a novel process for transforming the efficiency of partial expected value of sample information (EVSI) computation in decision models. Traditional EVSI computation begins with Monte Carlo sampling to produce new simulated data-sets with a specified sample size. Each data-set is synthesised with prior information to give posterior distributions for model parameters, either via analytic formulae or a further Markov chain Monte Carlo (MCMC) simulation. A further 'inner level' Monte Carlo sampling then quantifies the effect of the simulated data on the decision. This paper describes a novel form of Bayesian Laplace approximation, which can replace both the Bayesian updating and the inner Monte Carlo sampling to compute the posterior expectation of a function. We compare the accuracy of EVSI estimates in two case study cost-effectiveness models using first- and second-order versions of our approximation formula, the approximation of Tierney and Kadane, and traditional Monte Carlo. Computational efficiency gains depend on the complexity of the net benefit functions, the number of inner-level Monte Carlo samples used, and the requirement or otherwise for MCMC methods to produce the posterior distributions. This methodology provides a new and valuable approach for EVSI computation in health economic decision models and potential wider benefits in many fields requiring Bayesian approximation. PMID:16945438
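The structure of an EVSI computation can be sketched in the conjugate-normal special case, where the posterior expectation is available analytically (one of the 'analytic formulae' situations above) and only the outer Monte Carlo loop remains. The decision problem and parameter values below are invented for illustration, not taken from the paper's case studies:

```python
import random

def evsi_normal(mu0, sd0, sd_obs, n, n_outer=20000, seed=42):
    """EVSI for choosing between 'treat' (incremental net benefit = theta)
    and 'no treat' (net benefit = 0), where theta ~ N(mu0, sd0^2) and a
    proposed study yields n observations with sd sd_obs. Conjugate normal
    updating replaces the usual inner Monte Carlo loop."""
    rng = random.Random(seed)
    prec0, prec_data = 1.0 / sd0 ** 2, n / sd_obs ** 2
    baseline = max(mu0, 0.0)           # value of best decision now
    gain = 0.0
    for _ in range(n_outer):
        theta = rng.gauss(mu0, sd0)                    # draw a "true" effect
        xbar = rng.gauss(theta, sd_obs / n ** 0.5)     # simulate study data
        post_mean = (prec0 * mu0 + prec_data * xbar) / (prec0 + prec_data)
        gain += max(post_mean, 0.0)                    # best decision given data
    return gain / n_outer - baseline

print(round(evsi_normal(mu0=0.0, sd0=1.0, sd_obs=2.0, n=10), 3))
```

When no analytic posterior exists, the `post_mean` line is what the inner Monte Carlo (or the paper's Laplace approximation) must supply, which is where the efficiency battle is fought.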
Efficient O(N) recursive computation of the operational space inertial matrix
Lilly, K.W.; Orin, D.E.
1993-09-01
The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N^3) for an N-degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.
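For contrast with the paper's O(N) recursion, the traditional O(N^3) route computes Λ directly from the joint-space inertia matrix M and the end-effector Jacobian J as Λ = (J M^(-1) J^T)^(-1). A sketch with an invented 6-DOF example, not the PUMA 560 model:

```python
import numpy as np

def osim_traditional(M, J):
    """Traditional O(N^3) computation of the operational space inertia
    matrix: Lambda = (J M^{-1} J^T)^{-1}, with M the joint-space inertia
    matrix and J the end-effector Jacobian."""
    return np.linalg.inv(J @ np.linalg.solve(M, J.T))

# Invented 6-DOF arm: random symmetric positive-definite joint-space inertia.
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
M = A @ A.T + 6 * np.eye(6)          # symmetric positive definite
J = rng.standard_normal((3, 6))      # 3-D task space

Lam = osim_traditional(M, J)
# Sanity checks: Lambda must be symmetric positive definite, like any inertia.
assert np.allclose(Lam, Lam.T)
assert np.all(np.linalg.eigvalsh(Lam) > 0)
```

The inertia propagation method reaches the same Λ without ever forming or inverting the full N × N matrix, which is what makes it linear in N.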
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
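The splitting/roulette mechanics behind weight windows, including a cap of the kind the 'long history method' effectively imposes, can be sketched as follows. The window bounds and the capping rule here are illustrative, not MCNP's actual algorithm:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng, max_split=50):
    """Toy weight-window check: split particles above the window, play
    Russian roulette below it. Returns the list of surviving weights.
    The max_split cap mimics a de-optimised window that bounds how much
    splitting (and hence how long a history) one particle can trigger."""
    if weight > w_high:
        n = min(int(weight / w_high) + 1, max_split)   # capped splitting
        return [weight / n] * n                        # weight is conserved
    if weight < w_low:
        w_survive = (w_low + w_high) / 2.0
        if rng.random() < weight / w_survive:          # survive roulette
            return [w_survive]
        return []                                      # particle killed
    return [weight]                                    # inside the window

rng = random.Random(7)
print(apply_weight_window(10.0, 0.5, 2.0, rng))   # split into equal parts
print(apply_weight_window(1.0, 0.5, 2.0, rng))    # inside window: unchanged
```

A particle arriving far above the window is what generates the extreme splitting described above; capping `n` trades some variance-reduction quality for parallel efficiency.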
Efficient MATLAB computations with sparse and factored tensors.
Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)
2006-12-01
In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
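The coordinate (COO) storage scheme can be illustrated with a small tensor-times-vector kernel, one of the operations typical of the decomposition algorithms mentioned above. This sketch uses plain Python rather than the Tensor Toolbox's MATLAB implementation:

```python
from collections import defaultdict

def ttv(coords, vals, vec, mode):
    """Multiply a sparse tensor stored in coordinate (COO) format by a
    vector along one mode; the result is a sparse tensor of one lower
    order, itself kept in COO form. Cost is linear in the nonzero count."""
    out = defaultdict(float)
    for idx, v in zip(coords, vals):
        rest = idx[:mode] + idx[mode + 1:]   # drop the contracted index
        out[rest] += v * vec[idx[mode]]
    return dict(out)

# A 3-way sparse tensor: 3 nonzeros out of 2*2*2 = 8 entries.
coords = [(0, 0, 1), (1, 0, 1), (1, 1, 0)]
vals = [2.0, 3.0, 5.0]
vec = [10.0, 1.0]

print(ttv(coords, vals, vec, mode=2))
# e.g. output entry (1, 1) accumulates 5.0 * vec[0] = 50.0
```

Only the nonzeros are ever touched, which is the point of the coordinate format when the vast majority of elements are zero.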
NASA Astrophysics Data System (ADS)
Gulani, Vikas; Weber, Thomas; Neuberger, Thomas; Webb, Andrew G.
2005-12-01
In high-field NMR microscopy rapid single-shot imaging methods, for example, echo planar imaging, cannot be used for determination of the apparent diffusion tensor (ADT) due to large magnetic susceptibility effects. We propose a pulse sequence in which a diffusion-weighted spin-echo is followed by multiple gradient-echoes with additional diffusion weighting. These additional echoes can be used to calculate the ADT and T2∗ maps. We show here that this results in modest but consistent improvements in the accuracy of ADT determination within a given total data acquisition time. The method is tested on excised, chemically fixed rat spinal cords.
NASA Astrophysics Data System (ADS)
Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John
2014-04-01
Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error, the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools, since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process, specifically on defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and to actual wafer prints.
Evaluating cost-efficiency and accuracy of hunter harvest survey designs
Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.
2011-01-01
Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
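The recommended calibration idea, reweighting respondents by the proportion responding in each covariate class, reduces in its simplest post-stratified form to a sketch like the following (the function name, classes, and numbers are illustrative, not from the study):

```python
# Hedged sketch of post-stratification: weight each respondent in covariate
# class h by N_h / n_resp_h, so every class counts in proportion to its known
# population size, then sum to an estimated total harvest.
def poststratified_total(strata):
    """strata: list of (N_h, responses_h), where N_h is the number of hunters
    in class h and responses_h the reported harvests of those who answered."""
    total = 0.0
    for N_h, responses in strata:
        if responses:  # skip classes with no respondents
            total += N_h * sum(responses) / len(responses)
    return total

# Two classes: 1000 hunters with mean reported harvest 0.5,
# 200 hunters with mean reported harvest 1.0.
est = poststratified_total([(1000, [0, 1, 1, 0]), (200, [1, 1])])
print(est)  # 700.0
```

This is the textbook estimator; the paper's calibration estimator generalizes the same reweighting across several response covariates at once.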
Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing
Hampton, Scott S; Agarwal, Pratul K
2010-05-01
Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGA) combined with their low power consumption make them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to 5.5 fold speed-up on the non-bonded force computations of the particle mesh Ewald method and up to 2.2 fold speed-up in overall time-to-solution, and potentially an increase by a factor of 9 in power-performance efficiencies for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.
Gou, Zhenkun; Kuznetsov, Igor B.
2009-01-01
Methods for computational inference of DNA-binding residues in DNA-binding proteins are usually developed using classification techniques trained to distinguish between binding and non-binding residues on the basis of known examples observed in experimentally determined high-resolution structures of protein-DNA complexes. What degree of accuracy can be expected when a computational method is applied to a particular novel protein remains largely unknown. We test the utility of classification methods on the example of Kernel Logistic Regression (KLR) predictors of DNA-binding residues. We show that predictors that utilize sequence properties of proteins can successfully predict DNA-binding residues in proteins from a novel structural class. We use Multiple Linear Regression (MLR) to establish a quantitative relationship between protein properties and the expected accuracy of KLR predictors. Present results indicate that in the case of novel proteins the expected accuracy provided by an MLR model is close to the actual accuracy and can be used to assess the overall confidence of the prediction. PMID:20209034
Benighaus, Tobias; Thiel, Walter
2008-10-14
We report the implementation of the generalized solvent boundary potential (GSBP) [Im, W., Bernèche, S., and Roux, B., J. Chem. Phys. 2001, 114, 2924] in the framework of semiempirical hybrid quantum mechanical/molecular mechanical (QM/MM) methods. Application of the GSBP is connected with a significant overhead that is dominated by numerical solutions of the Poisson-Boltzmann equation for continuous charge distributions. Three approaches are presented that accelerate computation of the values at the boundary of the simulation box and in the interior of the macromolecule and solvent. It is shown that these methods reduce the computational overhead of the GSBP significantly with only minimal loss of accuracy. The accuracy of the GSBP to represent long-range electrostatic interactions is assessed for an extensive set of its inherent parameters, and a set of optimal parameters is defined. On this basis, the overhead and the savings of the GSBP are quantified for model systems of different sizes in the range of 7000 to 40 000 atoms. We find that the savings compensate for the overhead in systems larger than 12 500 atoms. Beyond this system size, the GSBP reduces the computational cost significantly, by 70% and more for large systems (>25 000 atoms). PMID:26620166
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
Improving robustness and computational efficiency using modern C++
NASA Astrophysics Data System (ADS)
Paterno, M.; Kowalkowski, J.; Green, C.
2014-06-01
For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
NASA Astrophysics Data System (ADS)
Schubert, J. E.; Sanders, B. F.
2011-12-01
Urban landscapes are at the forefront of current research efforts in the field of flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance and as such it is imperative to avoid or minimize future damages. Secondly, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. In reality urban landscapes are particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have yielded fine-resolution data layers that offer a site characterization suitable for urban inundation modeling, including a description of preferential flow paths, drainage networks, and surface-dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow including within-curb flows and over-curb flows across developed parcels. Studies have focused on mesh design and parameterization, and sub-grid models that promise improved performance relative to accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies including building-resistance, building-block, and building-hole are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages
Timmons, Adela C; Preacher, Kristopher J
2015-01-01
The timing (spacing) of assessments is an important component of longitudinal research. The purpose of the present study is to determine methods of timing the collection of longitudinal data that provide better parameter recovery in mixed effects nonlinear growth modeling. A simulation study was conducted, varying function type, as well as the number of measurement occasions, in order to examine the effect of timing on the accuracy and efficiency of parameter estimates. The number of measurement occasions was associated with greater efficiency for all functional forms and was associated with greater accuracy for the intrinsically nonlinear functions. In general, concentrating measurement occasions toward the left or at the extremes was associated with increased efficiency when estimating the intercepts of intrinsically linear functions, and concentrating values where the curvature of the function was greatest generally resulted in the best recovery for intrinsically nonlinear functions. Results from this study can be used in conjunction with theory to improve the design of longitudinal research studies. In addition, an R program is provided for researchers to run customized simulations to identify optimal sampling schedules for their own research.
Experiences With Efficient Methodologies for Teaching Computer Programming to Geoscientists
NASA Astrophysics Data System (ADS)
Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.
2016-08-01
Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students with little or no computing background, is well-known to be a difficult task. However, there is also a wealth of evidence-based teaching practices for teaching programming skills which can be applied to greatly improve learning outcomes and the student experience. Adopting these practices naturally gives rise to greater learning efficiency - this is critical if programming is to be integrated into an already busy geoscience curriculum. This paper considers an undergraduate computer programming course, run during the last 5 years in the Department of Earth Science and Engineering at Imperial College London. The teaching methodologies that were used each year are discussed alongside the challenges that were encountered, and how the methodologies affected student performance. Anonymised student marks and feedback are used to highlight this, and also how the adjustments made to the course eventually resulted in a highly effective learning environment.
Exploiting stoichiometric redundancies for computational efficiency and network reduction.
Ingalls, Brian P; Bembenek, Eric
2015-01-01
Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort.
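The balance equation referred to above states that a steady-state flux vector v satisfies S v = 0, i.e. it lies in the null space of the stoichiometry matrix S. A minimal numpy sketch on a toy three-reaction chain (illustrative only, not the paper's reduction method):

```python
import numpy as np

# Toy network (hypothetical): exchange in -> A -> B -> exchange out.
# Rows are internal species A and B; columns are reactions v1, v2, v3:
#   v1: -> A,   v2: A -> B,   v3: B ->
S = np.array([
    [1.0, -1.0,  0.0],   # A: produced by v1, consumed by v2
    [0.0,  1.0, -1.0],   # B: produced by v2, consumed by v3
])

# Steady-state fluxes satisfy S @ v = 0. The null space can be read off the
# right singular vectors associated with zero singular values.
_, _, Vt = np.linalg.svd(S)
v = Vt[-1]               # basis vector for the 1-D null space

# For this chain every steady-state flux distribution has v1 = v2 = v3,
# i.e. what flows in must flow through and out.
```

Elementary mode analysis, which the paper targets for large networks, enumerates the non-decomposable nonnegative solutions of this same system.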
Adding computationally efficient realism to Monte Carlo turbulence simulation
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1985-01-01
Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
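The filtering idea, driving a stable explicit difference equation with white noise so that its rational transfer function approximates the turbulence spectrum, can be sketched as follows. This is a first-order, Dryden-like example with illustrative parameter values, not the paper's cross-spectral formulation:

```python
import numpy as np

# Hedged sketch: generate a turbulence velocity record along the flight path
# by passing Gaussian white noise through an explicit first-order difference
# equation (discrete AR(1) filter). The filter's rational spectrum
# approximates a Dryden-like form with intensity sigma, length scale L,
# airspeed V, and time step dt -- all assumed, illustrative parameters.
def turbulence_series(n, sigma=1.0, L=50.0, V=100.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    a = np.exp(-V * dt / L)             # stable pole: 0 < a < 1
    b = sigma * np.sqrt(1.0 - a * a)    # sets stationary variance to sigma^2
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    for k in range(1, n):
        x[k] = a * x[k - 1] + b * rng.standard_normal()
    return x

gust = turbulence_series(100_000)   # sample std is close to sigma
```

Extending this to three-dimensional realism, as the paper proposes, means adding cross-spectral filters so that gusts decorrelate across the vehicle span, not just along the flight path.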
Meyer, Juergen. E-mail: juergen.meyer@canterbury.ac.nz; Wilbert, Juergen; Baier, Kurt; Guckenberger, Matthias; Richter, Anne; Sauer, Otto; Flentje, Michael
2007-03-15
Purpose: To scrutinize the positioning accuracy and reproducibility of a commercial hexapod robot treatment table (HRTT) in combination with a commercial cone-beam computed tomography system for image-guided radiotherapy (IGRT). Methods and Materials: The mechanical stability of the X-ray volume imaging (XVI) system was tested in terms of reproducibility and with a focus on the moveable parts, i.e., the influence of kV panel and the source arm on the reproducibility and accuracy of both bone and gray value registration using a head-and-neck phantom. In consecutive measurements the accuracy of the HRTT for translational, rotational, and a combination of translational and rotational corrections was investigated. The operational range of the HRTT was also determined and analyzed. Results: The system performance of the XVI system alone was very stable with mean translational and rotational errors of below 0.2 mm and below 0.2°, respectively. The mean positioning accuracy of the HRTT in combination with the XVI system summarized over all measurements was below 0.3 mm and below 0.3° for translational and rotational corrections, respectively. The gray value match was more accurate than the bone match. Conclusion: The XVI image acquisition and registration procedure were highly reproducible. Both translational and rotational positioning errors can be corrected very precisely with the HRTT. The HRTT is therefore well suited to complement cone-beam computed tomography to take full advantage of position correction in six degrees of freedom for IGRT. The combination of XVI and the HRTT has the potential to improve the accuracy of high-precision treatments.
Francisco, Juan Carlos; Cohan, Frederick M; Krizanc, Danny
2014-01-01
Identification of closely related, ecologically distinct populations of bacteria would benefit microbiologists working in many fields including systematics, epidemiology and biotechnology. Several laboratories have recently developed algorithms aimed at demarcating such 'ecotypes'. We examine the ability of four of these algorithms to correctly identify ecotypes from sequence data. We tested the algorithms on synthetic sequences, with known history and habitat associations, generated under the stable ecotype model and on data from Bacillus strains isolated from Death Valley where previous work has confirmed the existence of multiple ecotypes. We found that one of the algorithms (ecotype simulation) performs significantly better than the others (AdaptML, GMYC, BAPS) in both instances. Unfortunately, it was also shown to be the least efficient of the four. While ecotype simulation is the most accurate, it is by a large margin the slowest of the algorithms tested. Attempts at improving its efficiency are underway.
Computationally efficient strategies to perform anomaly detection in hyperspectral images
NASA Astrophysics Data System (ADS)
Rossi, Alessandro; Acito, Nicola; Diani, Marco; Corsini, Giovanni
2012-11-01
In remote sensing, hyperspectral sensors are effectively used for target detection and recognition because of their high spectral resolution, which allows discrimination of different materials in the sensed scene. When a priori information about the spectrum of the targets of interest is not available, target detection turns into anomaly detection (AD), i.e. searching for objects that are anomalous with respect to the scene background. In the field of AD, anomalies can be generally associated with observations that statistically deviate from the background clutter, the latter being intended as a local neighborhood surrounding the observed pixel or as a large part of the image. In this context, much effort has been devoted to reducing the computational load of AD algorithms so as to furnish information for real-time decision making. In this work, a sub-class of AD methods is considered that aims at detecting small rare objects that are anomalous with respect to their local background. Such techniques not only are characterized by mathematical tractability but also allow the design of real-time strategies for AD. Within these methods, one of the most established anomaly detectors is the RX algorithm, which is based on a local Gaussian model for background modeling. In the literature, the RX decision rule has been employed to develop computationally efficient algorithms implemented in real-time systems. In this work, a survey of computationally efficient methods to implement the RX detector is presented, where advanced algebraic strategies are exploited to speed up the estimate of the covariance matrix and of its inverse. The comparison of the overall number of operations required by the different implementations of the RX algorithms is given and discussed by varying the RX parameters in order to show the computational improvements achieved with the introduced algebraic strategy.
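The RX decision rule mentioned above scores each pixel by its Mahalanobis distance from the estimated background. A minimal global-RX sketch follows (illustrative; the surveyed real-time variants accelerate precisely the covariance estimation and inversion steps shown here, and local variants re-estimate the statistics in a sliding window):

```python
import numpy as np

# Hedged sketch of the (global) RX anomaly detector: under a Gaussian clutter
# model, score each pixel x by (x - mu)^T Sigma^{-1} (x - mu), where mu and
# Sigma are the background mean and covariance estimated from the image.
def rx_scores(cube):
    """cube: (rows, cols, bands) hyperspectral image; returns a score map."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    # Covariance estimation and inversion dominate the cost: O(n*b^2 + b^3).
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # Mahalanobis distances
    return scores.reshape(h, w)
```

Pixels whose score exceeds a chosen threshold are declared anomalous; the threshold trades detection rate against false alarms.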
Chai, Rifai; Tran, Yvonne; Craig, Ashley; Ling, Sai Ho; Nguyen, Hung T
2014-01-01
A system using electroencephalography (EEG) signals could enhance the detection of mental fatigue while driving a vehicle. This paper examines the classification between fatigue and alert states using an autoregressive (AR) model-based power spectral density (PSD) as the feature extraction method and fuzzy particle swarm optimization with cross-mutated artificial neural network (FPSOCM-ANN) as the classification method. Using 32 EEG channels, results indicated an improved overall specificity from 76.99% to 82.02%, an improved sensitivity from 74.92% to 78.99%, and an improved accuracy from 75.95% to 80.51% when compared to previous studies. The classification using fewer EEG channels, with eleven frontal sites, resulted in 77.52% specificity, 73.78% sensitivity, and 75.65% accuracy being achieved. For ergonomic reasons, the configuration with fewer EEG channels will enhance capacity to monitor fatigue as there is less set-up time required. PMID:25570210
On the Accuracy of Double Scattering Approximation for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Marshak, Alexander L.
2011-01-01
Interpretation of multi-angle spectro-polarimetric data in remote sensing of atmospheric aerosols requires fast and accurate methods of solving the vector radiative transfer equation (VRTE). The single and double scattering approximations could provide an analytical framework for the inversion algorithms and are relatively fast; however, accuracy assessments of these approximations for aerosol atmospheres in the atmospheric window channels have been missing. This paper provides such an analysis for a vertically homogeneous aerosol atmosphere with weak and strong asymmetry of scattering. In both cases, the double scattering approximation gives a high-accuracy result (relative error approximately 0.2%) only at low optical depth (~10^-2). As the error rapidly grows with optical thickness, a full VRTE solution is required for practical remote sensing analysis. It is shown that the scattering anisotropy is not important at low optical thicknesses for either reflected or transmitted polarization components of radiation.
Efficient quantum algorithm for computing n-time correlation functions.
Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E
2014-07-11
We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
Efficient parallel global garbage collection on massively parallel computers
Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori
1994-12-31
On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient Garbage Collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) admits minimum pause time of ongoing computations, and (4) has been shown to scale up to 1024-node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods for confirming the arrival of pending messages are used: one counts the numbers of messages and the other uses network 'bulldozing'. Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, the Fujitsu AP1000, reveals various favorable properties of the algorithm.
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern
Study of ephemeris accuracy of the minor planets. [using computer based data systems
NASA Technical Reports Server (NTRS)
Brooks, D. R.; Cunningham, L. E.
1974-01-01
The current state of minor planet ephemerides was assessed, and the means for providing and updating these ephemerides for use by both the mission planner and the astronomer were developed. A system of obtaining data for all the numbered minor planets was planned, and computer programs for its initial mechanization were developed. The computer-based system furnishes the osculating elements for all of the numbered minor planets at an adopted date of October 10, 1972, and at every 400-day interval over the years of interest. It also furnishes the perturbations in the rectangular coordinates relative to the osculating elements at every 4-day interval. Another computer program was designed and developed to integrate the perturbed motion of a group of 50 minor planets simultaneously. Sampled data resulting from the operation of the computer-based systems are presented.
[Techniques to enhance the accuracy and efficiency of injections of the face in aesthetic medicine].
Manfrédi, P-R; Hersant, B; Bosc, R; Noel, W; Meningaud, J-P
2016-02-01
The common principle of injections in esthetic medicine is to treat and to prevent the signs of aging with minimal doses and with more precision and efficiency. This relies on functional, histological, ultrasound or electromyographic analysis of the soft tissues and of the mechanisms of facial skin aging (fine lines, wrinkles, hollows). These injections may be done with hyaluronic acid (HA) and botulinum toxin. The aim of this technical note was to present four delivery techniques allowing for more precision and lower doses of product. The techniques of "vacuum", "interpores" and "blanching" will be addressed for HA injection, and the concept of "Face Recurve" for botulinum toxin injection.
Hsu, Sam Sheng-Pin; Gateno, Jaime; Bell, R. Bryan; Hirsch, David L.; Markiewicz, Michael R.; Teichgraeber, John F.; Zhou, Xiaobo; Xia, James J.
2012-01-01
Purpose The purpose of this prospective multicenter study was to assess the accuracy of a computer-aided surgical simulation (CASS) protocol for orthognathic surgery. Materials and Methods The accuracy of the CASS protocol was assessed by comparing planned and postoperative outcomes of 65 consecutive patients enrolled from 3 centers. Computer-generated surgical splints were used for all patients. For the genioplasty, one center utilized computer-generated chin templates to reposition the chin segment only for patients with asymmetry. Standard intraoperative measurements were utilized without the chin templates for the remaining patients. The primary outcome measurements were linear and angular differences for the maxilla, mandible and chin when the planned and postoperative models were registered at the cranium. The secondary outcome measurements were: maxillary dental midline difference between the planned and postoperative positions; and linear and angular differences of the chin segment between the groups with and without the use of the template. The latter was measured when the planned and postoperative models were registered at the mandibular body. Statistical analyses were performed, and the accuracy was reported using root mean square deviation (RMSD) and Bland and Altman's method for assessing measurement agreement. Results In the primary outcome measurements, there was no statistically significant difference among the 3 centers for the maxilla and mandible. The largest RMSD was 1.0 mm and 1.5° for the maxilla, and 1.1 mm and 1.8° for the mandible. For the chin, there was a statistically significant difference between the groups with and without the use of the chin template. The chin template group showed excellent accuracy, with a largest positional RMSD of 1.0 mm and a largest orientational RMSD of 2.2°. However, larger variances were observed in the group not using the chin template. This was significant in anteroposterior and superoinferior directions, as in
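The two agreement statistics used in this study, RMSD and Bland-Altman limits of agreement, can be sketched as follows. This is a minimal, generic illustration of the statistics themselves, not the study's data or software:

```python
import numpy as np

def rmsd(planned, observed):
    """Root mean square deviation between planned and observed measurements."""
    d = np.asarray(observed, float) - np.asarray(planned, float)
    return float(np.sqrt(np.mean(d ** 2)))

def bland_altman_limits(planned, observed):
    """Bland-Altman bias and 95% limits of agreement (bias +/- 1.96 SD)."""
    d = np.asarray(observed, float) - np.asarray(planned, float)
    bias = float(d.mean())
    sd = float(d.std(ddof=1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Fed with paired planned/postoperative displacements (in mm or degrees), these return the same style of summary numbers reported in the abstract.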
Koethe, Yilun; Xu, Sheng; Velusamy, Gnanasekar; Wood, Bradford J.; Venkatesan, Aradhana M.
2014-01-01
Objective To compare the accuracy of a robotic interventional radiologist (IR) assistance platform with a standard freehand technique for computed-tomography (CT)-guided biopsy and simulated radiofrequency ablation (RFA). Methods The accuracy of freehand single-pass needle insertions into abdominal phantoms was compared with insertions facilitated with the use of a robotic assistance platform (n = 20 each). Post-procedural CTs were analysed for needle placement error. Percutaneous RFA was simulated by sequentially placing five 17-gauge needle introducers into 5-cm diameter masses (n = 5) embedded within an abdominal phantom. Simulated ablations were planned based on pre-procedural CT, before multi-probe placement was executed freehand. Multi-probe placement was then performed on the same 5-cm mass using the ablation planning software and robotic assistance. Post-procedural CTs were analysed to determine the percentage of untreated residual target. Results Mean needle tip-to-target errors were reduced with use of the IR assistance platform (both P < 0.0001). Reduced percentage residual tumour was observed with treatment planning (P = 0.02). Conclusion Improved needle accuracy and optimised probe geometry are observed during simulated CT-guided biopsy and percutaneous ablation with use of a robotic IR assistance platform. This technology may be useful for clinical CT-guided biopsy and RFA, when accuracy may have an impact on outcome. PMID:24220755
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency was achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
Frezza-Buet, Hervé
2014-12-01
This paper presents a vector quantization process that can be applied online to a stream of inputs. It makes it possible to set up and maintain a dynamic representation of the current information in the stream as a topology-preserving graph of prototypical values, as well as a velocity field. The algorithm relies on a formulation of the accuracy of the quantization process that allows for both the updating of the number of prototypes according to the stream evolution and the stabilization of the representation from which velocities can be extracted. A video processing application is presented. PMID:25248032
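The core idea of accuracy-driven online vector quantization can be sketched as follows: prototypes track the stream by nearest-winner updates, and a new prototype is added when the quantization error exceeds a threshold. This is a minimal illustrative sketch only; the paper's algorithm additionally maintains a topology-preserving graph and a velocity field, both omitted here:

```python
import numpy as np

class OnlineVQ:
    """Minimal online vector quantizer: prototypes follow a stream,
    and the codebook grows when the quantization error is too large."""

    def __init__(self, threshold=1.0, lr=0.1):
        self.threshold = threshold   # quantization-accuracy bound
        self.lr = lr                 # winner learning rate
        self.prototypes = []

    def update(self, x):
        """Present one stream sample; return the index of its prototype."""
        x = np.asarray(x, dtype=float)
        if not self.prototypes:
            self.prototypes.append(x.copy())
            return 0
        dists = [np.linalg.norm(x - p) for p in self.prototypes]
        k = int(np.argmin(dists))
        if dists[k] > self.threshold:
            # Accuracy violated: grow the codebook with a new prototype.
            self.prototypes.append(x.copy())
            return len(self.prototypes) - 1
        # Otherwise move the winner toward the sample.
        self.prototypes[k] += self.lr * (x - self.prototypes[k])
        return k
```

The threshold plays the role of the accuracy formulation in the abstract: it governs when the number of prototypes changes versus when the existing representation is merely refined.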
Efficient Computation of the Topology of Level Sets
Pascucci, V; Cole-McLaughlin, K
2002-07-19
This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to pre-process the domain mesh to allow optimal computation of isosurfaces with minimal storage overhead. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. In the first part of the paper we present a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus we improve the time complexity of our previous approach [8] from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F. In the second part of the paper we introduce a new divide-and-conquer algorithm that computes the Augmented Contour Tree for scalar fields defined on rectilinear grids. The central part of the scheme computes the output contour tree by merging two intermediate contour trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher-order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. This allows, for the first time, computation of the Contour Tree in linear time in many practical cases, when t = O(n^(1-ε)). We report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.
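The sweep-and-merge construction underlying contour tree algorithms can be illustrated with the standard union-find join tree: vertices are swept from high to low value, and a tree arc is recorded whenever a vertex joins previously separate components. This is a generic sketch of the textbook technique on an arbitrary graph, not the paper's augmented or divide-and-conquer algorithm:

```python
class DisjointSet:
    """Union-find with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def join_tree(values, adjacency):
    """Join tree of a scalar field on a graph: sweep vertices from high
    to low value; whenever a vertex connects previously separate
    components, record an arc from each component's lowest node to it."""
    n = len(values)
    dsu = DisjointSet(n)
    lowest = {}                       # component root -> lowest node so far
    arcs = []
    processed = set()
    for v in sorted(range(n), key=lambda i: -values[i]):
        lowest[dsu.find(v)] = v       # v starts as its own component
        for u in adjacency[v]:
            if u in processed and dsu.find(u) != dsu.find(v):
                arcs.append((lowest[dsu.find(u)], v))  # components merge at v
                dsu.union(u, v)
                lowest[dsu.find(v)] = v
        processed.add(v)
    return arcs
```

Running the symmetric sweep from low to high values yields the split tree, and merging the two trees gives the contour tree.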
Computationally efficient implementation of combustion chemistry in parallel PDF calculations
NASA Astrophysics Data System (ADS)
Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.
2009-08-01
In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
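The difference between the distribution strategies can be sketched with a toy task-assignment function. This is only an illustration of the PLP and URAN policies named above; the real x2f_mpi library redistributes chemistry calculations between processors via message passing:

```python
import random

def distribute(tasks, n_procs, strategy="PLP", owner=None):
    """Toy sketch of two ISAT work-distribution strategies.
    PLP: purely local processing -- each chemistry task stays on the
    processor that owns it (no communication, possible load imbalance).
    URAN: uniformly random redistribution to balance the load."""
    bins = [[] for _ in range(n_procs)]
    for i, task in enumerate(tasks):
        if strategy == "PLP":
            bins[owner[i]].append(task)                    # stay local
        elif strategy == "URAN":
            bins[random.randrange(n_procs)].append(task)   # balance load
        else:
            raise ValueError("unknown strategy")
    return bins
```

The adaptive strategy described in the abstract would choose among such policies at run time, depending on available memory and table overlap between processors.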
Fine, Jeffrey J; Hopkins, Christie B; Ruff, Nicol; Newton, F Carter
2006-01-15
Cardiovascular computed tomography (CVCT) with the recently released 64-slice technology increases spatial resolution and decreases acquisition times and slice thickness. We investigated the accuracy of 64-slice CVCT in relation to catheter angiography. We studied 66 sequential subjects who underwent 64-slice CVCT and catheter angiography within 30 days. Accuracy results were 94% for interpretable images, 95% for sensitivity, 96% for specificity, 97% for positive predictive value, and 92% for negative predictive value for lesions with >50% stenosis. We found 100% agreement between 64-slice CVCT and catheterization among vein graft evaluations (9 of 9). These metrics are vastly improved from the 16-slice generation and support 64-slice CVCT as a reliable diagnostic tool.
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
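For contrast with the monolithic approach, the classical predictor-corrector baseline can be sketched on a convex homotopy H(x, λ) = (1 - λ)G(x) + λF(x): the predictor reuses the previous solution and the corrector runs a few Newton iterations at each λ. This is a minimal generic sketch, not the thesis's flow-solver implementation:

```python
import numpy as np

def continuation(F, G, jac_H, x0, steps=20, newton_iters=5):
    """Classical predictor-corrector homotopy continuation for
    H(x, lam) = (1 - lam) * G(x) + lam * F(x), marching lam from 0 to 1.
    Predictor: previous solution; corrector: Newton on H at fixed lam."""
    x = np.asarray(x0, dtype=float)
    for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(newton_iters):
            H = (1 - lam) * G(x) + lam * F(x)
            J = jac_H(x, lam)
            x = x - np.linalg.solve(J, H)
    return x
```

A monolithic scheme, by contrast, folds the λ advance into the Newton update itself, avoiding the over-solving visible in the inner corrector loop above.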
Shirley Reduced Basis DFT: plane-wave generality and accuracy at reduced computational cost
NASA Astrophysics Data System (ADS)
Hutchinson, Maxwell; Prendergast, David
2014-03-01
The Shirley Reduced Basis (SRB) provides a means for performing density functional theory electronic structure calculations with plane-wave accuracy and generality in a basis of significantly reduced size. The SRB comprises linear combinations of periodic Bloch functions sampled coarsely over the Brillouin zone (BZ) and selected for maximal information content using proper orthogonal decomposition [E. Shirley, Phys. Rev. B 54, 464 (1996)]. A basis produced from only order 10 samples, lying on the BZ boundary, is able to reproduce energies and stresses to sub-meV and sub-kbar accuracy, respectively, with order 10 basis functions per electronic band. Unlike other electronic structure bases of similar sizes, the SRB is adaptive and automatic, making no model assumptions beyond the use of pseudopotentials. We provide the first self-consistent implementation of this approach, enabling both relaxations and molecular dynamics. We demonstrate the usefulness of the method on a variety of physical systems, from crystalline solids to reduced-dimensional systems under periodic boundary conditions, realizing order-of-magnitude performance improvements while remaining within physically relevant error tolerances. M.H. acknowledges support from the DoE CSGF Program, Grant No. DE-FG02-97ER25308. Work by D.P. was performed at the Molecular Foundry, supported by the Office of Science, Office of Basic Energy Sciences, DoE under Contract No. DE-AC02-05CH11231.
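The proper orthogonal decomposition step at the heart of the SRB construction can be sketched generically with the SVD: stack the sampled states as columns, then keep the dominant left singular vectors as an orthonormal reduced basis. This is a simplified generic POD sketch; the actual method applies it to Bloch functions sampled over the BZ:

```python
import numpy as np

def pod_basis(samples, tol=1e-8):
    """Proper orthogonal decomposition: build an orthonormal reduced
    basis spanning the given sample vectors, keeping only singular
    vectors whose singular value exceeds tol relative to the largest."""
    A = np.column_stack(samples)                 # one column per sample
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]                        # discard redundant modes
    return U[:, keep]
```

Linearly dependent samples are automatically compressed away, which is how "order 10 samples" can yield a basis far smaller than a plane-wave set.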
Ganguly, R; Ruprecht, A; Vincent, S; Hellstein, J; Timmons, S; Qian, F
2011-01-01
Objectives The aim of this study was to determine the geometric accuracy of cone beam CT (CBCT)-based linear measurements of bone height obtained with the Galileos CBCT (Sirona Dental Systems Inc., Bensheim, Hessen, Germany) in the presence of soft tissues. Methods Six embalmed cadaver heads were imaged with the Galileos CBCT unit subsequent to placement of radiopaque fiduciary markers over the buccal and lingual cortical plates. Electronic linear measurements of bone height were obtained using the Sirona software. Physical measurements were obtained with digital calipers at the same location. This distance was compared on all six specimens bilaterally to determine accuracy of the image measurements. Results The findings showed no statistically significant difference between the imaging and physical measurements (P > 0.05) as determined by a paired sample t-test. The intraclass correlation was used to measure the intrarater reliability of repeated measures and there was no statistically significant difference between measurements performed at the same location (P > 0.05). Conclusions The Galileos CBCT image-based linear measurement between anatomical structures within the mandible in the presence of soft tissues is sufficiently accurate for clinical use. PMID:21697155
Eskandarloo, Amir; Asl, Amin Mahdavi; Jalalzadeh, Mohsen; Tayari, Maryam; Hosseinipanah, Mohammad; Fardmal, Javad; Shokri, Abbas
2016-01-01
Accurate and early diagnosis of vertical root fractures (VRFs) is imperative to prevent extensive bone loss and unnecessary endodontic and prosthodontic treatments. The aim of this study was to assess the effect of time lapse on the diagnostic accuracy of cone beam computed tomography (CBCT) for VRFs in endodontically treated dogs' teeth. Forty-eight incisors and premolars of three adult male dogs underwent root canal therapy. The teeth were assigned to two groups: VRFs were artificially induced in the first group (n=24) while the teeth in the second group remained intact (n=24). The CBCT scans were obtained by a NewTom 3G unit immediately after inducing VRFs and after one, two, three, four, eight, 12 and 16 weeks. Three oral and maxillofacial radiologists blinded to the date of radiographs assessed the presence/absence of VRFs on CBCT scans. The sensitivity, specificity and accuracy values were calculated and data were analyzed using SPSS v.16 software and ANOVA. The total accuracy of detection of VRFs immediately after surgery and after one, two, three, four, eight, 12 and 16 weeks was 67.3%, 68.7%, 66.6%, 64.6%, 64.5%, 69.4%, 68.7% and 68%, respectively. The effect of time lapse on detection of VRFs was not significant (p>0.05). Overall sensitivity, specificity and accuracy of CBCT for detection of VRFs were 74.3%, 62.2% and 67.2%, respectively. Cone beam computed tomography is a valuable tool for detection of VRFs. Time lapse (four months) had no effect on detection of VRFs on CBCT scans. PMID:27007339
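The sensitivity, specificity, and accuracy values reported in studies like this derive from a standard 2x2 contingency table. A minimal sketch of the definitions, using hypothetical counts rather than this study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and overall accuracy from a 2x2 table
    of true/false positives and negatives."""
    sensitivity = tp / (tp + fn)              # fraction of fractures found
    specificity = tn / (tn + fp)              # fraction of intact teeth cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```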
Moradi, Mahmoud; Tajkhorshid, Emad
2014-07-01
Characterizing large-scale structural transitions in biomolecular systems poses major technical challenges to both experimental and computational approaches. On the computational side, efficient sampling of the configuration space along the transition pathway remains the most daunting challenge. Recognizing this issue, we introduce a knowledge-based computational approach toward describing large-scale conformational transitions using (i) nonequilibrium, driven simulations combined with work measurements and (ii) free energy calculations using empirically optimized biasing protocols. The first part is based on designing mechanistically relevant, system-specific reaction coordinates whose usefulness and applicability in inducing the transition of interest are examined using knowledge-based, qualitative assessments along with nonequilibrium work measurements which provide an empirical framework for optimizing the biasing protocol. The second part employs the optimized biasing protocol resulting from the first part to initiate free energy calculations and characterize the transition quantitatively. Using a biasing protocol fine-tuned to a particular transition not only improves the accuracy of the resulting free energies but also speeds up the convergence. The efficiency of the sampling will be assessed by employing dimensionality reduction techniques to help detect possible flaws and provide potential improvements in the design of the biasing protocol. Structural transition of a membrane transporter will be used as an example to illustrate the workings of the proposed approach.
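One standard way to turn nonequilibrium work measurements from driven simulations into a free-energy difference is the Jarzynski equality, ΔF = -(1/β) ln⟨exp(-βW)⟩. This is a generic sketch of that estimator, offered as related background; the abstract does not state which work-based estimator the authors use:

```python
import numpy as np

def jarzynski_free_energy(works, beta=1.0):
    """Free-energy difference from a set of nonequilibrium work values
    via the Jarzynski equality: dF = -(1/beta) * ln(mean(exp(-beta*W)))."""
    works = np.asarray(works, dtype=float)
    return float(-np.log(np.mean(np.exp(-beta * works))) / beta)
```

Because the exponential average is dominated by rare low-work trajectories, such estimates converge faster when the biasing protocol keeps the dissipated work small, which is one motivation for protocol optimization.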
Porterfield, Amber; Engelbert, Kate; Coustasse, Alberto
2014-01-01
Electronic prescribing (e-prescribing) is an important part of the nation's push to enhance the safety and quality of the prescribing process. E-prescribing allows providers in the ambulatory care setting to send prescriptions electronically to the pharmacy and can be a stand-alone system or part of an integrated electronic health record system. The methodology for this study followed the basic principles of a systematic review. A total of 47 sources were referenced. Results of this research study suggest that e-prescribing reduces prescribing errors, increases efficiency, and helps to save on healthcare costs. Medication errors have been reduced to as little as a seventh of their previous level, and cost savings due to improved patient outcomes and decreased patient visits are estimated to be between $140 billion and $240 billion over 10 years for practices that implement e-prescribing. However, there have been significant barriers to implementation including cost, lack of provider support, patient privacy, system errors, and legal issues.
Efficient Universal Computing Architectures for Decoding Neural Activity
Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul
2012-01-01
The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient
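The counting-only decoding style described above can be illustrated with a toy sketch: each output unit accumulates weighted spike counts through repeated addition (no multiplication) and thresholds the result, in the spirit of an integrate-and-fire network. This is an illustrative simplification, not the paper's exact architecture:

```python
def if_decode(spike_counts, weights, threshold):
    """Counting-only decoder sketch: each output unit accumulates input
    spike counts by repeated addition and emits 1 once its accumulator
    reaches threshold -- no multiplication required."""
    outputs = []
    for w in weights:                 # one weight row per output unit
        acc = 0
        for count, wi in zip(spike_counts, w):
            for _ in range(count):    # add the weight once per spike
                acc += wi
        outputs.append(1 if acc >= threshold else 0)
    return outputs
```

In hardware, the inner loops collapse into counters and comparators, which is what keeps the per-second operation budget so small.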
Kamomae, Takeshi; Monzen, Hajime; Nakayama, Shinichi; Mizote, Rika; Oonishi, Yuuichi; Kaneshige, Soichiro; Sakamoto, Takashi
2015-01-01
Movement of the target object during cone-beam computed tomography (CBCT) leads to motion blurring artifacts. The accuracy of manual image matching in image-guided radiotherapy depends on the image quality. We aimed to assess the accuracy of target position localization using free-breathing CBCT during stereotactic lung radiotherapy. The Vero4DRT linear accelerator device was used for the examinations. Reference point discrepancies between the MV X-ray beam and the CBCT system were calculated using a phantom device with a centrally mounted steel ball. The precision of manual image matching between the CBCT and the averaged intensity (AI) images restructured from four-dimensional CT (4DCT) was estimated with a respiratory motion phantom, as determined in evaluations by five independent operators. Reference point discrepancies between the MV X-ray beam and the CBCT image-guidance systems, categorized as left-right (LR), anterior-posterior (AP), and superior-inferior (SI), were 0.33 ± 0.09, 0.16 ± 0.07, and 0.05 ± 0.04 mm, respectively. The LR, AP, and SI values for residual errors from manual image matching were -0.03 ± 0.22, 0.07 ± 0.25, and -0.79 ± 0.68 mm, respectively. The accuracy of target position localization using the Vero4DRT system in our center was 1.07 ± 1.23 mm (2 SD). This study experimentally demonstrated the sufficient level of geometric accuracy using the free-breathing CBCT and the image-guidance system mounted on the Vero4DRT. However, the inter-observer variation and systematic localization error of image matching substantially affected the overall geometric accuracy. Therefore, when using the free-breathing CBCT images, careful consideration of image matching is especially important. PMID:25954809
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
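The scalar unscented transform at the core of this methodology can be sketched as follows, in its generic textbook form with tuning parameter κ; in the paper, the nonlinear function would be the EOL simulation itself rather than a closed-form map:

```python
import numpy as np

def unscented_transform(mean, var, f, kappa=2.0):
    """Scalar unscented transform: propagate a mean and variance through
    a nonlinear function f using 3 deterministically chosen sigma points
    instead of many Monte Carlo samples."""
    n = 1                                     # scalar state
    spread = np.sqrt((n + kappa) * var)
    sigma = np.array([mean, mean + spread, mean - spread])
    w = np.array([kappa / (n + kappa),
                  0.5 / (n + kappa),
                  0.5 / (n + kappa)])         # weights sum to 1
    y = np.array([f(s) for s in sigma])
    y_mean = float(np.dot(w, y))
    y_var = float(np.dot(w, (y - y_mean) ** 2))
    return y_mean, y_var
```

With an n-dimensional state the same construction uses 2n+1 sigma points, so only 2n+1 EOL simulations are needed per prediction, which is the source of the computational savings claimed above.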
NASA Astrophysics Data System (ADS)
Havu, Vile; Blum, Volker; Scheffler, Matthias
2007-03-01
Numeric atom-centered local orbitals (NAO) are efficient basis sets for all-electron electronic structure theory. The locality of NAOs can be exploited to render (in principle) all operations of the self-consistency cycle O(N). This is straightforward for 3D integrals using domain decomposition into spatially close subsets of integration points, enabling critical computational savings that are effective from ~tens of atoms (no significant overhead for smaller systems) and make large systems (100s of atoms) computationally feasible. Using a new all-electron NAO-based code,^1 we investigate the quantitative impact of exploiting this locality on two distinct classes of systems: large light-element molecules [alanine-based polypeptide chains (Ala)_n] and compact transition-metal clusters. Strict NAO locality is achieved by imposing a cutoff potential with an onset radius r_c, and exploited by appropriately shaped integration domains (subsets of integration points). Conventional tight values r_c <= 3 Å have no measurable accuracy impact in (Ala)_n, but introduce inaccuracies of 20-30 meV/atom in Cu_n. The domain shape impacts the computational effort by only 10-20% for reasonable r_c. ^1 V. Blum, R. Gehrke, P. Havu, V. Havu, M. Scheffler, The FHI Ab Initio Molecular Simulations (aims) Project, Fritz-Haber-Institut, Berlin (2006).
The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations
NASA Technical Reports Server (NTRS)
Marcus, Martin H.; Broduer, Steve (Technical Monitor)
2001-01-01
With the advent of computers with many processors, it becomes unclear how best to exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but exactly how many depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.
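The one-worker-per-matrix strategy can be sketched in a few lines. This is an illustrative stand-in, not the paper's FORTRAN setup: it uses NumPy with a thread pool (LAPACK releases the GIL during factorization, so threads give genuine parallelism here), and the matrix sizes are arbitrary.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def invert_batch(matrices, workers=2):
    """Invert each matrix with one worker per matrix, rather than
    splitting a single inversion across many workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(np.linalg.inv, matrices))

rng = np.random.default_rng(0)
# Diagonally dominant test matrices, so they are well conditioned.
mats = [rng.random((50, 50)) + 50 * np.eye(50) for _ in range(4)]
invs = invert_batch(mats)
```

Each inversion runs undivided on one worker, mirroring the abstract's conclusion that assigning few processors per matrix is the efficient configuration.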
Kim, Jinkoo; Hammoud, Rabih; Pradhan, Deepak; Zhong, Hualiang; Jin, Ryan Y.; Movsas, Benjamin; Chetty, Indrin J.
2010-07-15
Purpose: To evaluate different similarity metrics (SM) using natural calcifications and observation-based measures to determine the most accurate prostate and seminal vesicle localization on daily cone-beam CT (CBCT) images. Methods and Materials: CBCT images of 29 patients were retrospectively analyzed; 14 patients with prostate calcifications (calcification data set) and 15 patients without calcifications (no-calcification data set). Three groups of test registrations were performed. Test 1: 70 CT/CBCT pairs from the calcification data set were registered using 17 SMs (6,580 registrations) and compared using the calcification mismatch error as an endpoint. Test 2: Using the four best SMs from Test 1, 75 CT/CBCT pairs in the no-calcification data set were registered (300 registrations). Accuracy of contour overlays was ranked visually. Test 3: For the best SM from Tests 1 and 2, accuracy was estimated using 356 CT/CBCT registrations. Additionally, target expansion margins were investigated for generating registration regions of interest. Results: Test 1: Incremental sign correlation (ISC), gradient correlation (GC), gradient difference (GD), and normalized cross correlation (NCC) showed the smallest errors (μ ± σ: 1.6 ± 0.9 to 2.9 ± 2.1 mm). Test 2: Two of the three reviewers ranked GC higher. Test 3: Using GC, 96% of registrations showed <3-mm error when calcifications were filtered. Errors were left/right: 0.1 ± 0.5 mm, anterior/posterior: 0.8 ± 1.0 mm, and superior/inferior: 0.5 ± 1.1 mm. The existence of calcifications increased the success rate to 97%. Expansion margins of 4-10 mm were equally successful. Conclusion: Gradient-based SMs were most accurate. Estimated error was found to be <3 mm (1.1 mm SD) in 96% of the registrations. Results suggest that the contour expansion margin should be no less than 4 mm.
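Two of the similarity metrics compared above, normalized cross correlation and gradient correlation, are straightforward to express. The sketch below is a generic textbook formulation on same-size arrays, not the clinical registration code, and the forward-difference gradient is one of several possible choices.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two same-shape images."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def gradient_correlation(a, b):
    """Mean NCC of the horizontal and vertical image gradients."""
    gx = lambda im: np.diff(im.astype(float), axis=1)
    gy = lambda im: np.diff(im.astype(float), axis=0)
    return 0.5 * (ncc(gx(a), gx(b)) + ncc(gy(a), gy(b)))

img = np.random.default_rng(1).random((16, 16))
```

Because both metrics are mean-subtracted and normalized, they are insensitive to global intensity offsets, one reason gradient-based metrics cope well with CT-to-CBCT intensity differences.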
Assessing the accuracy of many-body expansions for the computation of solvatochromic shifts
NASA Astrophysics Data System (ADS)
Mata, R. A.
2010-02-01
In this work, a computationally fast and simple scheme for calculating vertical excitation energies based on a many-body expansion is reviewed. It consists of a two-body expansion where each of the energy terms is computed with embedding in a point charge field representing the environment. The neglect of two-body polarisation energy terms is evaluated, as it allows for a compact energy expression, and avoids parameterisation of the solute. The solvatochromic shifts for the acetone and acrolein molecules are investigated, both in microsolvated clusters as well as in solution. It is found that the scheme is unable to correctly describe Rydberg states, but succeeds in closely reproducing the many-body effects involved in the π → π* excitation of acrolein in water.
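The two-body expansion underlying the scheme can be sketched as follows. The fragment energies and couplings here are invented toy numbers (real terms would come from embedded electronic-structure calculations, and the point-charge embedding is omitted); the expansion is exact in this toy because the energy is strictly pairwise additive.

```python
from itertools import combinations

def two_body_expansion(fragments, energy):
    """Approximate energy(all fragments) by the two-body expansion
    E ~ sum_i E(i) + sum_{i<j} [E(i,j) - E(i) - E(j)]."""
    fragments = list(fragments)
    e1 = {i: energy((i,)) for i in fragments}
    total = sum(e1.values())
    for i, j in combinations(fragments, 2):
        total += energy((i, j)) - e1[i] - e1[j]
    return total

# Toy energy model: monomer energies plus pair couplings only.
e = {0: -1.0, 1: -2.0, 2: -1.5, 3: -0.5}
J = {(0, 1): -0.3, (0, 2): 0.1, (0, 3): -0.05,
     (1, 2): -0.2, (1, 3): 0.02, (2, 3): -0.15}

def energy(subset):
    s = sorted(subset)
    return (sum(e[i] for i in s)
            + sum(J[(i, j)] for i, j in combinations(s, 2)))

approx = two_body_expansion(range(4), energy)
exact = energy(range(4))
```

The computational appeal is visible in the structure: only monomer and dimer "calculations" are needed, so the cost grows quadratically with the number of fragments instead of exponentially.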
NASA Astrophysics Data System (ADS)
Howard, J. Coleman; Enyard, Jordan D.; Tschumper, Gregory S.
2015-12-01
A wide range of density functional theory (DFT) methods (37 altogether), including pure, hybrid, range-separated hybrid, double-hybrid, and dispersion-corrected functionals, have been employed to compute the harmonic vibrational frequencies of eight small water clusters ranging in size from the dimer to four different isomers of the hexamer. These computed harmonic frequencies have been carefully compared to recently published benchmark values that are expected to be very close to the CCSD(T) complete basis set limit. Of the DFT methods examined here, ωB97 and ωB97X are the most consistently accurate, deviating from the reference values by less than 20 cm^-1 on average and never more than 60 cm^-1. The performance of double-hybrid methods including B2PLYP and mPW2-PLYP is only slightly better than more economical approaches, such as the M06-L pure functional and the M06-2X hybrid functional. Additionally, dispersion corrections offer very little improvement in computed frequencies.
Computation of stationary 3D halo currents in fusion devices with accuracy control
Bettini, Paolo; Specogna, Ruben
2014-09-15
This paper addresses the calculation of the resistive distribution of halo currents in three-dimensional structures of large magnetic confinement fusion machines. A Neumann electrokinetic problem is solved on a geometry so complicated that complementarity is used to monitor the discretization error. An irrotational electric field is obtained by a geometric formulation based on the electric scalar potential, whereas three geometric formulations are compared to obtain a solenoidal current density: a formulation based on the electric vector potential and two geometric formulations inspired by mixed and mixed-hybrid finite elements. The electric vector potential formulation is usually considered impractical because an enormous amount of computing power is wasted on the topological pre-processing it requires. To solve this challenging problem, we present novel algorithms based on lazy cohomology generators that save orders of magnitude in computational time with respect to all other state-of-the-art solutions proposed in the literature. Because these results may be useful in other fields of scientific computing, the proposed algorithm is presented as detailed pseudocode that can be easily implemented.
Plant, Richard R
2016-03-01
There is an ongoing 'replication crisis' across the field of psychology, in which researchers, funders, and members of the public are questioning the results of some scientific studies and the validity of the data they are based upon. However, few have considered that a growing proportion of research in modern psychology is conducted using a computer. Could the hardware and software, or experiment generator, used to run the experiment itself be a cause of millisecond timing error and subsequent replication failure? This article serves as a reminder that millisecond timing accuracy in psychology studies remains an important issue and that care needs to be taken to ensure that studies can be replicated on current computer hardware and software. PMID:25761394
Efficient Computer Network Anomaly Detection by Changepoint Detection Methods
NASA Astrophysics Data System (ADS)
Tartakovsky, Alexander G.; Polunchenko, Aleksey S.; Sokolov, Grigory
2013-02-01
We consider the problem of efficient on-line anomaly detection in computer network traffic. The problem is approached statistically, as one of sequential (quickest) changepoint detection. A multi-cyclic setting of quickest change detection is a natural fit for this problem. We propose a novel score-based multi-cyclic detection algorithm. The algorithm is based on the so-called Shiryaev-Roberts procedure. This procedure is as easy to employ in practice and as computationally inexpensive as the popular Cumulative Sum chart and the Exponentially Weighted Moving Average scheme. The likelihood ratio based Shiryaev-Roberts procedure has appealing optimality properties; in particular, it is exactly optimal in a multi-cyclic setting geared to detect a change occurring at a far time horizon. It is therefore expected that an intrusion detection algorithm based on the Shiryaev-Roberts procedure will perform better than other detection schemes. This is confirmed experimentally for real traces. We also discuss the possibility of complementing our anomaly detection algorithm with a spectral-signature intrusion detection system with false alarm filtering and true attack confirmation capability, so as to obtain a synergistic system.
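The Shiryaev-Roberts recursion itself is a one-liner per observation. The sketch below assumes a known Gaussian mean-shift model (mean 0 before the change, theta after, unit variance) and uses a deterministic data stream rather than real traffic, so the behavior is easy to follow.

```python
import math

def shiryaev_roberts(xs, theta=1.0, threshold=100.0):
    """Shiryaev-Roberts detector for a mean shift from 0 to theta in
    unit-variance Gaussian observations.  R_n = (1 + R_{n-1}) * LR_n;
    an alarm is raised when R_n crosses the threshold."""
    R = 0.0
    for n, x in enumerate(xs, start=1):
        lr = math.exp(theta * x - 0.5 * theta ** 2)  # likelihood ratio
        R = (1.0 + R) * lr
        if R >= threshold:
            return n, R
    return None, R

# Deterministic illustration: mean 0 up to sample 200, mean 1 afterwards.
data = [0.0] * 200 + [1.0] * 50
alarm, _ = shiryaev_roberts(data)
```

Before the change the statistic settles near its fixed point lam/(1 - lam) with lam = exp(-1/2); after the change the likelihood ratio exceeds one, so R_n grows geometrically and crosses the threshold within a few samples, which is exactly the cheap per-observation update the abstract credits the procedure with.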
An efficient parallel algorithm for accelerating computational protein design
Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang
2014-01-01
Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of the A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we address the problem of excessive memory usage in A* search. As a result, our enhancements achieve a significant speedup of the A*-based protein design algorithm, by four orders of magnitude on large-scale test data, through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm can be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General Public License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/∼compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
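A minimal serial version of the A*-over-rotamers search (without DEE pruning, GPU parallelism, or iMinDEE, all of which the paper adds) can be written with a priority queue and the standard admissible lower-bound heuristic. The self and pairwise energies below are arbitrary toy values for three positions with three rotamers each.

```python
import heapq
from itertools import product

# Toy design problem: self energies E1[i][r] and pairwise E2[(i,j)][(ri,rj)].
E1 = [[1.0, 0.2, 0.5],
      [0.3, 0.9, 0.1],
      [0.4, 0.6, 0.8]]
E2 = {(0, 1): {(a, b): 0.10 * ((a - b) % 3) for a in range(3) for b in range(3)},
      (0, 2): {(a, b): 0.20 * ((a + b) % 3) for a in range(3) for b in range(3)},
      (1, 2): {(a, b): 0.05 * ((a * b) % 3) for a in range(3) for b in range(3)}}

def conf_energy(conf):
    e = sum(E1[i][r] for i, r in enumerate(conf))
    return e + sum(E2[(i, j)][(conf[i], conf[j])]
                   for i in range(len(conf)) for j in range(i + 1, len(conf)))

def astar_gmec(n_pos=3, n_rot=3):
    """A* over partial conformations; h() lower-bounds the energy each
    unassigned position can still contribute, so the first complete
    conformation popped is the GMEC."""
    def g(partial):  # exact energy of the assigned prefix
        return conf_energy(partial) if partial else 0.0

    def h(partial):  # admissible bound for the unassigned suffix
        bound = 0.0
        for i in range(len(partial), n_pos):
            bound += min(
                E1[i][r]
                + sum(E2[(j, i)][(partial[j], r)] for j in range(len(partial)))
                + sum(min(E2[(i, k)][(r, s)] for s in range(n_rot))
                      for k in range(i + 1, n_pos))
                for r in range(n_rot))
        return bound

    heap = [(h(()), ())]
    while heap:
        f, partial = heapq.heappop(heap)
        if len(partial) == n_pos:
            return partial, f
        for r in range(n_rot):
            nxt = partial + (r,)
            heapq.heappush(heap, (g(nxt) + h(nxt), nxt))

gmec, e_min = astar_gmec()
```

The heuristic lets each unassigned position take its best possible self plus pairwise contribution independently, which can only underestimate the true energy; that admissibility is what makes the returned conformation provably the GMEC.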
Bio++: efficient extensible libraries and tools for computational molecular evolution.
Guéguen, Laurent; Gaillard, Sylvain; Boussau, Bastien; Gouy, Manolo; Groussin, Mathieu; Rochette, Nicolas C; Bigot, Thomas; Fournier, David; Pouyet, Fanny; Cahais, Vincent; Bernard, Aurélien; Scornavacca, Céline; Nabholz, Benoît; Haudry, Annabelle; Dachary, Loïc; Galtier, Nicolas; Belkhir, Khalid; Dutheil, Julien Y
2013-08-01
Efficient algorithms and programs for the analysis of the ever-growing amount of biological sequence data are strongly needed in the genomics era. The pace at which new data and methodologies are generated calls for the use of pre-existing, optimized yet extensible code, typically distributed as libraries or packages. This motivated the Bio++ project, aiming at developing a set of C++ libraries for sequence analysis, phylogenetics, population genetics, and molecular evolution. The main attractiveness of Bio++ is the extensibility and reusability of its components through its object-oriented design, without compromising the computational efficiency of the underlying methods. We present here the second major release of the libraries, which provides an extended set of classes and methods. These extensions notably provide built-in access to sequence databases and new data structures for handling and manipulating sequences from the omics era, such as multiple genome alignments and sequencing read libraries. More complex models of sequence evolution, such as mixture models and generic n-tuple alphabets, are also included.
Neubauer, Jakob; Benndorf, Matthias; Reidelbach, Carolin; Krauß, Tobias; Lampert, Florian; Zajonc, Horst; Kotter, Elmar; Langer, Mathias; Fiebich, Martin; Goerke, Sebastian M.
2016-01-01
Purpose To compare the diagnostic accuracy of radiography to radiography equivalent dose multidetector computed tomography (RED-MDCT) and to radiography equivalent dose cone beam computed tomography (RED-CBCT) for wrist fractures. Methods As study subjects we obtained 10 cadaveric human hands from body donors. Distal radius, distal ulna and carpal bones (n = 100) were artificially fractured in random order in a controlled experimental setting. We performed radiation dose equivalent radiography (settings as in standard clinical care), RED-MDCT in a 320 row MDCT with single shot mode and RED-CBCT in a device dedicated to musculoskeletal imaging. Three raters independently evaluated the resulting images for fractures and the level of confidence for each finding. The gold standard was established by consensus reading of a high-dose MDCT. Results Pooled sensitivity was higher in RED-MDCT with 0.89 and RED-CBCT with 0.81 compared to radiography with 0.54 (P < .004). No significant differences were detected concerning the modalities' specificities (P = .98). Raters' confidence was higher in RED-MDCT and RED-CBCT compared to radiography (P < .001). Conclusion The diagnostic accuracy of RED-MDCT and RED-CBCT for wrist fractures proved to be similar and in some parts even higher compared to radiography. Readers are more confident in their reporting with the cross sectional modalities. Dose equivalent cross sectional computed tomography of the wrist could replace plain radiography for fracture diagnosis in the long run. PMID:27788215
Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis
2014-12-01
Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.
Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Brandt, Achi; Thomas, James L.; Diskin, Boris
2001-01-01
Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. The focus of this paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Typically current CFD codes based on the use of multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for the Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the
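The multigrid machinery behind TME can be illustrated with a two-grid V-cycle for the 1-D Poisson problem. This is a textbook sketch, far simpler than a RANS solver, but it shows the smooth-restrict-solve-prolong-correct structure on which such methods are built; grid size, smoother weight, and sweep counts are conventional choices, not taken from the paper.

```python
import numpy as np

def smooth(u, f, h, sweeps, omega=2 / 3):
    """Weighted-Jacobi sweeps for -u'' = f with zero Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h, 2)                 # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((len(u) + 1) // 2)       # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    hc = 2 * h                             # direct coarse-grid solve
    m = len(rc) - 2
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / (hc * hc)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.zeros_like(u)                   # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e                                 # coarse-grid correction
    return smooth(u, f, h, 2)              # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)         # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid_cycle(u, f, h)
res_norm = float(np.linalg.norm(residual(u, f, h)))
```

Each cycle costs a fixed small number of fine-grid sweeps yet reduces the residual by a roughly constant factor independent of the grid size, which is the "small multiple of the residual-evaluation cost" property that TME generalizes to the RANS system via distributed relaxation.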
Wagner, Arne; Wanschitz, Felix; Birkfellner, Wolfgang; Zauza, Konstantin; Klug, Clemens; Schicho, Kurt; Kainberger, Franz; Czerny, Christian; Bergmann, Helmar; Ewers, Rolf
2003-06-01
The objective of this study was to evaluate the feasibility and accuracy of a novel surgical computer-aided navigation system for the placement of endosseous implants in patients after ablative tumour surgery. Pre-operative planning was performed by developing a prosthetic concept and modifying the implant position according to surgical requirements after high-resolution computed tomography (HRCT) scans with VISIT, a surgical planning and navigation software developed at the Vienna General Hospital. The pre-operative plan was transferred to the patients intraoperatively using surgical navigation software and optical tracking technology. The patients were HRCT-scanned again to compare the position of the implants with the pre-operative plan on reformatted CT slices after matching of the pre- and post-operative data sets using the mutual information technique. A total of 32 implants were evaluated. The mean deviation was 1.1 mm (range: 0-3.5 mm). The mean angular deviation of the implants was 6.4 degrees (range: 0.4-17.4 degrees, variance: 13.3 degrees). The results demonstrate that adequate accuracy in placing endosseous oral implants can be delivered to patients with the most difficult implantologic situations.
Computational Design of an Unnatural Amino Acid Dependent Metalloprotein with Atomic Level Accuracy
Mills, Jeremy H.; Khare, Sagar D.; Bolduc, Jill M.; Forouhar, Farhad; Mulligan, Vikram Khipple; Lew, Scott; Seetharaman, Jayaraman; Tong, Liang; Stoddard, Barry L.; Baker, David
2013-01-01
Genetically encoded unnatural amino acids could facilitate the design of proteins and enzymes of novel function, but correctly specifying sites of incorporation, and the identities and orientations of surrounding residues represents a formidable challenge. Computational design methods have been used to identify optimal locations for functional sites in proteins and design the surrounding residues, but have not incorporated unnatural amino acids in this process. We extended the Rosetta design methodology to design metalloproteins in which the amino acid (2,2’-bipyridin-5yl)alanine (Bpy-Ala) is a primary ligand of a bound metal ion. Following initial results that indicated the importance of buttressing the Bpy-Ala amino acid, we designed a buried metal binding site with octahedral coordination geometry consisting of Bpy-Ala, two protein based metal ligands, and two metal bound water molecules. Experimental characterization revealed a Bpy-Ala mediated metalloprotein with the ability to bind divalent cations including Co2+, Zn2+, Fe2+, and Ni2+, with a Kd for Zn2+ of ~40 pM. X-ray crystallographic analysis of the designed protein shows only slight deviation from the computationally designed model. PMID:23924187
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.; Naifeh, Karen; Thrasher, Chet
1988-01-01
This report contains the source code and documentation for a computer program used to process impedance cardiography data. The cardiodynamic measures derived from impedance cardiography are ventricular stroke volume, cardiac output, cardiac index and Heather index. The program digitizes data collected from the Minnesota Impedance Cardiograph, electrocardiography (ECG), and respiratory cycles and then stores these data on hard disk. It computes the cardiodynamic functions using interactive graphics and stores the means and standard deviations of each 15-sec data epoch on floppy disk. This software was designed on a Digital PRO380 microcomputer and used version 2.0 of P/OS, with (minimally) a 4-channel 16-bit analog/digital (A/D) converter. Applications software is written in FORTRAN 77, and uses Digital's Pro-Tool Kit Real Time Interface Library, CORE Graphic Library, and laboratory routines. Source code can be readily modified to accommodate alternative detection, A/D conversion and interactive graphics. The object code utilizing overlays and multitasking has a maximum of 50 Kbytes.
Assessing the accuracy of the isotropic periodic sum method through Madelung energy computation
NASA Astrophysics Data System (ADS)
Ojeda-May, Pedro; Pu, Jingzhi
2014-04-01
We tested the isotropic periodic sum (IPS) method for computing Madelung energies of ionic crystals. The performance of the method, both in its nonpolar (IPSn) and polar (IPSp) forms, was compared with that of the zero-charge and Wolf potentials [D. Wolf, P. Keblinski, S. R. Phillpot, and J. Eggebrecht, J. Chem. Phys. 110, 8254 (1999)]. The results show that the IPSn and IPSp methods converge the Madelung energy to its reference value with an average deviation of ~10^-4 and ~10^-7 energy units, respectively, for a cutoff range of 18-24a (a/2 being the nearest-neighbor ion separation). However, minor oscillations were detected for the IPS methods when deviations of the computed Madelung energies were plotted on a logarithmic scale as a function of the cutoff distance. To remove such oscillations, we introduced a modified IPSn potential in which both the local-region and long-range electrostatic terms are damped, in analogy to the Wolf potential. With the damped-IPSn potential, a smoother convergence was achieved. In addition, we observed a better agreement between the damped-IPSn and IPSp methods, which suggests that damping the IPSn potential is in effect similar to adding a screening potential in IPSp.
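For context on such convergence studies, the reference Madelung value itself can be computed by an Evjen-weighted direct sum. Note this is the classic charge-neutral-shell construction, not the IPS or Wolf method evaluated in the abstract; it simply illustrates how a converged benchmark number for the NaCl lattice is obtained.

```python
import math

def madelung_nacl_evjen(n):
    """Evjen direct sum for the NaCl Madelung constant: ions on the
    surface of the summation cube get fractional weight (1/2 per
    boundary coordinate), which keeps each shell charge-neutral and
    makes the otherwise conditionally convergent sum converge fast."""
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if i == j == k == 0:
                    continue
                w = 1.0
                for c in (i, j, k):
                    if abs(c) == n:
                        w *= 0.5
                sign = 1.0 if (i + j + k) % 2 == 1 else -1.0
                total += sign * w / math.sqrt(i * i + j * j + k * k)
    return total

M = madelung_nacl_evjen(10)   # literature value is about 1.747565
```

The naive unweighted cube sum oscillates with cube size because each added shell carries net charge; the fractional surface weights remove that imbalance, which is conceptually the same problem the damped potentials above are addressing for cutoff-based methods.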
Alves, Gelio; Yu, Yi-Kuo
2011-01-01
Given the expanding availability of scientific data and tools to analyze them, combining different assessments of the same piece of information has become increasingly important for social, biological, and even physical sciences. This task demands, to begin with, a method-independent standard, such as the P-value, that can be used to assess the reliability of a piece of information. Good's formula and Fisher's method combine independent P-values with respectively unequal and equal weights. Both approaches may be regarded as limiting instances of a general case of combining P-values from m groups; P-values within each group are weighted equally, while weight varies by group. When some of the weights become nearly degenerate, as cautioned by Good, numeric instability occurs in computation of the combined P-values. We deal explicitly with this difficulty by deriving a controlled expansion, in powers of differences in inverse weights, that provides both accurate statistics and stable numerics. We illustrate the utility of this systematic approach with a few examples. In addition, we also provide here an alternative derivation for the probability distribution function of the general case and show how the analytic formula obtained reduces to both Good's and Fisher's methods as special cases. A C++ program, which computes the combined P-values with equal numerical stability regardless of whether weights are (nearly) degenerate or not, is available for download at our group website http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/CoinedPValues.html.
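Fisher's equal-weight method is the simplest special case of the combination problem discussed above. Because the statistic X = -2 * sum(ln p_i) is chi-square with an even number of degrees of freedom (2m) under the null, its survival function has a closed form and the combined P-value needs only the standard library; this sketch does not implement the paper's weighted expansion.

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method for combining independent P-values.
    P(chi2_{2m} > x) = exp(-x/2) * sum_{k=0}^{m-1} (x/2)^k / k!"""
    m = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    term, total = 1.0, 1.0
    for k in range(1, m):
        term *= half / k       # builds (x/2)^k / k! incrementally
        total += term
    return math.exp(-half) * total

p_combined = fisher_combined_pvalue([0.1, 0.2, 0.3])
```

Combining a single P-value returns it unchanged, a useful sanity check; the numerically delicate regime the abstract targets (nearly degenerate weights in the grouped generalization) is precisely what this unweighted formula avoids.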
Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang
2016-01-01
The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
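The coarse alignment step, locating an integer-pixel offset at the peak of a 2-D cross-correlation, can be sketched with FFTs. This is a generic formulation on synthetic data, not the authors' phase-diversity pipeline, and it handles only the circular-shift case.

```python
import numpy as np

def estimate_shift(a, b):
    """Return the integer (row, col) shift d such that a ~ np.roll(b, d),
    found at the argmax of the FFT-based circular cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Map peak indices in [0, N) to signed shifts in (-N/2, N/2].
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, xcorr.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
offset = estimate_shift(shifted, img)
```

Recovering the offset to the nearest pixel first, as the two-step algorithm above does, leaves only a small residual tip-tilt for the OTF-based correction to absorb.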
NASA Astrophysics Data System (ADS)
Chauhan, Swarup; Rühaak, Wolfram; Anbergen, Hauke; Kabdenov, Alen; Freise, Marcus; Wille, Thorsten; Sass, Ingo
2016-07-01
Performance and accuracy of machine learning techniques to segment rock grains, matrix and pore voxels from a 3-D volume of X-ray tomographic (XCT) grayscale rock images were evaluated. The segmentation and classification capability of unsupervised (k-means, fuzzy c-means, self-organized maps), supervised (artificial neural networks, least-squares support vector machines) and ensemble classifiers (bagging and boosting) were tested using XCT images of andesite volcanic rock, Berea sandstone, Rotliegend sandstone and a synthetic sample. The averaged porosity obtained for andesite (15.8 ± 2.5 %), Berea sandstone (16.3 ± 2.6 %), Rotliegend sandstone (13.4 ± 7.4 %) and the synthetic sample (48.3 ± 13.3 %) is in very good agreement with the respective laboratory measurement data and varies by a factor of 0.2. The k-means algorithm is the fastest of all machine learning algorithms, whereas a least-squares support vector machine is the most computationally expensive. The metrics entropy, purity, root mean square error, receiver operating characteristic curve and 10-fold cross-validation were used to determine the accuracy of the unsupervised, supervised and ensemble classifier techniques. In general, the accuracy was found to be largely affected by the feature vector selection scheme. As it is always a trade-off between performance and accuracy, it is difficult to isolate one particular machine learning algorithm which is best suited for the complex phase segmentation problem. Therefore, our investigation provides parameters that can help in selecting the appropriate machine learning techniques for phase segmentation.
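Of the classifiers compared, k-means is the simplest to sketch. The 1-D version below runs on synthetic voxel intensities (invented values with two well-separated phases, unlike real XCT data) and shows how a porosity estimate falls out of the segmentation.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Plain k-means on scalar intensities; centers are initialized by
    spreading them across the observed value range."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

rng = np.random.default_rng(0)
# Synthetic volume: dark pore voxels near 0.1, bright grain voxels near 0.8.
pores = rng.normal(0.1, 0.03, 500)
grains = rng.normal(0.8, 0.05, 1500)
voxels = np.concatenate([pores, grains])
labels, centers = kmeans_1d(voxels, k=2)
porosity = float(np.mean(labels == np.argmin(centers)))
```

With such clean bimodal data the method is trivially accurate; the abstract's point is that on real rock images the outcome depends heavily on the feature vectors fed to the classifier, not just the clustering rule.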
On Accuracy Order of Fourier Coefficients Computation for Periodic Signal Processing Models
NASA Astrophysics Data System (ADS)
Korytov, I. V.; Golosov, S. E.
2016-08-01
The article is devoted to constructing piecewise constant functions for modelling periodic signals. The aim of the paper is to suggest a way to avoid discontinuity at the points where waveform values are obtained. One solution is to introduce a shifted step function whose middle points within its partial intervals coincide with the points of observation. This means that the large oscillations of the Fourier partial sums move to the new jump discontinuities, where waveform values are not obtained. Furthermore, any step function chosen to model a periodic continuous waveform determines a way to calculate the Fourier coefficients; in this case the technique is a weighted rectangular quadrature rule, where the weight is either unity or trigonometric. Another effect of the solution is the following: the shifted function leads to the application of midpoint quadrature rules for computing the Fourier coefficients. As a result, the formula for the zero coefficient transforms into the trapezoid rule, while the formulas for the other coefficients remain of rectangular type.
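The midpoint-rule computation of Fourier coefficients described above can be sketched directly: sampling at subinterval midpoints keeps every sample away from the partition (jump) points. The sample count and test function below are illustrative choices, not taken from the article.

```python
import math

def fourier_coeffs_midpoint(f, n_samples, k_max):
    """Fourier coefficients of a 2*pi-periodic f via the midpoint rule
    on n_samples equal subintervals of [0, 2*pi]."""
    h = 2 * math.pi / n_samples
    mids = [(m + 0.5) * h for m in range(n_samples)]
    vals = [f(x) for x in mids]
    a = [h / math.pi * sum(v * math.cos(k * x) for v, x in zip(vals, mids))
         for k in range(k_max + 1)]
    b = [h / math.pi * sum(v * math.sin(k * x) for v, x in zip(vals, mids))
         for k in range(k_max + 1)]
    a[0] /= 2  # convention: f ~ a[0] + sum_k (a_k cos kx + b_k sin kx)
    return a, b

a, b = fourier_coeffs_midpoint(math.cos, 32, 3)
```

For equispaced nodes the rule integrates low harmonics exactly, so for f(x) = cos x the computed a_1 is 1 to machine precision while all other low-order coefficients vanish.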
Computational Design of Self-Assembling Protein Nanomaterials with Atomic Level Accuracy
King, Neil P.; Sheffler, William; Sawaya, Michael R.; Vollmar, Breanna S.; Sumida, John P.; André, Ingemar; Gonen, Tamir; Yeates, Todd O.; Baker, David
2015-09-17
We describe a general computational method for designing proteins that self-assemble to a desired symmetric architecture. Protein building blocks are docked together symmetrically to identify complementary packing arrangements, and low-energy protein-protein interfaces are then designed between the building blocks in order to drive self-assembly. We used trimeric protein building blocks to design a 24-subunit, 13-nm diameter complex with octahedral symmetry and a 12-subunit, 11-nm diameter complex with tetrahedral symmetry. The designed proteins assembled to the desired oligomeric states in solution, and the crystal structures of the complexes revealed that the resulting materials closely match the design models. The method can be used to design a wide variety of self-assembling protein nanomaterials.
Singh, Nidhi; Warshel, Arieh
2010-01-01
Calculating the absolute binding free energies is a challenging task. Reliable estimates of binding free energies should provide a guide for rational drug design. It should also provide us with deeper understanding of the correlation between protein structure and its function. Further applications may include identifying novel molecular scaffolds and optimizing lead compounds in computer-aided drug design. Available options to evaluate the absolute binding free energies range from the rigorous but expensive free energy perturbation to the microscopic Linear Response Approximation (LRA/β version) and its variants including the Linear Interaction Energy (LIE) to the more approximated and considerably faster scaled Protein Dipoles Langevin Dipoles (PDLD/S-LRA version), as well as the less rigorous Molecular Mechanics Poisson–Boltzmann/Surface Area (MM/PBSA) and Generalized Born/Surface Area (MM/GBSA) to the less accurate scoring functions. There is a need for an assessment of the performance of different approaches in terms of computer time and reliability. We present a comparative study of the LRA/β, the LIE, the PDLD/S-LRA/β and the more widely used MM/PBSA and assess their abilities to estimate the absolute binding energies. The LRA and LIE methods perform reasonably well but require specialized parameterization for the non-electrostatic term. On the average, the PDLD/S-LRA/β performs effectively. Our assessment of the MM/PBSA is less optimistic. This approach appears to provide erroneous estimates of the absolute binding energies due to its incorrect entropies and the problematic treatment of electrostatic energies. Overall, the PDLD/S-LRA/β appears to offer an appealing option for the final stages of massive screening approaches. PMID:20186976
Lu, D; Akanno, E C; Crowley, J J; Schenkel, F; Li, H; De Pauw, M; Moore, S S; Wang, Z; Li, C; Stothard, P; Plastow, G; Miller, S P; Basarab, J A
2016-04-01
The accuracy of genomic predictions can be used to assess the utility of dense marker genotypes for genetic improvement of beef efficiency traits. This study was designed to test the impact of genomic distance between training and validation populations, training population size, statistical methods, and density of genetic markers on prediction accuracy for feed efficiency traits in multibreed and crossbred beef cattle. A total of 6,794 beef cattle data collated from various projects and research herds across Canada were used. Illumina BovineSNP50 (50K) and imputed Axiom Genome-Wide BOS 1 Array (HD) genotypes were available for all animals. The traits studied were DMI, ADG, and residual feed intake (RFI). Four validation groups of 150 animals each, including Angus (AN), Charolais (CH), Angus-Hereford crosses (ANHH), and a Charolais-based composite (TX), were created by considering the genomic distance between pairs of individuals in the validation groups. Each validation group had 7 corresponding training groups of increasing sizes (n = 1,000, 1,999, 2,999, 3,999, 4,999, 5,998, and 6,644), which also represent increasing average genomic distance between pairs of individuals in the training and validation groups. Prediction of genomic estimated breeding values (GEBV) was performed using genomic best linear unbiased prediction (GBLUP) and Bayesian method C (BayesC). The accuracy of genomic predictions was defined as the Pearson's correlation between adjusted phenotype and GEBV, unless otherwise stated. Using 50K genotypes, the highest average accuracy achieved in purebreds (AN, CH) was 0.41 for DMI, 0.34 for ADG, and 0.35 for RFI, whereas in crossbreds (ANHH, TX) it was 0.38 for DMI, 0.21 for ADG, and 0.25 for RFI. Similarly, when imputed HD genotypes were applied in purebreds (AN, CH), the highest average accuracy was 0.14 for DMI, 0.15 for ADG, and 0.14 for RFI, whereas in crossbreds (ANHH, TX) it was 0.38 for DMI, 0.22 for ADG, and 0.24 for RFI. The of GBLUP predictions were
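As a concrete illustration of the accuracy measure used here, the Pearson correlation between adjusted phenotypes and GEBV, the following sketch uses small made-up vectors (not data from the study):

```python
import numpy as np

# Hypothetical adjusted phenotypes and genomic estimated breeding values
phenotype = np.array([1.2, 0.8, -0.5, 0.3, -1.1, 0.9])
gebv      = np.array([0.9, 0.6, -0.2, 0.1, -0.8, 0.7])

# Prediction accuracy as the Pearson correlation between the two vectors
accuracy = np.corrcoef(phenotype, gebv)[0, 1]
```

In practice the reported accuracies (0.14 to 0.41 above) are far lower than this toy example, since real phenotypes carry substantial non-genetic variation.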
Foo Kune, Denis; Mahadevan, Karthikeyan
2011-01-25
A recursive verification protocol to reduce the time variance due to delays in the network by putting the subject node at most one hop from the verifier node provides for an efficient manner to test wireless sensor nodes. Since the software signatures are time based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, who in turn checks its neighbor, and continuing this process until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.
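The hop-by-hop scheme can be sketched as a recursive traversal in which only nodes that pass the signature test go on to verify their own neighbors. This is a hypothetical illustration: the graph, node names, and `passes_check` predicate are invented, and the protocol's time-based software signatures are abstracted into a boolean test.

```python
def verify_network(graph, verifier, passes_check):
    """Hop-by-hop recursive verification sketch.
    graph: adjacency dict; verifier: starting (trusted) node id;
    passes_check(node) -> bool stands in for the signature test."""
    verified, failed = set(), set()

    def check(node):
        if node in verified or node in failed:
            return  # each node is tested at most once
        if passes_check(node):
            verified.add(node)
            for neighbor in graph[node]:
                check(neighbor)  # a verified node checks its neighbors in turn
        else:
            failed.add(node)  # verification downstream of this node halts here

    check(verifier)
    return verified, failed

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
ok = lambda n: n != "B"  # suppose node B fails its signature test
verified, failed = verify_network(graph, "A", ok)
# D is still verified via the alternative path A -> C -> D
```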
Devereux, Mike; Raghunathan, Shampa; Fedorov, Dmitri G; Meuwly, Markus
2014-10-14
A truncated multipole expansion can be re-expressed exactly using an appropriate arrangement of point charges. This means that groups of point charges that are shifted away from nuclear coordinates can be used to achieve accurate electrostatics for molecular systems. We introduce a multipolar electrostatic model formulated in this way for use in computationally efficient multipolar molecular dynamics simulations with well-defined forces and energy conservation in NVE (constant number-volume-energy) simulations. A framework is introduced to distribute torques arising from multipole moments throughout a molecule, and a refined fitting approach is suggested to obtain atomic multipole moments that are optimized for accuracy and numerical stability in a force field context. The formulation of the charge model is outlined as it has been implemented into CHARMM, with application to test systems involving H2O and chlorobenzene. As well as ease of implementation and computational efficiency, the approach can be used to provide snapshots for multipolar QM/MM calculations in QM/MM-MD studies and easily combined with a standard point-charge force field to allow mixed multipolar/point charge simulations of large systems. PMID:26588121
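The equivalence between a truncated multipole expansion and an arrangement of shifted point charges can be illustrated for the simplest case, a dipole: two opposite charges displaced from the expansion centre reproduce the point-dipole potential far from the site. The numerical values below are arbitrary, chosen only to show the agreement; this is not the CHARMM implementation itself.

```python
# Represent a point dipole mu (along z) by charges +q and -q displaced
# by +/- d/2 from the expansion centre, with q * d = mu.
mu = 1.0   # dipole moment (arbitrary units)
d = 0.01   # small charge separation
q = mu / d

def phi_charges(r):
    """On-axis potential of the two shifted point charges (Gaussian units)."""
    return q / (r - d / 2) - q / (r + d / 2)

def phi_dipole(r):
    """On-axis potential of the ideal point dipole."""
    return mu / r**2

r = 5.0
rel_err = abs(phi_charges(r) - phi_dipole(r)) / phi_dipole(r)
# The charge arrangement reproduces the dipole term up to O((d/r)^2)
```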
A computationally efficient particle-simulation method suited to vector-computer architectures
McDonald, J.D.
1990-01-01
Recent interest in a National Aero-Space Plane (NASP) and various Aero-assisted Space Transfer Vehicles (ASTVs) presents the need for a greater understanding of high-speed rarefied flight conditions. Particle simulation techniques such as the Direct Simulation Monte Carlo (DSMC) method are well suited to such problems, but the high cost of computation limits the application of the methods to two-dimensional or very simple three-dimensional problems. This research re-examines the algorithmic structure of existing particle simulation methods and re-structures them to allow efficient implementation on vector-oriented supercomputers. A brief overview of the DSMC method and the Cray-2 vector computer architecture is provided, and the elements of the DSMC method that inhibit substantial vectorization are identified. One such element is the collision selection algorithm. A complete reformulation of the underlying kinetic theory shows that this may be efficiently vectorized for general gas mixtures. The mechanics of collisions are vectorizable in the DSMC method, but several optimizations are suggested that greatly enhance performance. This thesis also proposes a new mechanism for the exchange of energy between vibration and other energy modes. The developed scheme makes use of quantized vibrational states and is used in place of the Borgnakke-Larsen model. Finally, a simplified representation of physical space and boundary conditions is utilized to further reduce the computational cost of the developed method. Comparisons to solutions obtained from the DSMC method for the relaxation of internal energy modes in a homogeneous gas, as well as single- and multiple-species shock wave profiles, are presented. Additionally, a large-scale simulation of the flow about the proposed Aeroassisted Flight Experiment (AFE) vehicle is included as an example of the new computational capability of the developed particle simulation method.
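As a toy illustration of the kind of restructuring involved (not the thesis's actual kinetic-theory reformulation), acceptance-rejection collision selection over candidate pairs can be expressed as a single whole-array operation instead of a per-pair loop, which is the form a vector machine executes efficiently:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 10_000

# Relative speeds of candidate collision pairs in a cell (arbitrary units)
g = rng.uniform(0.0, 1.0, size=n_pairs)
g_max = 1.0  # running maximum relative speed for the cell

# Scalar form would loop over pairs, accepting each with probability
# g_i / g_max. The vectorized form decides all pairs in one comparison:
accept = g / g_max > rng.random(n_pairs)
accepted_fraction = accept.mean()  # expected value is mean(g) / g_max, ~0.5 here
```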
Computational design of an unnatural amino acid dependent metalloprotein with atomic level accuracy.
Mills, Jeremy H; Khare, Sagar D; Bolduc, Jill M; Forouhar, Farhad; Mulligan, Vikram Khipple; Lew, Scott; Seetharaman, Jayaraman; Tong, Liang; Stoddard, Barry L; Baker, David
2013-09-11
Genetically encoded unnatural amino acids could facilitate the design of proteins and enzymes of novel function, but correctly specifying sites of incorporation and the identities and orientations of surrounding residues represents a formidable challenge. Computational design methods have been used to identify optimal locations for functional sites in proteins and design the surrounding residues but have not incorporated unnatural amino acids in this process. We extended the Rosetta design methodology to design metalloproteins in which the amino acid (2,2'-bipyridin-5yl)alanine (Bpy-Ala) is a primary ligand of a bound metal ion. Following initial results that indicated the importance of buttressing the Bpy-Ala amino acid, we designed a buried metal binding site with octahedral coordination geometry consisting of Bpy-Ala, two protein-based metal ligands, and two metal-bound water molecules. Experimental characterization revealed a Bpy-Ala-mediated metalloprotein with the ability to bind divalent cations including Co(2+), Zn(2+), Fe(2+), and Ni(2+), with a Kd for Zn(2+) of ∼40 pM. X-ray crystal structures of the designed protein bound to Co(2+) and Ni(2+) have RMSDs to the design model of 0.9 and 1.0 Å respectively over all atoms in the binding site.
Ybinger, Thomas; Kumpan, W; Hoffart, H E; Muschalik, B; Bullmann, W; Zweymüller, K
2007-09-01
The postoperative position of the acetabular component is key for the outcome of total hip arthroplasty. Various aids have been developed to support the surgeon during implant placement. In a prospective study involving 4 centers, the computer-recorded cup alignment of 37 hip systems at the end of navigation-assisted surgery was compared with the cup angles measured on postoperative computerized tomograms. This comparison showed an average difference of 3.5 degrees (SD, 4.4 degrees) for inclination and 6.5 degrees (SD, 7.3 degrees) for anteversion angles. The differences in inclination correlated with the thickness of the soft tissue overlying the anterior superior iliac spine (r = 0.44; P = .007), whereas the differences in anteversion showed a correlation with the thickness of the soft tissue overlying the pubic tubercles (r = 0.52; P = .001). In centers experienced in the use of navigational tools, deviations were smaller than in units with little experience in their use. PMID:17826270
Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.
Schatz, Philip; Ybarra, Vincent; Leitner, Donald
2015-08-01
Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms), than Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices represent 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). Results raise implications for using the same device for serial clinical assessment of RT using tablets, as well as the need for calibration of software and hardware. PMID:25612627
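The percentage figures follow directly from the ratio of absolute timing error to trial duration; a quick check using the numbers reported above:

```python
# Timing error as a fraction of the response interval
def percent_error(error_ms, interval_ms):
    return 100.0 * error_ms / interval_ms

# 30-100 ms device error on a 250 ms simple-RT trial
low, high = percent_error(30, 250), percent_error(100, 250)   # 12% to 40%
# The same 30 ms absolute error on a 1,000 ms choice-RT trial
low_choice = percent_error(30, 1000)                          # 3%
```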
NASA Astrophysics Data System (ADS)
Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.
2012-12-01
Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally
Impact of Computer-Aided Detection Systems on Radiologist Accuracy With Digital Mammography
Cole, Elodia B.; Zhang, Zheng; Marques, Helga S.; Hendrick, R. Edward; Yaffe, Martin J.; Pisano, Etta D.
2014-01-01
OBJECTIVE The purpose of this study was to assess the impact of computer-aided detection (CAD) systems on the performance of radiologists with digital mammograms acquired during the Digital Mammographic Imaging Screening Trial (DMIST). MATERIALS AND METHODS Only those DMIST cases with proven cancer status by biopsy or 1-year follow-up that had available digital images were included in this multireader, multicase ROC study. Two commercially available CAD systems for digital mammography were used: iCAD SecondLook, version 1.4; and R2 ImageChecker Cenova, version 1.0. Fourteen radiologists interpreted, without and with CAD, a set of 300 cases (150 cancer, 150 benign or normal) on the iCAD SecondLook system, and 15 radiologists interpreted a different set of 300 cases (150 cancer, 150 benign or normal) on the R2 ImageChecker Cenova system. RESULTS The average AUC was 0.71 (95% CI, 0.66–0.76) without and 0.72 (95% CI, 0.67–0.77) with the iCAD system (p = 0.07). Similarly, the average AUC was 0.71 (95% CI, 0.66–0.76) without and 0.72 (95% CI 0.67–0.77) with the R2 system (p = 0.08). Sensitivity and specificity differences without and with CAD for both systems also were not significant. CONCLUSION Radiologists in our studies rarely changed their diagnostic decisions after the addition of CAD. The application of CAD had no statistically significant effect on radiologist AUC, sensitivity, or specificity performance with digital mammograms from DMIST. PMID:25247960
Kohorst, Philipp; Brinkmann, Henrike; Li, Jiang; Borchers, Lothar; Stiesch, Meike
2009-06-01
Besides load-bearing capacity, marginal accuracy is a further crucial factor influencing the clinical long-term reliability of fixed dental prostheses (FDPs). The aim of this in vitro study was to evaluate the marginal fit of four-unit zirconia bridge frameworks fabricated using four different computer-aided design (CAD)/computer-aided manufacturing (CAM) systems. Ten frameworks were manufactured using each fabricating system. Three systems (inLab, Everest, Cercon) processed white-stage zirconia blanks, which had to be sintered to final density after milling, while with one system (Digident) restorations were directly milled from a fully sintered material. After manufacturing, horizontal and vertical marginal discrepancies, as well as the absolute marginal discrepancy, were determined by means of a replica technique. The absolute marginal discrepancy, which is considered to be the most suitable parameter reflecting restorations' misfit in the marginal area, had a mean value of 58 μm for the Digident system. By contrast, mean absolute marginal discrepancies for the three other systems, processing presintered blanks, differed significantly and ranged between 183 and 206 μm. Within the limitations of this study, it could be concluded that the marginal fit of zirconia FDPs is significantly dependent on the CAD/CAM system used, with restorations processed from fully sintered zirconia showing better fitting accuracy. PMID:19583762
NASA Astrophysics Data System (ADS)
Hu, Quan; Jia, Yinghong; Xu, Shijie
2012-12-01
This paper presents a new formulation for automatic generation of the motion equations of arbitrary multibody systems. The method is applicable to systems with rigid and flexible bodies. The number of degrees of freedom (DOF) of the bodies' interconnection joints is allowed to vary from 0 to 6. It permits the system to have tree topology or closed structural loops. The formulation is based on Kane's method. Each rigid or flexible body's contribution to the system generalized inertia force is expressed in a similar manner; therefore, it makes the formulation quite amenable to computer solution. All the recursive kinematic relations are developed, and efficient motion variables describing the elastic motion and the hinge motion are adopted to improve modeling efficiency. Motion constraints are handled by the new form of Kane's equation. The final mathematical model has the same dimension as the generalized speeds of the system and involves no Lagrange multipliers, so it is useful for control system design. An illustrative example is given to clarify several of the concepts involved, and numerical simulations validate the algorithm's accuracy and efficiency.
Nakazawa, Hisato; Mori, Yoshimasa; Komori, Masataka; Shibamoto, Yuta; Tsugawa, Takahiko; Kobayashi, Tatsuya; Hashizume, Chisa
2014-01-01
The latest version of Leksell GammaPlan (LGP) is equipped with Digital Imaging and Communication in Medicine (DICOM) image-processing functions including image co-registration. Diagnostic magnetic resonance imaging (MRI) taken prior to Gamma Knife treatment is available for virtual treatment pre-planning. On the treatment day, actual dose planning is completed on stereotactic MRI or computed tomography (CT) (with a frame) after co-registration with the diagnostic MRI and in association with the virtual dose distributions. This study assesses the accuracy of image co-registration in a phantom study and evaluates its usefulness in clinical cases. Images of three kinds of phantoms and 11 patients are evaluated. In the phantom study, co-registration errors of the 3D coordinates were measured in overall stereotactic space and compared between stereotactic CT and diagnostic CT, stereotactic MRI and diagnostic MRI, stereotactic CT and diagnostic MRI, and stereotactic MRI and diagnostic MRI co-registered with stereotactic CT. In the clinical study, target contours were compared between stereotactic MRI and diagnostic MRI co-registered with stereotactic CT. The mean errors of coordinates between images were < 1 mm in all measurement areas in both the phantom and clinical patient studies. The co-registration function implemented in LGP has sufficient geometrical accuracy to assure appropriate dose planning in clinical use. PMID:24781505
Zuehlsdorff, T J; Hine, N D M; Payne, M C; Haynes, P D
2015-11-28
We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment. PMID:26627950
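The solver class referred to here, preconditioned conjugate gradients, can be sketched generically. The code below is a plain Jacobi-preconditioned CG for a small symmetric positive-definite linear system, meant only to illustrate the iteration's low memory footprint (a handful of working vectors); it is not the TDDFT eigensolver itself, whose operator and preconditioner are problem-specific.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Conjugate gradient for A x = b with a diagonal (Jacobi)
    preconditioner. Only x, r, z, p, and A @ p are stored."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system (hypothetical values)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, 1.0 / np.diag(A))
```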
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
Matsuda, Takuya; Kido, Teruhito; Itoh, Toshihide; Saeki, Hideyuki; Shigemi, Susumu; Watanabe, Kouki; Kido, Tomoyuki; Aono, Shoji; Yamamoto, Masaya; Matsuda, Takeshi; Mochizuki, Teruhito
2015-12-01
We evaluated the image quality and diagnostic performance of late iodine enhancement (LIE) in dual-source computed tomography (DSCT) with low kilo-voltage peak (kVp) images and a denoise filter for the detection of acute myocardial infarction (AMI) in comparison with late gadolinium enhancement (LGE) magnetic resonance imaging (MRI). The Hospital Ethics Committee approved the study protocol. Before discharge, 19 patients who received percutaneous coronary intervention after AMI underwent DSCT and 1.5 T MRI. Immediately after coronary computed tomography (CT) angiography, contrast medium was administered at a slow injection rate. LIE-CT scans were acquired via dual-energy CT and reconstructed as 100-, 140-kVp, and mixed images. An iterative three-dimensional edge-preserved smoothing filter was applied to the 100-kVp images to obtain denoised 100-kVp images. The mixed, 140-kVp, 100-kVp, and denoised 100-kVp images were assessed using contrast-to-noise ratio (CNR), and their diagnostic performance in comparison with MRI and infarcted volumes were evaluated. Three hundred four segments of 19 patients were evaluated. Fifty-three segments showed LGE in MRI. The median CNR of the mixed, 140-, 100-kVp and denoised 100-kVp images was 3.49, 1.21, 3.57, and 6.08, respectively. The median CNR was significantly higher in the denoised 100-kVp images than in the other three images (P < 0.05). The denoised 100-kVp images showed the highest diagnostic accuracy and sensitivity. The percentage of myocardium in the four CT image types was significantly correlated with the respective MRI findings. The use of a denoise filter with a low-kVp image can improve CNR, sensitivity, and accuracy in LIE-CT. PMID:26202159
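The contrast-to-noise ratio used to compare the reconstructions is the difference in mean ROI attenuation divided by the noise standard deviation. A minimal sketch with made-up Hounsfield-unit values (not data from this study):

```python
import numpy as np

# Hypothetical ROI samples: late-enhancing infarct vs. remote myocardium
lesion = np.array([112.0, 118.0, 121.0, 115.0])   # enhancing ROI (HU)
background = np.array([78.0, 82.0, 80.0, 79.0])   # remote myocardium ROI (HU)
noise_sd = np.std(background, ddof=1)             # image noise estimate

# CNR = |mean_lesion - mean_background| / noise SD
cnr = abs(lesion.mean() - background.mean()) / noise_sd
```

Denoising lowers the noise SD in the denominator, which is why the filtered 100-kVp images achieve the highest CNR in the study.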
2016-01-01
An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729−2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H+/Cl– antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942
ERIC Educational Resources Information Center
Henney, Maribeth
Two related studies were conducted to determine whether students read all-capital text and mixed text displayed on a computer screen with the same speed and accuracy. Seventy-seven college students read M. A. Tinker's "Basic Reading Rate Test" displayed on a PLATO computer screen. One treatment consisted of paragraphs in all-capital type followed…
Building Efficient Wireless Infrastructures for Pervasive Computing Environments
ERIC Educational Resources Information Center
Sheng, Bo
2010-01-01
Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…
NASA Astrophysics Data System (ADS)
Hu, Baoxin; Li, Jili; Jing, Linhai; Judah, Aaron
2014-02-01
Canopy height model (CHM) derived from LiDAR (Light Detection And Ranging) data has been commonly used to generate segments of individual tree crowns for forest inventory and sustainable management. However, branches, tree crowns, and tree clusters usually have similar shapes and overlapping sizes, which cause current individual tree crown (ITC) delineation methods to work less effectively on closed canopy, deciduous or mixedwood forests. In addition, the potential of 3-dimensional (3-D) LiDAR data is not fully realized by CHM-oriented methods. In this study, a framework was proposed that takes advantage of the simplicity of a CHM-oriented method, the detailed vertical structure of tree crowns represented in high-density LiDAR data, and any prior knowledge of tree crowns, thereby improving the efficiency and accuracy of ITC delineation. This framework consists of five steps: (1) determination of dominant crown sizes; (2) generation of initial tree segments using a multi-scale segmentation method; (3) identification of “problematic” segments; (4) determination of the number of trees based on the 3-D LiDAR points in each of the identified segments; and (5) refinement of the “problematic” segments by splitting and merging operations. The proposed framework was efficient, since the detailed examination of 3-D LiDAR points was not applied to all initial segments, but only to those needing further evaluation based on prior knowledge. It was also demonstrated to be effective based on an experiment on natural forests in Ontario, Canada. The proposed framework and specific methods yielded crown maps having a good consistency with manual and visual interpretation. The automated method correctly delineated about 74% and 72% of the tree crowns in two plots with mixedwood and deciduous trees, respectively.
Rao Min; Yang Wensha; Chen Fan; Sheng Ke; Ye Jinsong; Mehta, Vivek; Shepard, David; Cao Daliang
2010-03-15
Purpose: Helical tomotherapy (HT) and volumetric modulated arc therapy (VMAT) are arc-based approaches to IMRT delivery. The objective of this study is to compare VMAT to both HT and fixed field IMRT in terms of plan quality, delivery efficiency, and accuracy. Methods: Eighteen cases including six prostate, six head-and-neck, and six lung cases were selected for this study. IMRT plans were developed using direct machine parameter optimization in the Pinnacle³ treatment planning system. HT plans were developed using a Hi-Art II planning station. VMAT plans were generated using both the Pinnacle³ SmartArc IMRT module and a home-grown arc sequencing algorithm. VMAT and HT plans were delivered using Elekta's PreciseBeam VMAT linac control system (Elekta AB, Stockholm, Sweden) and a TomoTherapy Hi-Art II system (TomoTherapy Inc., Madison, WI), respectively. Treatment plan quality assurance (QA) for VMAT was performed using the IBA MatriXX system while an ion chamber and films were used for HT plan QA. Results: The results demonstrate that both VMAT and HT are capable of providing more uniform target doses and improved normal tissue sparing as compared with fixed field IMRT. In terms of delivery efficiency, VMAT plan deliveries on average took 2.2 min for prostate and lung cases and 4.6 min for head-and-neck cases. These values increased to 4.7 and 7.0 min for HT plans. Conclusions: Both VMAT and HT plans can be delivered accurately based on their own QA standards. Overall, VMAT was able to provide approximately a 40% reduction in treatment time while maintaining comparable plan quality to that of HT.
Tsai, Tai-Hsin; Wu, Dong-Syuan; Su, Yu-Feng; Wu, Chieh-Hsin; Lin, Chih-Lung
2016-01-01
The purpose of this retrospective study was to validate an intraoperative robotic grading classification system for assessing the accuracy of Kirschner-wire (K-wire) placements against the postoperative computed tomography (CT)-based classification system for assessing the accuracy of pedicle screw placements. We conducted a retrospective review of prospectively collected data from 35 consecutive patients who underwent 176 robot-assisted pedicle screw instrumentations at Kaohsiung Medical University Hospital from September 2014 to November 2015. During the operation, we used a robotic grading classification system to verify the intraoperative accuracy of K-wire placements. Three months after surgery, we used the common CT-based classification system to assess the postoperative accuracy of pedicle screw placements. The distributions of accuracy between the intraoperative robot-assisted and various postoperative CT-based classification systems were compared using kappa statistics of agreement. The intraoperative accuracies of K-wire placements before and after repositioning were classified as excellent (131/176, 74.4% and 133/176, 75.6%, respectively), satisfactory (36/176, 20.5% and 41/176, 23.3%, respectively), and malpositioned (9/176, 5.1% and 2/176, 1.1%, respectively); the placements were then evaluated under the postoperative CT-based classification systems. No screw placements were evaluated as unacceptable under any of these systems. Kappa statistics revealed no significant differences between the proposed system and the aforementioned classification systems (P <0.001). Our results revealed no significant differences between the intraoperative robotic grading system and various postoperative CT-based grading systems. The robotic grading classification system is a feasible method for evaluating the accuracy of K-wire placements. Using the intraoperative robot grading system to classify the accuracy of K-wire placements enables predicting the postoperative accuracy of
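The kappa statistic of agreement used to compare the grading systems above can be sketched as Cohen's kappa over a square rater-by-rater table. The 3x3 table below is hypothetical (the abstract reports only marginal counts, not the full cross-tabulation):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of two raters:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    m = np.asarray(confusion, dtype=float)
    n = m.sum()
    po = np.trace(m) / n                                 # observed agreement
    pe = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical intraoperative vs postoperative grades over 176 screws
# (rows/cols: excellent, satisfactory, malpositioned)
table = [[130, 3, 0],
         [4, 35, 1],
         [0, 1, 2]]
print(round(cohens_kappa(table), 3))
```

Kappa near 1 means the intraoperative K-wire grade is a good predictor of the postoperative CT grade; kappa near 0 means agreement no better than chance.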
Avelar, Erick; Durst, Ronen; Rosito, Guido A; Thangaroopan, Molly; Kumar, Simi; Tournoux, Francois; Chan, Raymond C; Hung, Judy; Hoffmann, Udo; Abbara, Suhny; Brady, Thomas; Cury, Ricardo C
2010-07-01
Left atrial (LA) volume is an important prognostic factor in cardiovascular disease. Multidetector computed tomography (MDCT) is an emerging cardiac imaging modality; however, its accuracy in measuring the LA volume has not been well studied. The aim of our study was to determine the accuracy of MDCT in quantifying the LA volume. A total of 48 patients underwent MDCT and 2-dimensional (2D) echocardiography (2DE) on the same day. The area-length and Simpson's methods were used to obtain the 2D echocardiographic LA volume. The LA volume assessment by MDCT was obtained using the modified Simpson's method. Four artificial phantoms were created, and their true volume was assessed by an independent observer using both imaging modalities. The correlation between the LA volume by MDCT and 2DE was significant (r = 0.68). The mean 2D echocardiographic LA volume was lower than the LA volume obtained with MDCT (2DE 79 ± 37 vs MDCT 103 ± 32, p <0.05). In the phantom experiment, the volume obtained using MDCT and 2DE correlated significantly with the true volume (r = 0.97, p <0.05 vs r = 0.96, p <0.05, respectively). However, the mean 2D echocardiographic phantom volume was 16% lower than the true volume (2DE, Simpson's method 53 ± 24 vs the true volume 61 ± 24, p <0.05). The mean volume calculated using MDCT did not differ from the true volume (MDCT 60 ± 21 vs true volume 61 ± 24, p = NS). 2DE appeared to systematically underestimate the LA volume compared to phantom and cardiac MDCT, suggesting that different normal cutoff values should be used for each modality. In conclusion, LA volume quantification using MDCT is an accurate and feasible method. PMID:20609656
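The modified Simpson's method referenced above is, at its core, a method of disks: the chamber volume is the sum of traced cross-sectional slice areas times slice thickness. A minimal sketch under that assumption, with entirely hypothetical slice areas:

```python
def simpson_volume(slice_areas_mm2, slice_thickness_mm):
    """Modified Simpson's rule (method of disks): volume is the sum of
    cross-sectional slice areas times the slice thickness. This is a
    simplification of the MDCT tracing workflow, for illustration only."""
    return sum(slice_areas_mm2) * slice_thickness_mm

# Hypothetical traced LA areas (mm^2) on contiguous 8 mm short-axis slices
areas = [400, 900, 1400, 1600, 1500, 1100, 600, 200]
volume_ml = simpson_volume(areas, 8.0) / 1000.0  # mm^3 -> mL
print(round(volume_ml, 1))
```

A systematic bias such as 2DE's ~16% underestimation would show up here as consistently smaller traced areas, which propagates linearly into the summed volume.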
Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui
2016-01-01
Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, which will affect the effectiveness of diagnosis, treatment plan, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstruction results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups: the NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), the NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and the VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by a 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the imposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for the NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for the NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for the VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). No statistically significant differences were
Experiences with Efficient Methodologies for Teaching Computer Programming to Geoscientists
ERIC Educational Resources Information Center
Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.
2016-01-01
Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students…
Gong Xing; Glick, Stephen J.; Liu, Bob; Vedula, Aruna A.; Thacker, Samta
2006-04-15
Although conventional mammography is currently the best modality to detect early breast cancer, it is limited in that the recorded image represents the superposition of a three-dimensional (3D) object onto a 2D plane. Recently, two promising approaches for 3D volumetric breast imaging have been proposed, breast tomosynthesis (BT) and CT breast imaging (CTBI). To investigate possible improvements in lesion detection accuracy with either breast tomosynthesis or CT breast imaging as compared to digital mammography (DM), a computer simulation study was conducted using simulated lesions embedded into a structured 3D breast model. The computer simulation realistically modeled x-ray transport through a breast model, as well as the signal and noise propagation through a CsI-based flat-panel imager. Polyenergetic x-ray spectra of Mo/Mo 28 kVp for digital mammography, Mo/Rh 28 kVp for BT, and W/Ce 50 kVp for CTBI were modeled. For the CTBI simulation, the intensity of the x-ray spectra for each projection view was determined so as to provide a total average glandular dose of 4 mGy, which is approximately equivalent to that given in conventional two-view screening mammography. The same total dose was modeled for both the DM and BT simulations. Irregular lesions were simulated by using a stochastic growth algorithm providing lesions with an effective diameter of 5 mm. Breast tissue was simulated by generating an ensemble of backgrounds with a power law spectrum, with the composition of 50% fibroglandular and 50% adipose tissue. To evaluate lesion detection accuracy, a receiver operating characteristic (ROC) study was performed with five observers reading an ensemble of images for each case. The average area under the ROC curves (A_z) was 0.76 for DM, 0.93 for BT, and 0.94 for CTBI. Results indicated that for the same dose, a 5 mm lesion embedded in a structured breast phantom was detected by the two volumetric breast imaging systems, BT and CTBI, with statistically
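The area under the ROC curve (A_z) reported above can be computed from observer ratings without fitting a curve, via the Mann-Whitney rank statistic: the probability that a randomly chosen lesion-present case is rated higher than a lesion-absent one, with ties counting one half. A sketch with invented confidence ratings:

```python
def roc_auc(scores_negative, scores_positive):
    """Empirical area under the ROC curve as the Mann-Whitney statistic:
    P(positive case rated above negative case), ties counted as 0.5."""
    pairs = 0.0
    for p in scores_positive:
        for n in scores_negative:
            if p > n:
                pairs += 1.0
            elif p == n:
                pairs += 0.5
    return pairs / (len(scores_positive) * len(scores_negative))

# Hypothetical observer confidence ratings (1-5), lesion absent vs present
absent = [1, 2, 2, 3, 1, 2]
present = [3, 4, 5, 4, 2, 5]
print(round(roc_auc(absent, present), 3))
```

An A_z of 0.5 corresponds to chance performance; the reported 0.76 vs 0.93/0.94 gap is the lesion-detection advantage of the volumetric modalities.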
Ramos-Méndez, José; Perl, Joseph; Faddegon, Bruce; Schümann, Jan; Paganetti, Harald
2013-01-01
Purpose: To present the implementation and validation of a geometrical based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth–dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10–20.3 was reached for phase space calculations for the different treatment head options simulated. Depth–dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth–dose with an average difference of (0.2 ± 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for
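The azimuthal part of the splitting scheme described above can be sketched simply: each split copy keeps the parent's radial distance from the beam (z) axis but is placed at a random azimuth, carrying 1/8 of the parent's statistical weight so the expected radial fluence is unchanged. Function and variable names are illustrative, not from TOPAS:

```python
import math
import random

def split_particle(x, y, weight, n_split=8):
    """Split one particle into n_split copies at random azimuthal
    rotations about the beam (z) axis, each with weight/n_split,
    preserving the cylindrically symmetric fluence on average."""
    r = math.hypot(x, y)  # radial distance from the beam axis
    copies = []
    for _ in range(n_split):
        phi = random.uniform(0.0, 2.0 * math.pi)
        copies.append((r * math.cos(phi), r * math.sin(phi), weight / n_split))
    return copies

random.seed(0)
copies = split_particle(3.0, 4.0, 1.0)
print(len(copies), round(sum(w for _, _, w in copies), 6))
```

Because each copy is cheaper to produce than an independently tracked history, the variance per unit CPU time drops, which is the source of the 10-20x efficiency gain quoted in the abstract.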
Morrison-Beedy, Dianne; Carey, Michael P; Tu, Xin
2006-09-01
This study examined the accuracy of two retrospective methods and assessment intervals for recall of sexual behavior and assessed predictors of recall accuracy. Using a 2 [mode: audio-computer assisted self-interview (ACASI) vs. self-administered questionnaire (SAQ)] by 2 (frequency: monthly vs. quarterly) design, young women (N =102) were randomly assigned to one of four conditions. Participants completed baseline measures, monitored their behavior with a daily diary, and returned monthly (or quarterly) for assessments. A mixed pattern of accuracy between the four assessment methods was identified. Monthly assessments yielded more accurate recall for protected and unprotected vaginal sex but quarterly assessments yielded more accurate recall for unprotected oral sex. Mode differences were not strong, and hypothesized predictors of accuracy tended not to be associated with recall accuracy. Choice of assessment mode and frequency should be based upon the research question(s), population, resources, and context in which data collection will occur. PMID:16721506
Oltean, Gabriel; Ivanciu, Laura-Nicoleta
2016-01-01
The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate yet accurate metamodels capable of generating not-yet-simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10^-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general purpose computer), and simplicity (less than 1 second for running the metamodel, the user only provides the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the
On-board computational efficiency in real time UAV embedded terrain reconstruction
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Agadakos, Ioannis; Athanasiou, Vasilis; Papaefstathiou, Ioannis; Mertikas, Stylianos; Kyritsis, Sarantis; Tripolitsiotis, Achilles; Zervos, Panagiotis
2014-05-01
In the last few years, there has been a surge of applications for object recognition, interpretation and mapping using unmanned aerial vehicles (UAV). Specifications in constructing those UAVs are highly diverse, with contradictory characteristics including cost-efficiency, carrying weight, flight time, mapping precision, real time processing capabilities, etc. In this work, a hexacopter UAV is employed for near real time terrain mapping. The main challenge addressed is to retain a low cost flying platform with real time processing capabilities. The UAV weight limitation, affecting the overall flight time, makes the selection of the on-board processing components particularly critical. On the other hand, surface reconstruction, as a computationally demanding task, calls for a highly capable processing unit on board. To merge these two contradicting aspects along with customized development, a System on a Chip (SoC) integrated circuit is proposed as a low-power, low-cost processor, which natively supports camera sensors and positioning and navigation systems. Modern SoCs, such as Omap3530 or Zynq, are classified as heterogeneous devices and provide a versatile platform, allowing access to both general purpose processors, such as the ARM11, as well as specialized processors, such as a digital signal processor and a field-programmable gate array. A UAV equipped with the proposed embedded processors allows on-board terrain reconstruction using stereo vision in near real time. Furthermore, according to the frame rate required, additional image processing may concurrently take place, such as image rectification and object detection. Lastly, the onboard positioning and navigation (e.g., GNSS) chip may further improve the quality of the generated map. The resulting terrain maps are compared to ground truth geodetic measurements in order to assess the accuracy limitations of the overall process. It is shown that with our proposed novel system, there is much potential in
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
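The linear response approximation averaging mentioned above has a compact standard form: the free-energy difference between potentials U0 and U1 is estimated as half the sum of the mean energy gap sampled on each end state. A minimal sketch of that estimator (the energy-gap samples below are invented; the paper's multistep variant chains several such perturbations):

```python
def lra_free_energy(gap_sampled_on_0, gap_sampled_on_1):
    """Linear response approximation (LRA) free-energy estimate:
    dG ~= 0.5 * ( <U1 - U0>_0 + <U1 - U0>_1 ),
    i.e., the energy gap averaged over configurations sampled on the
    initial (0) and final (1) potentials."""
    mean = lambda xs: sum(xs) / len(xs)
    return 0.5 * (mean(gap_sampled_on_0) + mean(gap_sampled_on_1))

# Hypothetical energy gaps (kcal/mol) from the two sampling ensembles
gaps_on_0 = [2.0, 2.2, 1.8]
gaps_on_1 = [1.0, 1.2, 0.8]
print(lra_free_energy(gaps_on_0, gaps_on_1))
```

Averaging the two one-sided perturbation estimates is what lets the method position the locally refined fine-physics segments relative to the coarse-physics free-energy shifts.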
ERIC Educational Resources Information Center
Ward, Hugh C., Jr.
A study was undertaken to explore whether students using an advance organizer-metacognitive learning strategy would be less anxious, more self-directing, more efficient, and more self-confident when learning unknown computer applications software than students using traditional computer software learning strategies. The first experiment was…
NASA Astrophysics Data System (ADS)
Howell, Bryan; McIntyre, Cameron C.
2016-06-01
Objective. Deep brain stimulation (DBS) is an adjunctive therapy that is effective in treating movement disorders and shows promise for treating psychiatric disorders. Computational models of DBS have begun to be utilized as tools to optimize the therapy. Despite advancements in the anatomical accuracy of these models, there is still uncertainty as to what level of electrical complexity is adequate for modeling the electric field in the brain and the subsequent neural response to the stimulation. Approach. We used magnetic resonance images to create an image-based computational model of subthalamic DBS. The complexity of the volume conductor model was increased by incrementally including heterogeneity, anisotropy, and dielectric dispersion in the electrical properties of the brain. We quantified changes in the load of the electrode, the electric potential distribution, and stimulation thresholds of descending corticofugal (DCF) axon models. Main results. Incorporation of heterogeneity altered the electric potentials and subsequent stimulation thresholds, but to a lesser degree than incorporation of anisotropy. Additionally, the results were sensitive to the choice of method for defining anisotropy, with stimulation thresholds of DCF axons changing by as much as 190%. Typical approaches for defining anisotropy underestimate the expected load of the stimulation electrode, which led to underestimation of the extent of stimulation. More accurate predictions of the electrode load were achieved with alternative approaches for defining anisotropy. The effects of dielectric dispersion were small compared to the effects of heterogeneity and anisotropy. Significance. The results of this study help delineate the level of detail that is required to accurately model electric fields generated by DBS electrodes.
Efficient reinforcement learning: computational theories, neuroscience and robotics.
Kawato, Mitsuo; Samejima, Kazuyuki
2007-04-01
Reinforcement learning algorithms have provided some of the most influential computational theories for behavioral learning that depends on reward and penalty. After briefly reviewing supporting experimental data, this paper tackles three difficult theoretical issues that remain to be explored. First, plain reinforcement learning is much too slow to be considered a plausible brain model. Second, although the temporal-difference error has an important role both in theory and in experiments, how to compute it remains an enigma. Third, the function of all brain areas, including the cerebral cortex, cerebellum, brainstem and basal ganglia, seems to necessitate a new computational framework. Computational studies that emphasize meta-parameters, hierarchy, modularity and supervised learning to resolve these issues are reviewed here, together with the related experimental data.
Efficient computation of root mean square deviations under rigid transformations.
Hildebrandt, Anna K; Dietzen, Matthias; Lengauer, Thomas; Lenhof, Hans-Peter; Althaus, Ernst; Hildebrandt, Andreas
2014-04-15
The computation of root mean square deviations (RMSD) is an important step in many bioinformatics applications. If approached naively, each RMSD computation takes time linear in the number of atoms. In addition, a careful implementation is required to achieve numerical stability, which further increases runtimes. In practice, the structural variations under consideration are often induced by rigid transformations of the protein, or are at least dominated by a rigid component. In this work, we show how RMSD values resulting from rigid transformations can be computed in constant time from the protein's covariance matrix, which can be precomputed in linear time. As a typical application scenario is protein clustering, we will also show how the Ward-distance which is popular in this field can be reduced to RMSD evaluations, yielding a constant time approach for their computation.
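The constant-time evaluation described above follows from expanding the squared deviations of a rigid transform; a minimal NumPy sketch (our own notation and function names, not the paper's) illustrates the idea:

```python
import numpy as np

def rmsd_naive(X, R, t):
    # O(N) baseline: RMSD between X and its rigidly transformed copy R x + t
    Y = X @ R.T + t
    return np.sqrt(np.mean(np.sum((Y - X) ** 2, axis=1)))

def precompute_moments(X):
    # O(N), done once: mean vector and second-moment matrix of the coordinates
    return X.mean(axis=0), (X.T @ X) / len(X)

def rmsd_rigid(mean, M, R, t):
    # O(1) per transform: with A = R - I,
    # (1/N) sum_i ||A x_i + t||^2 = tr(A M A^T) + 2 t.(A mean) + t.t
    A = R - np.eye(3)
    return np.sqrt(np.trace(A @ M @ A.T) + 2.0 * t @ (A @ mean) + t @ t)
```

After the linear-time precomputation, each candidate rotation/translation costs only a few 3x3 matrix operations, which is what makes clustering applications with many pairwise RMSD evaluations tractable.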
Zinser, Max J; Mischkowski, Robert A; Dreiseidler, Timo; Thamm, Oliver C; Rothamel, Daniel; Zöller, Joachim E
2013-12-01
There may well be a shift towards 3-dimensional orthognathic surgery when virtual surgical planning can be applied clinically. We present a computer-assisted protocol that uses surgical navigation supplemented by an interactive image-guided visualisation display (IGVD) to transfer virtual maxillary planning precisely. The aim of this study was to analyse its accuracy and versatility in vivo. The protocol consists of maxillofacial imaging, diagnosis, planning of virtual treatment, and intraoperative surgical transfer using an IGV display. The advantage of the interactive IGV display is that the virtually planned maxilla and its real position can be completely superimposed during the operation through a video graphics array (VGA) camera, thereby augmenting the surgeon's 3-dimensional perception. Sixteen adult class III patients were treated by bimaxillary osteotomy. Seven hard tissue variables were chosen to compare (ΔT1-T0) the virtual maxillary planning (T0) with the postoperative result (T1) using 3-dimensional cephalometry. Clinically acceptable precision for the surgical planning transfer of the maxilla (<0.35 mm) was seen in the anteroposterior and mediolateral angles, and in relation to the skull base (<0.35°), and marginal precision was seen in the orthogonal dimension (<0.64 mm). An interactive IGV display complemented surgical navigation, augmented virtual and real-time reality, and provided a precise technique of waferless stereotactic maxillary positioning, which may offer an alternative approach to the use of arbitrary splints and 2-dimensional orthognathic planning.
Gaia, Bruno Felipe; Pinheiro, Lucas Rodrigues; Umetsubo, Otávio Shoite; Santos, Oseas; Costa, Felipe Ferreira; Cavalcanti, Marcelo Gusmão Paraíso
2014-03-01
Our purpose was to compare the accuracy and reliability of linear measurements for Le Fort I osteotomy using volume rendering software. We studied 11 dried skulls and used cone-beam computed tomography (CT) to generate 3-dimensional images. Linear measurements were based on craniometric anatomical landmarks that were predefined as specifically used for Le Fort I osteotomy, and identified twice each by 2 radiologists, independently, using Dolphin imaging version 11.5.04.35. A third examiner then made physical measurements using digital calipers. There was a significant difference between Dolphin imaging and the gold standard, particularly in the pterygoid process. The largest difference was 1.85 mm (LLpPtg L). The mean differences between the physical and the 3-dimensional linear measurements ranged from -0.01 to 1.12 mm for examiner 1, and 0 to 1.85 mm for examiner 2. Interexaminer analysis ranged from 0.51 to 0.93. Intraexaminer correlation coefficients ranged from 0.81 to 0.96 and 0.57 to 0.92, for examiners 1 and 2, respectively. We conclude that Dolphin imaging should be used sparingly during Le Fort I osteotomy.
Ma, J; Wittek, A; Singh, S; Joldes, G; Washio, T; Chinzei, K; Miller, K
2010-12-01
In this paper, the accuracy of non-linear finite element computations in application to surgical simulation was evaluated by comparing the experiment and modelling of indentation of the human brain phantom. The evaluation was realised by comparing forces acting on the indenter and the deformation of the brain phantom. The deformation of the brain phantom was measured by tracking 3D motions of X-ray opaque markers, placed within the brain phantom using a custom-built bi-plane X-ray image intensifier system. The model was implemented using the ABAQUS(TM) finite element solver. Realistic geometry obtained from magnetic resonance images and specific constitutive properties determined through compression tests were used in the model. The model accurately predicted the indentation force-displacement relations and marker displacements. Good agreement between modelling and experimental results verifies the reliability of the finite element modelling techniques used in this study and confirms the predictive power of these techniques in surgical simulation. PMID:21153973
NASA Astrophysics Data System (ADS)
Kim, E.; Bowsher, J.; Thomas, A. S.; Sakhalkar, H.; Dewhirst, M.; Oldham, M.
2008-10-01
Optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT) are new techniques for imaging the 3D structure and function (including gene expression) of whole unsectioned tissue samples. This work presents a method of improving the quantitative accuracy of optical-ECT by correcting for the 'self'-attenuation of photons emitted within the sample. The correction is analogous to a method commonly applied in single-photon-emission computed tomography reconstruction. The performance of the correction method was investigated by application to a transparent cylindrical gelatin phantom, containing a known distribution of attenuation (a central ink-doped gelatine core) and a known distribution of fluorescing fibres. Attenuation corrected and uncorrected optical-ECT images were reconstructed on the phantom to enable an evaluation of the effectiveness of the correction. Significant attenuation artefacts were observed in the uncorrected images where the central fibre appeared ~24% less intense due to greater attenuation from the surrounding ink-doped gelatin. This artefact was almost completely removed in the attenuation-corrected image, where the central fibre was within ~4% of the others. The successful phantom test enabled application of attenuation correction to optical-ECT images of an unsectioned human breast xenograft tumour grown subcutaneously on the hind leg of a nude mouse. This tumour cell line had been genetically labelled (pre-implantation) with fluorescent reporter genes such that all viable tumour cells expressed constitutive red fluorescent protein and hypoxia-inducible factor 1 transcription-produced green fluorescent protein. In addition to the fluorescent reporter labelling of gene expression, the tumour microvasculature was labelled by a light-absorbing vasculature contrast agent delivered in vivo by tail-vein injection. Optical-CT transmission images yielded high-resolution 3D images of the absorbing contrast agent, and
Limits on efficient computation in the physical world
NASA Astrophysics Data System (ADS)
Aaronson, Scott Joel
More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure
Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Chen, Jen-Ping
2012-01-01
This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.
Computationally efficient calibration of WATCLASS Hydrologic models using surrogate optimization
NASA Astrophysics Data System (ADS)
Kamali, M.; Ponnambalam, K.; Soulis, E. D.
2007-07-01
In this approach, exploration of the cost function space was performed with an inexpensive surrogate function rather than the expensive original function. The Design and Analysis of Computer Experiments (DACE) surrogate function, a type of approximate model that uses a correlation function for the error term, was employed. The results for Monte Carlo sampling, Latin hypercube sampling, and the DACE approximate model have been compared. The results show that the DACE model has good potential for predicting the trend of simulation results. The case study of this document was WATCLASS hydrologic model calibration on the Smokey-River watershed.
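A DACE-style surrogate is essentially ordinary kriging with a parametric correlation model. The sketch below is our own minimal implementation with a fixed Gaussian correlation parameter theta (a full DACE toolbox would tune theta by maximum likelihood); it illustrates how a cheap surrogate interpolates expensive model evaluations:

```python
import numpy as np

def gauss_corr(A, B, theta):
    # Gaussian correlation model: R_ij = exp(-theta * ||a_i - b_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-theta * d2)

class Kriging:
    def __init__(self, X, y, theta=1.0, nugget=1e-10):
        self.X, self.theta = X, theta
        R = gauss_corr(X, X, theta) + nugget * np.eye(len(X))
        self.Rinv = np.linalg.inv(R)
        ones = np.ones(len(X))
        # ordinary-kriging mean and interpolation weights
        self.mu = ones @ self.Rinv @ y / (ones @ self.Rinv @ ones)
        self.w = self.Rinv @ (y - self.mu)

    def predict(self, Xnew):
        return self.mu + gauss_corr(Xnew, self.X, self.theta) @ self.w
```

Once fitted, each surrogate prediction costs a correlation-vector product instead of a full hydrologic model run, which is what makes surrogate-based calibration affordable.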
Efficient computational simulation of actin stress fiber remodeling.
Ristori, T; Obbink-Huizer, C; Oomens, C W J; Baaijens, F P T; Loerakker, S
2016-09-01
Understanding collagen and stress fiber remodeling is essential for the development of engineered tissues with good functionality. These processes are complex, highly interrelated, and occur over different time scales. As a result, excessive computational costs are required to computationally predict the final organization of these fibers in response to dynamic mechanical conditions. In this study, an analytical approximation of a stress fiber remodeling evolution law was derived. A comparison of the developed technique with the direct numerical integration of the evolution law showed relatively small differences in results, and the proposed method is one to two orders of magnitude faster.
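The abstract does not reproduce the stress fiber evolution law itself, but the speedup mechanism can be illustrated with a hypothetical first-order relaxation law df/dt = (f_eq - f)/tau, whose closed-form solution replaces many small time steps with a single evaluation:

```python
import math

def evolve_euler(f0, f_eq, tau, t_end, dt=1e-4):
    # direct numerical integration: tens of thousands of explicit-Euler steps
    f, t = f0, 0.0
    while t < t_end:
        f += dt * (f_eq - f) / tau
        t += dt
    return f

def evolve_analytic(f0, f_eq, tau, t_end):
    # analytical approximation: a single evaluation regardless of t_end
    return f_eq + (f0 - f_eq) * math.exp(-t_end / tau)
```

The one-to-two-orders-of-magnitude speedup reported above comes from exactly this kind of trade: the analytic form costs O(1) per material point instead of O(t_end/dt) integration steps.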
An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing
Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei
2016-01-01
Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also raises cloud users’ costs. Therefore, multimedia cloud providers should minimize their energy consumption as much as possible while satisfying consumers’ resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms are combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center’s energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads, preventing hosts from overloading after VM placement and reducing SLA violations dramatically. PMID:26901201
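The abstract does not spell out the RUA placement rule, so the following best-fit-with-headroom heuristic is only an illustrative sketch of the idea of utilization-aware placement (the threshold, field names, and selection rule are our assumptions, not the paper's):

```python
def place_vm(hosts, vm_demand, upper=0.9):
    # Hypothetical sketch: choose the feasible host whose remaining
    # headroom below the overload threshold (upper * capacity) is smallest
    # after placement, leaving other hosts free for consolidation/shutdown.
    best, best_left = None, None
    for h in hosts:
        left = h["capacity"] * upper - h["used"] - vm_demand
        if left >= 0 and (best_left is None or left < best_left):
            best, best_left = h, left
    if best is not None:
        best["used"] += vm_demand
    return best  # None means no host can take the VM without overloading
```

Reserving headroom (the `upper` threshold) is one simple way to prevent hosts from overloading under variable workload after placement, which is the failure mode the RUA algorithm targets.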
Efficient algorithm to compute mutually connected components in interdependent networks.
Hwang, S; Choi, S; Lee, Deokjae; Kahng, B
2015-02-01
Mutually connected components (MCCs) play an important role as a measure of resilience in the study of interdependent networks. Despite their importance, an efficient algorithm to obtain the statistics of all MCCs during the removal of links has thus far been absent. Here, using a well-known fully dynamic graph algorithm, we propose an efficient algorithm to accomplish this task. We show that the time complexity of this algorithm is approximately O(N^1.2) for random graphs, which is more efficient than the O(N^2) of the brute-force algorithm. We confirm the correctness of our algorithm by comparing the behavior of the order parameter as links are removed with existing results for three types of double-layer multiplex networks. We anticipate that this algorithm will be used for simulations of large-size systems that have been previously inaccessible. PMID:25768559
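For context, the brute-force baseline the paper improves on can be sketched as follows: repeatedly split candidate node sets by the connected components of each layer until every set is connected in both layers (a two-layer multiplex sketch; function names are ours):

```python
from collections import defaultdict

def components(nodes, edges):
    # connected components of the subgraph induced on `nodes`
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def mutually_connected_components(nodes, edges_a, edges_b):
    # brute force: a set is an MCC when it is connected in both layers;
    # otherwise split it by either layer's components and recurse
    queue, mccs = [set(nodes)], []
    while queue:
        cand = queue.pop()
        parts = components(cand, edges_a)
        if len(parts) > 1:
            queue.extend(parts)
            continue
        parts = components(cand, edges_b)
        if len(parts) > 1:
            queue.extend(parts)
            continue
        mccs.append(cand)
    return mccs
```

Rerunning this from scratch after every link removal is what yields the O(N^2)-style cost; the paper's contribution is to maintain the components incrementally with a fully dynamic graph algorithm instead.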
Towards efficient backward-in-time adjoint computations using data compression techniques
Cyr, E. C.; Shadid, J. N.; Wildey, T.
2014-12-16
In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state-of-the-practice is to use adjoint approaches which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard-approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e. scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion–reaction equation and on the Navier–Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error-estimates.
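The abstract does not specify a particular compression scheme, but truncated SVD of the forward-solution snapshot matrix is one common, illustrative choice (our sketch, not necessarily the authors' method):

```python
import numpy as np

def compress_snapshots(S, rank):
    # store only a rank-r factorization of the n_dof x n_steps snapshot
    # matrix S, instead of every forward time step
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank]

def reconstruct(U, s, Vt):
    # compressed approximation of the forward solution, used when
    # integrating the adjoint problem backward in time
    return (U * s) @ Vt
```

For a matrix of n_dof x n_steps doubles, storing the rank-r factors costs r * (n_dof + n_steps + 1) values; when the forward dynamics are dominated by a few modes, this is where large compression ratios come from.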
BINGO: a code for the efficient computation of the scalar bi-spectrum
Hazra, Dhiraj Kumar; Sriramkumar, L.; Martin, Jérôme
2013-05-01
We present a new and accurate Fortran code, the BI-spectra and Non-Gaussianity Operator (BINGO), for the efficient numerical computation of the scalar bi-spectrum and the non-Gaussianity parameter f_NL in single field inflationary models involving the canonical scalar field. The code can calculate all the different contributions to the bi-spectrum and the parameter f_NL for an arbitrary triangular configuration of the wavevectors. Focusing firstly on the equilateral limit, we illustrate the accuracy of BINGO by comparing the results from the code with the spectral dependence of the bi-spectrum expected in power law inflation. Then, considering an arbitrary triangular configuration, we contrast the numerical results with the analytical expression available in the slow roll limit, for, say, the case of the conventional quadratic potential. Considering a non-trivial scenario involving deviations from slow roll, we compare the results from the code with the analytical results that have recently been obtained in the case of the Starobinsky model in the equilateral limit. As an immediate application, we utilize BINGO to examine the power of the non-Gaussianity parameter f_NL to discriminate between various inflationary models that admit departures from slow roll and lead to similar features in the scalar power spectrum. We close with a summary and discussion on the implications of the results we obtain.
Learning with Computer-Based Multimedia: Gender Effects on Efficiency
ERIC Educational Resources Information Center
Pohnl, Sabine; Bogner, Franz X.
2012-01-01
Up to now, only a few studies in multimedia learning have focused on gender effects. While research has mostly focused on learning success, the effect of gender on instructional efficiency (IE) has not yet been considered. Consequently, we used a quasi-experimental design to examine possible gender differences in the learning success, mental…
College Students' Reading Efficiency with Computer-Presented Text.
ERIC Educational Resources Information Center
Wepner, Shelley B.; Feeley, Joan T.
Focusing on improving college students' reading efficiency, a study investigated whether a commercially-prepared computerized speed reading package, Speed Reader II, could be utilized as effectively as traditionally printed text. Subjects were 70 college freshmen from a college reading and rate improvement course with borderline scores on the…
Sato, Koji; Kanemura, Tokumi; Iwase, Toshiki; Togawa, Daisuke; Matsuyama, Yukihiro
2016-01-01
Study Design Retrospective. Purpose This study aims to investigate the accuracy of the oblique fluoroscopic view, based on preoperative computed tomography (CT) images for accurate placement of lumbosacral percutaneous pedicle screws (PPS). Overview of Literature Although PPS misplacement has been reported as one of the main complications in minimally invasive spine surgery, there is no comparative data on the misplacement rate among different fluoroscopic techniques, or comparing such techniques with open procedures. Methods We retrospectively selected 230 consecutive patients who underwent posterior spinal fusion with a pedicle screw construct for degenerative lumbar disease, and divided them into 3 groups, those who had undergone: minimally invasive percutaneous procedure using biplane (lateral and anterior-posterior views using a single C-arm) fluoroscope views (group M-1), minimally invasive percutaneous procedure using the oblique fluoroscopic view based on preoperative CT (group M-2), and conventional open procedure using a lateral fluoroscopic view (group O: controls). The relative position of the screw to the pedicle was graded for the pedicle breach as no breach, <2 mm, 2–4 mm, or >4 mm. Inaccuracy was calculated and assessed according to the spinal level, direction and neurological deficit. Inter-group radiation exposure was estimated using fluoroscopy time. Results Inaccuracy involved an incline toward L5, causing medial or lateral perforation of pedicles in group M-1, but it was distributed relatively equally throughout multiple levels in groups M-2 and controls. The mean fluoroscopy time/case ranged from 1.6 to 3.9 minutes. Conclusions Minimally invasive lumbosacral PPS placement using the conventional fluoroscopic technique carries an increased risk of inaccurate screw placement and resultant neurological deficits, compared with that of the open procedure. Inaccuracy tended to be distributed between medial and lateral perforations of the L5 pedicle
Mokhtari, Hadi; Niknami, Mahdi; Mokhtari Zonouzi, Hamid Reza; Sohrabi, Aydin; Ghasemi, Negin; Akbari Golzar, Amir
2016-01-01
Introduction: The aim of the present in vitro study was to compare the accuracy of cone-beam computed tomography (CBCT) in determining root canal morphology of mandibular first molars in comparison with staining and clearing technique. Methods and Materials: CBCT images were taken from 96 extracted human mandibular first molars and the teeth were then evaluated based on Vertucci’s classification to determine the root canal morphology. Afterwards, access cavities were prepared and India ink was injected into the canals with an insulin syringe. The teeth were demineralized with 5% nitric acid. Finally, the cleared teeth were evaluated under a magnifying glass at 5× magnification to determine the root canal morphology. Data were analyzed using the SPSS software. The Fisher’s exact test assessed the differences between the mesial and distal canals and the Cohen’s kappa test was used to assess the level of agreement between the methods. Statistical significance was defined at 0.05. Results: The Kappa coefficient for agreement between the two methods evaluating canal types was 0.346 (95% CI: 0.247-0.445), which is considered a fair level of agreement based on classification of Koch and Landis. The agreement between CBCT and Vertucci’s classification was 52.6% (95% CI: 45.54-59.66%), with a significantly higher agreement rate in the mesial canals (28.1%) compared to the distal canals (77.1%) (P<0.001). Conclusion: Under the limitations of this study, clearing technique was more accurate than CBCT in providing accurate picture of the root canal anatomy of mandibular first molars. PMID:27141216
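The agreement statistic used above, Cohen's kappa, corrects observed agreement for agreement expected by chance; a minimal implementation for two paired categorical ratings:

```python
def cohens_kappa(x, y):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    n = len(x)
    cats = set(x) | set(y)
    po = sum(a == b for a, b in zip(x, y)) / n
    pe = sum((x.count(c) / n) * (y.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)
```

A kappa of 0.346, as reported for the two canal-typing methods, falls in the "fair agreement" band of the commonly used benchmark scale.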
Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; Wagter, Carlos de; Gersem, Werner de; Neve, Wilfried de; Thierens, Hubert
2006-09-15
The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences
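The dose-volume quantities listed above can be computed directly from a structure's voxel doses; a small sketch, assuming equal voxel volumes and one common convention for Dx (the minimum dose received by the hottest x% of the volume, i.e. the (100 - x)th dose percentile; the paper may use a different convention):

```python
import numpy as np

def dose_volume_metrics(dose, v_thresholds=(20.0, 30.0)):
    # dose: 1-D array of voxel doses (Gy) within one structure
    d = np.asarray(dose, dtype=float)
    out = {"Dmin": d.min(),
           "D50": float(np.percentile(d, 50)),
           "Dmax": d.max(),
           "Dmean": d.mean(),
           # Dx convention assumed here: (100 - x)th dose percentile
           "D33": float(np.percentile(d, 100 - 33))}
    for x in v_thresholds:
        # Vx: fraction of the structure's volume receiving >= x Gy
        out["V%g" % x] = float((d >= x).mean())
    return out
```

In practice these metrics are read off cumulative dose-volume histograms, but the percentile formulation above is equivalent for equal-volume voxels.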
A New Stochastic Computing Methodology for Efficient Neural Network Implementation.
Canals, Vincent; Morro, Antoni; Oliver, Antoni; Alomar, Miquel L; Rosselló, Josep L
2016-03-01
This paper presents a new methodology for the hardware implementation of neural networks (NNs) based on probabilistic laws. The proposed encoding scheme circumvents the limitations of classical stochastic computing (based on unipolar or bipolar encoding), extending the representation range to any real number by using the ratio of two bipolar-encoded pulsed signals. Furthermore, the novel approach provides practically total noise immunity thanks to its specific codification. We introduce different designs for building the fundamental blocks needed to implement NNs. The validity of the present approach is demonstrated through a regression and a pattern recognition task. The low hardware cost of the methodology, along with its capacity to implement complex mathematical functions (such as the hyperbolic tangent), allows its use for building highly reliable systems and parallel computing.
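The extended-range idea can be illustrated in a few lines (our own software toy, not the authors' hardware design; the value 2.5, the component values 0.5 and 0.2, and the stream length are arbitrary choices standing in for pulsed signals):

```python
import numpy as np

def bipolar_encode(x, n_bits, rng):
    """Encode x in [-1, 1] as a Bernoulli bitstream with P(1) = (x + 1) / 2."""
    return rng.random(n_bits) < (x + 1) / 2

def bipolar_decode(bits):
    """Recover the bipolar value 2 * P(1) - 1 from a bitstream."""
    return 2 * bits.mean() - 1

rng = np.random.default_rng(0)
n = 200_000
# Represent 2.5 -- outside the classical bipolar range [-1, 1] -- as the
# ratio 0.5 / 0.2 of two bipolar-encoded pulsed signals.
num = bipolar_encode(0.5, n, rng)
den = bipolar_encode(0.2, n, rng)
val = bipolar_decode(num) / bipolar_decode(den)
```

Because only the ratio of the two decoded values matters, common multiplicative disturbances on both streams tend to cancel, which hints at the noise-immunity claim.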
Computationally efficient statistical differential equation modeling using homogenization
Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.
2013-01-01
Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
[Efficiency of computed tomography in diagnosis of silicotuberculosis].
Naumenko, E S; Gol'del'man, A G; Tikhotskaia, L I; Zhovtiak, E P; Iarina, A L; Ershov, V I; Larina, E N
1998-01-01
Routine X-ray examination and computed tomography (CT) were compared in a group of patients employed in the fireproof materials industry. CT yields valuable additional data in early silicotuberculosis, making it possible to follow the extent of a silicotuberculous process more completely, to diagnose nodular and focal shadows more reliably, and to identify small decay cavities in the foci and infiltrates. CT is the method of choice for following up patients with silicotuberculosis.
Chunking as the result of an efficiency computation trade-off
Ramkumar, Pavan; Acuna, Daniel E.; Berniker, Max; Grafton, Scott T.; Turner, Robert S.; Kording, Konrad P.
2016-01-01
How to move efficiently is an optimal control problem, whose computational complexity grows exponentially with the horizon of the planned trajectory. Breaking a compound movement into a series of chunks, each planned over a shorter horizon, can thus reduce the overall computational complexity and associated costs while limiting the achievable efficiency. This trade-off suggests a cost-effective learning strategy: to learn new movements we should start with many short chunks (to limit the cost of computation). As practice reduces the impediments to more complex computation, the chunking structure should evolve to allow progressively more efficient movements (to maximize efficiency). Here we show that monkeys learning a reaching sequence over an extended period of time adopt this strategy by performing movements that can be described as locally optimal trajectories. Chunking can thus be understood as a cost-effective strategy for producing and learning efficient movements. PMID:27397420
NASA Astrophysics Data System (ADS)
Castruccio, S.; McInerney, D.; Stein, M. L.; Moyer, E. J.
2011-12-01
The computational demands of modern general circulation models (GCMs) limit their use in a number of areas. Model comparisons, understanding of the physics of climate behavior, and policy analysis would all benefit greatly from a means of reproducing the behavior of a full GCM with lower computational requirements. We show here that library-based statistical modeling can be used to accurately emulate GCM output for arbitrary trajectories of concentration of CO2. To demonstrate this, we constructed a library of runs made with the NCAR Community Climate System Model version 3 (CCSM3) at T31 resolution, and use a subset of the library and a simple statistical model that accounts for temporal autocorrelation and semilinear dependence on the past forcing history to emulate independent scenarios. The library to date consists of 18 forcing scenarios, both realistic (linear and logistic increases) and unrealistic (instantaneous increases or decreases), with most scenarios run with 5 different initial conditions and the longest run over 3000 years in duration. We show that given a trajectory of CO2 concentrations, we can reproduce annual temperature and precipitation in several-hundred-year climate projections at scales from global to subcontinental to an accuracy within the intrinsic short-term variability of model output. Both the abilities and limitations of the fit shed light on physical climate processes. The statistical fit captures the characteristic responses of transient climates that depend on the rate of change of radiative forcing, including suppression of precipitation in conditions of rapid increases in radiative forcing. On the other hand, the same fit cannot be used to emulate conditions of rising and falling radiative forcing, showing basic differences in the physics of transient responses. Statistical fits are accurate on both global and subcontinental (32 regions worldwide) scales, with the regional fits demonstrating clear superiority over a linear
Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers
NASA Technical Reports Server (NTRS)
Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades, and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP-2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.
A computationally efficient QRS detection algorithm for wearable ECG sensors.
Wang, Y; Deepu, C J; Lian, Y
2011-01-01
In this paper we present a novel Dual-Slope QRS detection algorithm with low computational complexity, suitable for wearable ECG devices. The Dual-Slope algorithm calculates the slopes on both sides of a peak in the ECG signal; based on these slopes, three criteria are developed to simultaneously check the (1) steepness, (2) shape, and (3) height of the signal in order to locate the QRS complex. The algorithm, evaluated against the MIT-BIH Arrhythmia Database, achieves a very high detection rate of 99.45%, a sensitivity of 99.82%, and a positive prediction of 99.63%. PMID:22255619
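The two-sided-slope idea can be sketched roughly as follows (a simplified illustration with made-up window and threshold values and a synthetic spike, not the published algorithm or its MIT-BIH-calibrated parameters):

```python
import numpy as np

def dual_slope_peaks(sig, win=5, steep=0.5):
    """Flag samples whose left-side slope rises and right-side slope falls
    steeply -- an illustrative steepness + shape check, not the tuned rules."""
    peaks = []
    for i in range(win, len(sig) - win):
        s_left = (sig[i] - sig[i - win]) / win    # average slope on the left
        s_right = (sig[i + win] - sig[i]) / win   # average slope on the right
        if s_left > steep and -s_right > steep:   # steep up, then steep down
            peaks.append(i)
    return peaks

# Synthetic trace: flat baseline with one sharp spike peaking at sample 50.
x = np.zeros(100)
x[45:50] = np.linspace(0.0, 4.0, 5)
x[50] = 5.0
x[51:56] = np.linspace(4.0, 0.0, 5)
peaks = dual_slope_peaks(x)
```

A practical detector would add the height criterion and refractory logic; here the spike is flagged only in the immediate neighborhood of sample 50.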
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
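The surrogate-plus-MCMC idea can be sketched generically (a plain random-walk Metropolis sampler with a made-up one-parameter linear surrogate standing in for the finite-element model; the paper itself uses sparse-grid interpolation, a weighted likelihood, and the DRAM sampler):

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_strain(a):
    """Cheap stand-in for the costly FE model: predicted strain vs. damage size a."""
    return 2.0 * a + 1.0

# Synthetic sensor data: true damage size 3.0 plus Gaussian measurement error.
true_a, sigma = 3.0, 0.1
data = surrogate_strain(true_a) + sigma * rng.normal(size=50)

def log_post(a):
    """Log posterior: flat prior on (0, 10) plus Gaussian likelihood."""
    if not 0.0 < a < 10.0:
        return -np.inf
    r = data - surrogate_strain(a)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis sampling of the damage-size posterior.
a, lp = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    prop = a + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        a, lp = prop, lp_prop
    samples.append(a)

est = float(np.mean(samples[1000:]))          # posterior-mean damage estimate
```

Each posterior evaluation here costs one surrogate call; the same loop with a full FE solve per sample is what makes the standard approach expensive.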
An efficient computational tool for ramjet combustor research
Vanka, S.P.; Krazinski, J.L.; Nejad, A.S.
1988-01-01
A multigrid based calculation procedure is presented for the efficient solution of the time-averaged equations of a turbulent elliptic reacting flow. The equations are solved on a non-orthogonal curvilinear coordinate system. The physical models currently incorporated are a two equation k-epsilon turbulence model, a four-step chemical kinetics mechanism, and a Lagrangian particle tracking procedure applicable for dilute sprays. Demonstration calculations are presented to illustrate the performance of the calculation procedure for a ramjet dump combustor configuration. 21 refs., 9 figs., 2 tabs.
Component-based approach to robot vision for computational efficiency
NASA Astrophysics Data System (ADS)
Lee, Junhee; Kim, Dongsun; Park, Yeonchool; Park, Sooyong; Lee, Sukhan
2007-12-01
The purpose of this paper is to show the merit and feasibility of the component-based approach to robot system integration. Many methodologies, such as the component-based and middleware-based approaches, have been suggested for integrating various complex functions on robot systems efficiently. However, these methodologies are not yet broadly used in robot function development, because such 'top-down' methodologies were modeled and researched in the software engineering field, which differs from robot function research, and so have not earned the trust of function developers. Developers' main concern with these methodologies is the performance loss that originates from framework overhead. This paper counters that concern by showing a time performance increase in an experiment using the 'Self Healing, Adaptive and Growing softwarE (SHAGE)' framework, a component-based framework. Visual object recognition is chosen as a real robot function for the experiment.
NASA Astrophysics Data System (ADS)
Li, Haiyan; Huang, Yunbao; Jiang, Shaoen; Jing, Longfei; Ding, Yongkun
2015-08-01
Radiation flux computation on the target is very important for laser-driven Inertial Confinement Fusion, and view-factor based equation models (MacFarlane, 2003; Srivastava et al., 2000) are often used to compute this radiation flux on the capsule or samples inside the hohlraum. However, the equation models do not lead to sparse matrices and may involve an intensive solution process as the discrete mesh elements become smaller and the number of equations increases. An efficient approach for the computation of radiation flux is proposed in this paper, in which (1) symmetric and positive definite properties are achieved by transformation, and (2) an efficient Cholesky factorization algorithm is applied to significantly accelerate the solution of such equation models. Finally, two targets on a laser facility built in China are considered to validate the computing efficiency of the present approach. The results show that the radiation flux computation can be accelerated by a factor of 2.
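The payoff of a symmetric positive definite transformation can be sketched generically (a random dense SPD system, not the paper's view-factor matrices; generic dense solves stand in for dedicated triangular routines):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
M = rng.random((n, n))
A = M @ M.T + n * np.eye(n)   # symmetric positive definite by construction
b = rng.random(n)

# Factor once: A = L @ L.T with L lower triangular.
L = np.linalg.cholesky(A)
# Each right-hand side then needs only a forward and a back substitution,
# roughly half the work of a fresh LU factorization per solve.
y = np.linalg.solve(L, b)     # forward substitution
x = np.linalg.solve(L.T, y)   # back substitution
```

Reusing `L` across many right-hand sides (e.g. many time steps or source configurations) is where the acceleration accumulates.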
Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori
2015-05-01
The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of 〈UUV〉/2 (〈UUV〉 is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since 〈UUV〉 can readily be computed through an MD simulation of the system composed of the solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute with the corresponding coefficients determined by the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with a substantial reduction of the computational load. PMID:25956125
Enabling Efficient Climate Science Workflows in High Performance Computing Environments
NASA Astrophysics Data System (ADS)
Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.
2015-12-01
A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
Computational efficiencies for calculating rare earth f^n energies
NASA Astrophysics Data System (ADS)
Beck, Donald R.
2009-05-01
Recently [D. R. Beck and E. J. Domeier, Can. J. Phys., Walter Johnson issue, Jan. 2009], we have used new computational strategies to obtain wavefunctions and energies for Gd IV 4f^7 and 4f^65d levels. Here we extend one of these techniques to allow efficient inclusion of 4f^2 pair correlation effects, using radial pair energies obtained from much simpler calculations [e.g., K. Jankowski et al., Int. J. Quant. Chem. XXVII, 665 (1985)] and angular factors which can be simply computed [D. R. Beck and C. A. Nicolaides, in Excited States in Quantum Chemistry, C. A. Nicolaides and D. R. Beck (editors), D. Reidel (1978), p. 105ff]. This is a revitalization of an older idea [I. Oksuz and O. Sinanoglu, Phys. Rev. 181, 54 (1969)]. We display relationships between angular factors involving the exchange of holes and electrons (e.g., f^6 vs f^8, f^13d vs fd^9). We apply the results to Tb IV and Gd IV, whose spectra are largely unknown, but which may play a role in MRI medicine as endohedral metallofullerenes (e.g., Gd3N@C80 [M. C. Qian and S. N. Khanna, J. Appl. Phys. 101, 09E105 (2007)]). Pr III results are in good agreement (910 cm-1) with experiment. Pu I 5f^2 radial pair energies are also presented.
Efficient computation of coherent synchrotron radiation in a rectangular chamber
NASA Astrophysics Data System (ADS)
Warnock, Robert L.; Bizzozero, David A.
2016-09-01
We study coherent synchrotron radiation (CSR) in a perfectly conducting vacuum chamber of rectangular cross section, in a formalism allowing an arbitrary sequence of bends and straight sections. We apply the paraxial method in the frequency domain, with a Fourier development in the vertical coordinate but with no other mode expansions. A line charge source is handled numerically by a new method that rids the equations of singularities through a change of dependent variable. The resulting algorithm is fast compared to earlier methods, works for short bunches with complicated structure, and yields all six field components at any space-time point. As an example we compute the tangential magnetic field at the walls. From that one can make a perturbative treatment of the Poynting flux to estimate the energy deposited in resistive walls. The calculation was motivated by a design issue for LCLS-II, the question of how much wall heating from CSR occurs in the last bend of a bunch compressor and the following straight section. Working with a realistic longitudinal bunch form of r.m.s. length 10.4 μ m and a charge of 100 pC we conclude that the radiated power is quite small (28 W at a 1 MHz repetition rate), and all radiated energy is absorbed in the walls within 7 m along the straight section.
An efficient network for interconnecting remote monitoring instruments and computers
Halbig, J.K.; Gainer, K.E.; Klosterbuer, S.F.
1994-08-01
Remote monitoring instrumentation must be connected with computers and other instruments. The cost and intrusiveness of installing cables in new and existing plants presents problems for the facility and the International Atomic Energy Agency (IAEA). The authors have tested a network that could accomplish this interconnection using mass-produced commercial components developed for use in industrial applications. Unlike components in the hardware of most networks, the components--manufactured and distributed in North America, Europe, and Asia--lend themselves to small and low-powered applications. The heart of the network is a chip with three microprocessors and proprietary network software contained in Read Only Memory. In addition to all nonuser levels of protocol, the software also contains message authentication capabilities. This chip can be interfaced to a variety of transmission media, for example, RS-485 lines, fiber optic cables, rf waves, and standard ac power lines. The use of power lines as the transmission medium in a facility could significantly reduce cabling costs.
Efficient computer algebra algorithms for polynomial matrices in control design
NASA Technical Reports Server (NTRS)
Baras, J. S.; Macenany, D. C.; Munach, R.
1989-01-01
The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. For matrices with entries from a field, Gaussian elimination plays a fundamental role in understanding the triangularization process. Polynomial matrices, however, have entries from a ring, for which Gaussian elimination is not defined; triangularization is instead accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
A determination of antioxidant efficiencies using ESR and computational methods
NASA Astrophysics Data System (ADS)
Rhodes, Christopher J.; Tran, Thuy T.; Morris, Harry
2004-05-01
Using Transition-State Theory, experimental rate constants, determined over a range of temperatures for reactions of Vitamin E type antioxidants, are analysed in terms of their enthalpies and entropies of activation. It is further shown that computational methods may be employed to calculate enthalpies and entropies, and hence Gibbs free energies, for the overall reactions. Within the linear free energy relationship (LFER) assumption, that the Gibbs free energy of activation is proportional to the overall Gibbs free energy change for the reaction, it is possible to rationalise, and even to predict, the relative contributions of enthalpy and entropy for reactions of interest involving potential antioxidants. A method is devised, involving a competitive reaction of •CH3 radicals with both the spin-trap PBN and the antioxidant, which enables the relatively rapid determination of a relative ordering of activities for a series of potential antioxidant compounds, and also of their rate constants for scavenging •CH3 radicals (relative to the rate constant for addition of •CH3 to PBN).
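The extraction of activation enthalpies and entropies from temperature-dependent rate constants can be illustrated with a linearised Eyring fit (synthetic data from assumed activation parameters, not the paper's measurements):

```python
import numpy as np

R, kB, h = 8.314, 1.380649e-23, 6.62607015e-34   # SI units

def eyring_k(T, dH, dS):
    """Transition-state-theory rate constant, transmission coefficient = 1."""
    return (kB * T / h) * np.exp(-dH / (R * T) + dS / R)

# Synthetic rate data generated from assumed activation parameters.
dH_true, dS_true = 40e3, -60.0                   # J/mol and J/(mol K)
T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])
k = eyring_k(T, dH_true, dS_true)

# Linearised Eyring plot: ln(k/T) = -(dH/R)(1/T) + ln(kB/h) + dS/R
slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)
dH_fit = -slope * R                              # recovered enthalpy of activation
dS_fit = (intercept - np.log(kB / h)) * R        # recovered entropy of activation
```

The slope of ln(k/T) against 1/T yields the activation enthalpy and the intercept the activation entropy, which is exactly the decomposition the abstract describes.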
NASA Astrophysics Data System (ADS)
Zube, Nicholas Gerard; Zhang, Xi; Natraj, Vijay
2016-10-01
General circulation models often incorporate simple approximations of heating between vertically inhomogeneous layers rather than more accurate but computationally expensive radiative transfer (RT) methods. With the goal of developing a GCM package that can model both solar system bodies and exoplanets, it is vital to examine up-to-date RT models to optimize speed and accuracy for heat transfer calculations. Here, we examine a variety of interchangeable radiative transfer models in conjunction with MITGCM (Hill and Marshall, 1995). First, for atmospheric opacity calculations, we test gray approximation, line-by-line, and correlated-k methods. In combination with these, we also test RT routines using 2-stream DISORT (discrete ordinates RT), N-stream DISORT (Stamnes et al., 1988), and optimized 2-stream (Spurr and Natraj, 2011). Initial tests are run using Jupiter as an example case. The results can be compared in nine possible configurations for running a complete RT routine within a GCM. Each individual combination of opacity and RT methods is contrasted with the "ground truth" calculation provided by the line-by-line opacity and N-stream DISORT, in terms of computation speed and accuracy of the approximation methods. We also examine the effects on accuracy when performing these calculations at different time step frequencies within MITGCM. Ultimately, we will catalog and present the ideal RT routines that can replace commonly used approximations within a GCM for a significant increase in calculation accuracy, and speed comparable to the dynamical time steps of MITGCM. Future work will involve examining whether calculations in the spatial domain can also be reduced by smearing grid points into larger areas, and what effects this will have on overall accuracy.
ERIC Educational Resources Information Center
Robinson, Daniel H.; Schraw, Gregory
1994-01-01
Three experiments involving 138 college students investigated why one type of graphic organizer (a matrix) may communicate interconcept relations better than an outline or text. Results suggest that a matrix is more computationally efficient than either outline or text, allowing the easier computation of relationships. (SLD)
An Efficient Objective Analysis System for Parallel Computers
NASA Technical Reports Server (NTRS)
Stobie, J.
1999-01-01
A new atmospheric objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 1 X 1 lat-lon grid with 18 levels of heights and winds and 10 levels of moisture) using 120,000 observations in 17 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. Static tests with a 2 X 2.5 resolution version of this system showed its analysis increments are comparable to the latest NASA operational system including maintenance of mass-wind balance. Results from several months of cycling tests in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) as the current operational system.
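The optimal-interpolation update at the heart of such a system can be sketched on a toy one-dimensional grid (illustrative covariances and observations of our own choosing; the operational system's error statistics are far richer):

```python
import numpy as np

# One optimal-interpolation update:
#   x_a = x_b + K (y - H x_b),  with gain  K = B H^T (H B H^T + R)^{-1}
n, m = 5, 2
x_b = np.zeros(n)                                   # background (first guess)
idx = np.arange(n)
B = np.exp(-np.abs(np.subtract.outer(idx, idx)))    # correlated background errors
H = np.zeros((m, n)); H[0, 1] = 1.0; H[1, 3] = 1.0  # observe grid points 1 and 3
R = 0.25 * np.eye(m)                                # observation error covariance
y = np.array([1.0, -1.0])                           # observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)        # gain matrix
x_a = x_b + K @ (y - H @ x_b)                       # analysis increments added
```

The background-error correlations in `B` spread each observation's influence to neighbouring grid points, which is how OI blends instrument and first-guess error statistics.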
An Efficient Objective Analysis System for Parallel Computers
NASA Technical Reports Server (NTRS)
Stobie, James G.
1999-01-01
A new objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 2 x 2.5 lat-lon grid with 20 levels of heights and winds and 10 levels of moisture) using 120,000 observations in less than 3 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. It also includes a new quality control (buddy check) system. Static tests with the system showed its analysis increments are comparable to the latest NASA operational system including maintenance of mass-wind balance. Results from a 2-month cycling test in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) throughout the entire two months.
Measured energy savings of an energy-efficient office computer system
Lapujade, P.G.
1995-12-01
Recent surveys have shown that the use of personal computer systems in commercial office buildings is expanding rapidly. In warmer climates, office equipment energy use also has important implications for building cooling loads as well as those directly associated with computing tasks. The U.S. Environmental Protection Agency (EPA) has developed the Energy Star (ES) rating system, intended to endorse more efficient machines. To research the comparative performance of conventional and low-energy computer systems, a test was conducted with the substitution of an ES computer system for the main clerical computer used at a research institution. Separate data on power demand (watts), power factor for the computer/monitor, and power demand for the dedicated laser printer were recorded every 15 minutes to a multichannel datalogger. The current system, a 486DX, 66 MHz computer (8 MB of RAM, and 340 MB hard disk) with a laser printer was monitored for an 86-day period. An ES computer and an ES printer with virtually identical capabilities were then substituted and the changes to power demand and power factor were recorded for an additional 86 days. Computer and printer usage patterns remained essentially constant over the entire monitoring period. The computer user was also interviewed to learn of any perceived shortcomings of the more energy-efficient system. Based on the monitoring, the ES computer system is calculated to produce energy savings of 25.8% (121 kWh) over one year.
Introduction: From Efficient Quantum Computation to Nonextensive Statistical Mechanics
NASA Astrophysics Data System (ADS)
Prosen, Tomaz
These few pages will attempt to make a short comprehensive overview of several contributions to this volume which concern rather diverse topics. I shall review the following works, essentially reversing the sequence indicated in my title: • First, by C. Tsallis on the relation of nonextensive statistics to the stability of quantum motion on the edge of quantum chaos. • Second, the contribution by P. Jizba on information theoretic foundations of generalized (nonextensive) statistics. • Third, the contribution by J. Rafelski on a possible generalization of Boltzmann kinetics, again, formulated in terms of nonextensive statistics. • Fourth, the contribution by D.L. Stein on the state-of-the-art open problems in spin glasses and on the notion of complexity there. • Fifth, the contribution by F.T. Arecchi on the quantum-like uncertainty relations and decoherence appearing in the description of perceptual tasks of the brain. • Sixth, the contribution by G. Casati on the measurement and information extraction in the simulation of complex dynamics by a quantum computer. Immediately, the following question arises: What do the topics of these talks have in common? Apart from the variety of questions they address, it is quite obvious that the common denominator of these contributions is an approach to describe and control "the complexity" by simple means. One of the very useful tools to handle such problems, also often used or at least referred to in several of the works presented here, is the concept of Tsallis entropy and nonextensive statistics.
NASA Astrophysics Data System (ADS)
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step function boundary value problem. This approach enables modelling concentration distribution in the granular particles for arbitrary time dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and Padé approximation at the same accuracy of the results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current densities variations typical for mobile/automotive applications. The proposed approach can thus be considered as one of the key innovative methods enabling real-time capability of the multi particle electrochemical battery models featuring spatial and temporal resolved particle concentration profiles.
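The core idea of the DTC method, superposing the analytical step-response of the granule for each increment of the surface flux, can be sketched as a discrete temporal convolution. The kernel `step_response` below is a placeholder for the analytical step-function solution, which the abstract does not reproduce:

```python
import numpy as np

def dtc_response(flux, step_response, dt):
    """Discrete Temporal Convolution sketch: the response to an
    arbitrary flux history is built by superposing the step-response
    kernel for each increment of the flux (linear-system assumption)."""
    flux = np.asarray(flux, dtype=float)
    n = len(flux)
    t = np.arange(n) * dt
    dflux = np.diff(np.concatenate(([0.0], flux)))  # step increments
    c = np.zeros(n)
    for k, dj in enumerate(dflux):
        if dj != 0.0:
            c[k:] += dj * step_response(t[k:] - t[k])
    return c
```

Because the flux increments need not be known a priori, each new increment simply adds one more shifted copy of the kernel, which is what allows arbitrary time-dependent exchange fluxes.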
An efficient formulation of robot arm dynamics for control and computer simulation
NASA Astrophysics Data System (ADS)
Lee, C. S. G.; Nigam, R.
This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
1989-01-01
A computational routine has been created to generate velocity tapers for efficiency enhancement in coupled-cavity TWTs. Programmed into the NASA multidimensional large-signal coupled-cavity TWT computer code, the routine generates the gradually decreasing cavity periods required to maintain a prescribed relationship between the circuit phase velocity and the electron-bunch velocity. Computational results for several computer-generated tapers are compared to those for an existing coupled-cavity TWT with a three-step taper. Guidelines are developed for prescribing the bunch-phase profile so as to produce an efficiency-enhancing taper. The resulting taper provides a calculated RF efficiency 45 percent higher than the step taper at center frequency and at least 37 percent higher over the bandwidth.
Chiang, Patrick
2014-01-31
The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.
NASA Technical Reports Server (NTRS)
Wang, Xiao Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.
1999-01-01
The space-time conservation element and solution element (CE/SE) method is used to study the sound-shock interaction problem and to investigate the order of accuracy of numerical schemes. The linear model problem governed by the 1-D scalar convection equation, the sound-shock interaction problem governed by the 1-D Euler equations, and the 1-D shock-tube problem, which involves moving shock waves and contact surfaces, are solved for this purpose. It is concluded that the accuracy of the CE/SE numerical scheme with designed 2nd-order accuracy becomes 1st order when a moving shock wave exists. However, the absolute error in the CE/SE solution downstream of the shock wave is on the same order as that obtained using a fourth-order accurate essentially nonoscillatory (ENO) scheme. No special techniques are used for either high-frequency low-amplitude waves or shock waves.
Chiampi, M; Zilberti, L
2011-10-01
A computational procedure, based on the boundary element method, has been developed in order to evaluate the electric field induced in a body that moves in the static field around an MRI system. A general approach enables us to investigate rigid translational and rotational movements with any change of motion velocity. The accuracy of the computations is validated by comparison with analytical solutions for simple shaped geometries. Some examples of application of the proposed procedure in the case of motion around an MRI scanner are finally presented.
Ishay, Yakir; Leviatan, Yehuda; Bartal, Guy
2014-05-15
We present a semi-analytical method for computing the electromagnetic field in and around 3D nanoparticles (NP) of complex shape and demonstrate its power via concrete examples of plasmonic NPs that have nonsymmetrical shapes and surface areas with very small radii of curvature. In particular, we show the three axial resonances of a 3D cashew-nut and the broadband response of peanut-shell NPs. The method employs the source-model technique along with a newly developed intricate source distributing algorithm based on the surface curvature. The method is simple and can outperform finite-difference time domain and finite-element-based software tools in both its efficiency and accuracy. PMID:24978226
NASA Astrophysics Data System (ADS)
Jia, Jing; Xu, Gongming; Pei, Xi; Cao, Ruifen; Hu, Liqin; Wu, Yican
2015-03-01
An infrared-based positioning and tracking (IPT) system was introduced, and its accuracy and efficiency for patient setup and monitoring were tested for daily radiotherapy treatment. The IPT system consists of a pair of floor-mounted infrared stereoscopic cameras, passive infrared markers and tools used for acquiring localization information, and custom control software which performs the positioning and tracking functions. The evaluation of IPT system characteristics was conducted based on the AAPM Task Group 147 report. Experiments on spatial drift and reproducibility as well as static and dynamic localization accuracy were carried out to test the efficiency of the IPT system. Measurements of known translational (up to 55.0 mm) set-up errors in three dimensions were performed on a calibration phantom. The accuracy of positioning was evaluated on an anthropomorphic phantom with five markers attached to the surface; the precision of the tracking ability was investigated with a sinusoidal motion platform. For the monitoring of respiration, three volunteers contributed to breathing tests in real time. The spatial drift of the IPT system stabilized within 0.65 mm over 60 min. The reproducibility of position variations was between 0.01 and 0.04 mm. The standard deviation of static marker localization was 0.26 mm. The repositioning accuracy was 0.19 mm, 0.29 mm, and 0.53 mm in the left/right (L/R), superior/inferior (S/I), and anterior/posterior (A/P) directions, respectively. The measured dynamic accuracy was 0.57 mm, and the discrepancies measured for respiratory motion tracking were better than 1 mm. The overall positioning accuracy of the IPT system was within 2 mm. In conclusion, the IPT system is an accurate and effective tool for assisting patient positioning in the treatment room. The characteristics of the IPT system can successfully meet the needs for real time external marker tracking and patient positioning as well as respiration
Andrews, Keith G; Spivey, Alan C
2013-11-15
The accuracy of both Gauge-including atomic orbital (GIAO) and continuous set of gauge transformations (CSGT) ¹³C NMR spectra prediction by Density Functional Theory (DFT) at the B3LYP/6-31G** level is shown to be usefully enhanced by employing a 'fragment referencing' method for predicting chemical shifts without recourse to empirical scaling. Fragment referencing refers to a process of reducing the error in calculating a particular NMR shift by consulting a similar molecule for which the error in the calculation is easily deduced. The absolute accuracy of the chemical shifts predicted when employing fragment referencing relative to conventional techniques (e.g., using TMS or MeOH/benzene dual referencing) is demonstrated to be improved significantly for a range of substrates, which illustrates the superiority of the technique particularly for systems with similar chemical shifts arising from different chemical environments. The technique is particularly suited to molecules of relatively low molecular weight containing 'non-standard' magnetic environments, e.g., α to halogen atoms, which are poorly predicted by other methods. The simplicity and speed of the technique mean that it can be employed to resolve routine structural assignment problems that require a degree of accuracy not provided by standard incremental or hierarchically ordered spherical description of environment (HOSE) algorithms. The approach is also demonstrated to be applicable when employing the MP2 method at 6-31G**, cc-pVDZ, aug-cc-pVDZ, and cc-pVTZ levels, although none of these offer advantage in terms of accuracy of prediction over the B3LYP/6-31G** DFT method.
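The arithmetic behind fragment referencing is simple: the systematic error deduced for a closely related reference fragment is subtracted from the raw calculated shift of the molecule of interest. A sketch with illustrative numbers (not values from the paper):

```python
def fragment_referenced_shift(calc_shift, calc_fragment, exp_fragment):
    """Correct a calculated NMR chemical shift (ppm) using a reference
    fragment whose calculated and experimental shifts are both known.
    All inputs here are hypothetical illustrative values."""
    systematic_error = calc_fragment - exp_fragment
    return calc_shift - systematic_error
```

The better the fragment mimics the magnetic environment of the target carbon, the more completely the systematic error cancels.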
NASA Astrophysics Data System (ADS)
Chen, Xin; Varley, Martin R.; Shark, Lik-Kwan; Shentall, Glyn S.; Kirby, Mike C.
2008-02-01
The paper presents a computationally efficient 3D-2D image registration algorithm for automatic pre-treatment validation in radiotherapy. The novel aspects of the algorithm include (a) a hybrid cost function based on partial digitally reconstructed radiographs (DRRs) generated along projected anatomical contours and a level set term for similarity measurement; and (b) a fast search method based on parabola fitting and sensitivity-based search order. Using CT and orthogonal x-ray images from a skull and a pelvis phantom, the proposed algorithm is compared with the conventional ray-casting full DRR based registration method. Not only is the algorithm shown to be computationally more efficient with registration time being reduced by a factor of 8, but also the algorithm is shown to offer 50% higher capture range allowing the initial patient displacement up to 15 mm (measured by mean target registration error). For the simulated data, high registration accuracy with average errors of 0.53 mm ± 0.12 mm for translation and 0.61° ± 0.29° for rotation within the capture range has been achieved. For the tested phantom data, the algorithm has also shown to be robust without being affected by artificial markers in the image.
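The parabola-fitting search mentioned in (b) follows the standard successive-parabolic-interpolation step: fit a parabola through three sampled cost values and jump to its vertex. A generic sketch of that step (not the authors' exact implementation):

```python
def parabola_vertex(x0, f0, x1, f1, x2, f2):
    """Vertex of the parabola through (x0,f0), (x1,f1), (x2,f2);
    used as the next trial point in a 1-D cost-function search."""
    num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den
```

For a cost function that is locally quadratic near the optimum, a few such jumps (one per registration parameter, taken in sensitivity order) converge rapidly without evaluating full DRRs on a dense grid.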
Deng, Nanjie; Zhang, Bin W; Levy, Ronald M
2015-06-01
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
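The bookkeeping of the thermodynamic cycle amounts to summing three legs: locally decouple basin A from explicit solvent, move between basins in implicit solvent, then recouple basin B. A toy sketch (leg names and values are illustrative, not the paper's):

```python
def cycle_free_energy(dg_decouple_A, dg_implicit_AB, dg_recouple_B):
    """A -> B free energy in explicit solvent assembled from
    implicit/explicit thermodynamic-cycle legs (kcal/mol)."""
    return dg_decouple_A + dg_implicit_AB + dg_recouple_B
```

The decoupling/recoupling legs are fast because they stay within one basin (no barrier crossing), while the slow inter-basin step runs in implicit solvent.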
NASA Astrophysics Data System (ADS)
Fujita, R.; Hikida, W.; Tagoshi, H.
2009-04-01
We develop a numerical code to compute gravitational waves induced by a particle moving on eccentric inclined orbits around a Kerr black hole. For such systems, the black hole perturbation method is applicable. The gravitational waves can be evaluated by solving the Teukolsky equation with a point-like source term, which is computed from the stress-energy tensor of a test particle moving on generic bound geodesic orbits. In our previous papers, we computed the homogeneous solutions of the Teukolsky equation using a formalism developed by Mano, Suzuki and Takasugi and showed that we could compute gravitational waves efficiently and very accurately in the case of circular orbits on the equatorial plane. Here, we apply this method to eccentric inclined orbits. The geodesics around a Kerr black hole have three constants of motion: energy, angular momentum and the Carter constant. We compute the rates of change of the Carter constant as well as those of energy and angular momentum. This is the first time that the rate of change of the Carter constant has been evaluated accurately. We also treat the case of highly eccentric orbits with e = 0.9. To confirm the accuracy of our codes, several tests are performed. We find that the accuracy is only limited by the truncation of ℓ-, k- and n-modes, where ℓ is the index of the spin-weighted spheroidal harmonics, and n and k are the harmonics of the radial and polar motion, respectively. When we set the maximum of ℓ to 20, we obtain a relative accuracy of 10^-5 even in the highly eccentric case of e = 0.9. The accuracy is better for lower eccentricity. Our numerical code is expected to be useful for computing templates of the extreme mass ratio inspirals, which is one of the main targets of the Laser Interferometer Space Antenna (LISA).
Lee, Wan-Sun; Kim, Woong-Chul
2015-01-01
PURPOSE To assess the marginal and internal gaps of the copings fabricated by computer-aided milling and direct metal laser sintering (DMLS) systems in comparison to casting method. MATERIALS AND METHODS Ten metal copings were fabricated by casting, computer-aided milling, and DMLS. Seven mesiodistal and labiolingual positions were then measured, and each of these were divided into the categories; marginal gap (MG), cervical gap (CG), axial wall at internal gap (AG), and incisal edge at internal gap (IG). Evaluation was performed by a silicone replica technique. A digital microscope was used for measurement of silicone layer. Statistical analyses included one-way and repeated measure ANOVA to test the difference between the fabrication methods and categories of measured points (α=.05), respectively. RESULTS The mean gap differed significantly with fabrication methods (P<.001). Casting produced the narrowest gap in each of the four measured positions, whereas CG, AG, and IG proved narrower in computer-aided milling than in DMLS. Thus, with the exception of MG, all positions exhibited a significant difference between computer-aided milling and DMLS (P<.05). CONCLUSION Although the gap was found to vary with fabrication methods, the marginal and internal gaps of the copings fabricated by computer-aided milling and DMLS fell within the range of clinical acceptance (<120 µm). However, the statistically significant difference to conventional casting indicates that the gaps in computer-aided milling and DMLS fabricated restorations still need to be further reduced. PMID:25932310
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving the inverse modeling problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
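The payoff of factoring the linearized problem once and reusing it for every damping parameter can be illustrated with an SVD standing in for the paper's recycled Krylov subspace (a sketch, not the MADS implementation):

```python
import numpy as np

def lm_steps(J, r, lambdas):
    """Levenberg-Marquardt steps for residual r and Jacobian J, reusing
    one factorization for every damping value lam. Each step minimizes
    ||J p + r||^2 + lam * ||p||^2."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    Utr = U.T @ r  # computed once, recycled for all damping parameters
    steps = []
    for lam in lambdas:
        p = -Vt.T @ (s / (s ** 2 + lam) * Utr)
        steps.append(p)
    return steps
```

Each additional damping parameter costs only small diagonal arithmetic instead of a fresh dense solve, which is the same economy the Krylov-recycling scheme achieves at large scale.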
ERIC Educational Resources Information Center
Amiryousefi, Mohammad
2016-01-01
Previous task repetition studies have primarily focused on how task repetition characteristics affect the complexity, accuracy, and fluency in L2 oral production with little attention to L2 written production. The main purpose of the study reported in this paper was to examine the effects of task repetition versus procedural repetition on the…
Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models
Williams, Brian J.; Marcy, Peter W.
2012-06-07
We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
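For reference, the Bohman class is compactly supported: the correlation is exactly zero beyond the support radius, which is what induces sparsity in the covariance matrix. A sketch of the standard one-dimensional Bohman function with unit support (our normalization; the study's scaling may differ):

```python
import numpy as np

def bohman(r):
    """Bohman compactly supported correlation function:
    C(r) = (1 - r) cos(pi r) + sin(pi r) / pi for |r| <= 1, else 0."""
    r = np.abs(np.asarray(r, dtype=float))
    return np.where(r <= 1.0,
                    (1.0 - r) * np.cos(np.pi * r) + np.sin(np.pi * r) / np.pi,
                    0.0)
```

Pairs of input sites farther apart than the support radius contribute exact zeros, so sparse linear algebra can be used on the resulting correlation matrix.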
Efficient and Flexible Computation of Many-Electron Wave Function Overlaps
2016-01-01
A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented. PMID:26854874
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel
1989-01-01
This paper treats the accurate and efficient calculation of thermodynamic properties of arbitrary gas mixtures for equilibrium flow computations. New improvements in the Stupochenko-Jaffe model for the calculation of thermodynamic properties of diatomic molecules are presented. A unified formulation of equilibrium calculations for gas mixtures in terms of irreversible entropy is given. Using a highly accurate thermo-chemical data base, a new, efficient and vectorizable search algorithm is used to construct piecewise interpolation procedures which generate accurate thermodynamic variables and their derivatives required by modern computational algorithms. Results are presented for equilibrium air, and compared with those given by the Srinivasan program.
Development of efficient computer program for dynamic simulation of telerobotic manipulation
NASA Technical Reports Server (NTRS)
Chen, J.; Ou, Y. J.
1989-01-01
Research in robot control has generated interest in computationally efficient forms of dynamic equations for multi-body systems. For a simply connected open-loop linkage, dynamic equations arranged in recursive form were found to be particularly efficient. A general computer program capable of simulating an open-loop manipulator with an arbitrary number of links has been developed based on an efficient recursive form of Kane's dynamic equations. Also included in the program is some of the important dynamics of the joint drive system, i.e., the rotational effect of the motor rotors. Further efficiency is achieved by the use of a symbolic manipulation program to generate the FORTRAN simulation program tailored for a specific manipulator based on the parameter values given. The formulations and the validation of the program are described, and some results are shown.
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
Energy-Efficient Computational Chemistry: Comparison of x86 and ARM Systems.
Keipert, Kristopher; Mitra, Gaurav; Sunriyal, Vaibhav; Leang, Sarom S; Sosonkina, Masha; Rendell, Alistair P; Gordon, Mark S
2015-11-10
The computational efficiency and energy-to-solution of several applications using the GAMESS quantum chemistry suite of codes is evaluated for 32-bit and 64-bit ARM-based computers, and compared to an x86 machine. The x86 system completes all benchmark computations more quickly than either ARM system and is the best choice to minimize time to solution. The ARM64 and ARM32 computational performances are similar to each other for Hartree-Fock and density functional theory energy calculations. However, for memory-intensive second-order perturbation theory energy and gradient computations the lower ARM32 read/write memory bandwidth results in computation times as much as 86% longer than on the ARM64 system. The ARM32 system is more energy efficient than the x86 and ARM64 CPUs for all benchmarked methods, while the ARM64 CPU is more energy efficient than the x86 CPU for some core counts and molecular sizes.
Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.
2001-01-01
A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps. GISs produce digital maps without manual estimation of inundated areas between
NASA Astrophysics Data System (ADS)
Zhang, Dong; Zhang, Xiaolei; Yuan, Jianzheng; Ke, Rui; Yang, Yan; Hu, Ying
2016-01-01
The Laplace-Fourier domain full waveform inversion can simultaneously restore both the long and intermediate short-wavelength information of velocity models because of its unique characteristics of complex frequencies. This approach solves the problem of conventional frequency-domain waveform inversion in which the inversion result is excessively dependent on the initial model due to the lack of low frequency information in seismic data. Nevertheless, the Laplace-Fourier domain waveform inversion requires substantial computational resources and long computation time because the inversion must be implemented on different combinations of multiple damping constants and multiple frequencies, namely, the complex frequencies, which are much more numerous than the Fourier frequencies. However, if the entire target model is computed on every complex frequency for the Laplace-Fourier domain inversion (as in the conventional frequency domain inversion), excessively redundant computation will occur. In the Laplace-Fourier domain waveform inversion, the maximum depth penetrated by the seismic wave decreases greatly due to the application of exponential damping to the seismic record, especially with use of a larger damping constant. Thus, the depth of the area effectively inverted on a complex frequency tends to be much less than the model depth. In this paper, we propose a method for quantitative estimation of the effective inversion depth in the Laplace-Fourier domain inversion based on the principle of seismic wave propagation and mathematical analysis. According to the estimated effective inversion depth, we can invert and update only the model area above the effective depth for every complex frequency without loss of accuracy in the final inversion result. Thus, redundant computation is eliminated, and the efficiency of the Laplace-Fourier domain waveform inversion can be improved. The proposed method was tested in numerical experiments. The experimental results show that
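One way to see why a damping constant bounds the usable depth: the factor exp(-σt) buries arrivals later than some effective time below the usable dynamic range of the record, and two-way travel at velocity v converts that time into depth. The following back-of-envelope sketch is our reading of the idea, not the paper's quantitative estimation method:

```python
import math

def effective_inversion_depth(v, sigma, dynamic_range_db=60.0):
    """Rough effective-depth estimate (illustrative, not the paper's
    formula): exponential damping exp(-sigma * t) pushes arrivals
    later than t_eff below the usable dynamic range; two-way travel
    at velocity v (m/s) then bounds the effectively inverted depth."""
    t_eff = (dynamic_range_db / 20.0) * math.log(10.0) / sigma  # seconds
    return v * t_eff / 2.0  # meters
```

Larger damping constants shrink t_eff and hence the depth worth updating, which is exactly the redundancy the proposed method eliminates.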
Usui, Keisuke; Hara, Naoya; Isobe, Akira; Inoue, Tatsuya; Kurokawa, Chie; Sugimoto, Satoru; Sasai, Keisuke; Ogawa, Kouichi
2016-06-01
To realize high-precision radiotherapy, a localized radiation field for the moving target is very important, and visualization of the temporal location of the target can help to improve the accuracy of target localization. However, breathing conditions and the patient's own motion differ from the situation at treatment planning; therefore, the position of the tumor is affected by these changes. In this study, we implemented a method to reconstruct target motions obtained with 4D CBCT using projection data sorted according to the phase and displacement of an extracorporeal infrared monitor signal, and evaluated the proposed method with a moving phantom. In this method, motion cycles and positions of the marker were sorted to reconstruct the image, and the image quality affected by changes in the cycle, phase, and positions of the marker was evaluated. As a result, we realized visualization of the moving target using the projection data sorted according to the infrared monitor signal. This method was based on projection binning, in which the infrared monitor signal was a surrogate of the tumor motion. Thus, further major efforts are needed to ensure the accuracy of the infrared monitor signal.
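Projection binning by a surrogate signal can be sketched as: detect breathing peaks in the monitor trace, assign each projection a phase between consecutive peaks, then group projections by phase bin. This is a generic sketch; the authors additionally sort by displacement, which is omitted here:

```python
import numpy as np

def phase_bins(signal, n_bins):
    """Assign each sample of a surrogate breathing signal a phase bin
    in 0..n_bins-1 (or -1 outside any complete breathing cycle)."""
    # local maxima of the surrogate trace mark cycle boundaries
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]
    phase = np.full(len(signal), np.nan)
    for a, b in zip(peaks[:-1], peaks[1:]):
        idx = np.arange(a, b)
        phase[idx] = (idx - a) / (b - a)  # 0..1 within one cycle
    bins = np.floor(phase * n_bins)
    return np.where(np.isnan(phase), -1, bins).astype(int)
```

All projections sharing a bin are then reconstructed together, yielding one 3D volume per respiratory phase.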
Computationally efficient scalar nonparaxial modeling of optical wave propagation in the far-field.
Nguyen, Giang-Nam; Heggarty, Kevin; Gérard, Philippe; Serio, Bruno; Meyrueis, Patrick
2014-04-01
We present a scalar model to overcome the computation time and sampling interval limitations of the traditional Rayleigh-Sommerfeld (RS) formula and angular spectrum method in computing wide-angle diffraction in the far-field. Numerical and experimental results show that our proposed method based on an accurate nonparaxial diffraction step onto a hemisphere and a projection onto a plane accurately predicts the observed nonparaxial far-field diffraction pattern, while its calculation time is much lower than the more rigorous RS integral. The results enable a fast and efficient way to compute far-field nonparaxial diffraction when the conventional Fraunhofer pattern fails to predict correctly.
NASA Technical Reports Server (NTRS)
Janetzke, David C.; Murthy, Durbha V.
1991-01-01
Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effects of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.
Spin-neurons: A possible path to energy-efficient neuromorphic computers
Sharad, Mrigank; Fan, Deliang; Roy, Kaushik
2013-12-21
Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale for applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and more than three orders of magnitude reduction in energy-delay product. Spin-neurons may therefore be an attractive option for future neuromorphic computers.
NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)
Not Available
2014-09-01
NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.
Using Neural Net Technology To Enhance the Efficiency of a Computer Adaptive Testing Application.
ERIC Educational Resources Information Center
Van Nelson, C.; Henriksen, Larry W.
The potential for computer adaptive testing (CAT) has been well documented. In order to improve the efficiency of this process, it may be possible to utilize a neural network, or more specifically, a back propagation neural network. The paper asserts that in order to accomplish this end, it must be shown that grouping examinees by ability as…
Framework for computationally efficient optimal irrigation scheduling using ant colony optimization
Technology Transfer Automated Retrieval System (TEKTRAN)
A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...
The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories
NASA Technical Reports Server (NTRS)
Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.
1972-01-01
An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.
Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.
Qu, Hong; Yi, Zhang; Yang, Simon X
2013-06-01
Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used open shortest path first and intermediate system to intermediate system protocols. Each router needs to recompute a new SPT rooted at itself whenever the link state changes. Most commercial routers do this by deleting the current SPT and building a new one from scratch using static algorithms such as Dijkstra's algorithm. Such recomputation of an entire SPT is inefficient: it may consume a considerable amount of CPU time and introduce delay in the network. Some dynamic updating methods that reuse information in the existing SPT have been proposed in recent years. However, those dynamic algorithms still have many limitations. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for SPT computation. It is rigorously proved that the proposed model is capable of solving optimization problems such as the SPT. A static algorithm based on the M-PCNNs is proposed to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves efficiency. Simulation results demonstrate the effective and efficient performance of the proposed approach. PMID:23144039
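The from-scratch static recomputation that the dynamic M-PCNN algorithm is designed to avoid can be sketched with a standard heap-based Dijkstra SPT; the graph encoding here is a hypothetical adjacency-list form, not the routers' actual link-state database format.

```python
import heapq

def shortest_path_tree(graph, root):
    """Static Dijkstra SPT. `graph` maps node -> [(neighbor, weight), ...].
    Returns (dist, parent); `parent` encodes the tree rooted at `root`.
    This is the full recomputation the dynamic algorithms avoid."""
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, parent

# Toy three-node topology: the cheapest route a->c goes through b.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
dist, parent = shortest_path_tree(g, "a")
```

On a link-weight change, this whole computation reruns even if only a small subtree of the SPT is affected, which is the inefficiency the paper targets.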
Computationally efficient algorithm for Gaussian Process regression in case of structured samples
NASA Astrophysics Data System (ADS)
Belyaev, M.; Burnaev, E.; Kapushev, Y.
2016-04-01
Surrogate modeling is widely used in many engineering problems. Data sets often have Cartesian product structure (for instance, a factorial design of experiments with missing points). In such cases the data set can be very large, so one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. We also introduce a regularization procedure that takes into account anisotropy of the data set and avoids degeneracy of the regression model.
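The tensor structure can be illustrated with the basic identity the approach rests on: for a full Cartesian-product grid the Gram matrix factorizes as a Kronecker product, so a kernel matrix-vector product never requires forming the full matrix. The sketch below (RBF kernel, grid sizes, and all variable names are illustrative assumptions; the paper's algorithm additionally handles missing points and regularization) demonstrates the identity.

```python
import numpy as np

# For a full Cartesian-product grid the Gram matrix factorizes as a
# Kronecker product K = K1 (x) K2, so K @ v can be computed by
# reshaping v and multiplying by the small factor matrices:
# cost O(n1*n2*(n1+n2)) instead of O((n1*n2)^2).
def rbf(a, b, ell=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

x1 = np.linspace(0, 1, 30)   # factor-1 grid (30 points, hypothetical)
x2 = np.linspace(0, 1, 40)   # factor-2 grid (40 points, hypothetical)
K1, K2 = rbf(x1, x1), rbf(x2, x2)

v = np.random.default_rng(0).standard_normal(30 * 40)

# Kronecker matvec via reshape (row-major identity (A (x) B) vec(X) = A X B^T):
kv_fast = (K1 @ v.reshape(30, 40) @ K2.T).ravel()
kv_full = np.kron(K1, K2) @ v   # explicit check, feasible only when small
```

The same factorization carries over to solves and log-determinants through per-factor eigendecompositions, which is where the large complexity savings for GP regression come from.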
NASA Astrophysics Data System (ADS)
Yanai, Takeshi; Nakajima, Takahito; Ishikawa, Yasuyuki; Hirao, Kimihiko
2001-04-01
A highly efficient computational scheme for four-component relativistic ab initio molecular orbital (MO) calculations over generally contracted spherical harmonic Gaussian-type spinors (GTSs) is presented. Benchmark calculations for the ground states of the group IB hydrides, MH, and dimers, M2 (M=Cu, Ag, and Au), by the Dirac-Hartree-Fock (DHF) method were performed with a new four-component relativistic ab initio MO program package oriented toward contracted GTSs. The relativistic electron repulsion integrals (ERIs), the major bottleneck in routine DHF calculations, are calculated efficiently employing the fast ERI routine SPHERICA, exploiting the general contraction scheme, and the accompanying coordinate expansion method developed by Ishida. Illustrative calculations clearly show the efficiency of our computational scheme.
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long, which increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to the computation of large transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed, in which the long filters in the ANC system are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is much lower than that of the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations of both proposed partitioned block ANC algorithms show their accuracy compared to the time-domain FXLMS algorithm.
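The baseline whose per-sample cost motivates the partitioned-block variants is the conventional time-domain FXLMS loop. A toy single-channel sketch follows; the path impulse responses, filter lengths, and step size are hypothetical, and the secondary-path model is assumed to be known exactly (in practice it is estimated).

```python
import numpy as np

# Conventional time-domain filtered-x LMS (FXLMS). Each sample costs
# O(L) multiply-adds in the weight update alone, which is what the
# frequency-domain partitioned-block algorithms reduce.
rng = np.random.default_rng(1)
L = 32                                  # ANC (control) filter length
sec = np.array([1.0, 0.5])              # assumed secondary-path response
prim = np.array([0.0, 1.0, 0.6, 0.2])   # hypothetical primary path
x = rng.standard_normal(4000)           # reference noise signal
d = np.convolve(x, prim)[: x.size]      # disturbance at the error mic

w = np.zeros(L)             # adaptive filter weights
xbuf = np.zeros(L)          # reference-signal history
fxbuf = np.zeros(L)         # filtered-reference history
ybuf = np.zeros(sec.size)   # anti-noise history through the secondary path
mu = 0.005
err = np.zeros(x.size)

for n in range(x.size):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    y = w @ xbuf                        # anti-noise output sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[n] - sec @ ybuf               # residual at the error microphone
    fx = sec @ xbuf[: sec.size]         # reference filtered by secondary path
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w += mu * e * fxbuf                 # FXLMS weight update
    err[n] = e
```

The residual power should fall as the filter converges; at the high sampling rates the paper considers, L runs to thousands of taps and this loop becomes the bottleneck.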
A uniform algebraically-based approach to computational physics and efficient programming
NASA Astrophysics Data System (ADS)
Raynolds, James; Mullin, Lenore
2007-03-01
We present an approach to computational physics in which a common formalism is used both to express the physical problem and to describe the underlying details of how computation is realized on arbitrary multiprocessor/memory computer architectures. This formalism is the embodiment of a generalized algebra of multi-dimensional arrays (A Mathematics of Arrays), and an efficient computational implementation is obtained through the composition of array indices (the psi-calculus) of algorithms defined using matrices, tensors, and arrays in general. The power of this approach arises from the fact that multiple computational steps (e.g., a Fourier transform followed by a convolution) can be algebraically composed and reduced to a simplified expression (an Operational Normal Form) that, when directly translated into computer code, can be mathematically proven to be the most efficient implementation with the least number of temporary variables. This approach is illustrated in the context of a cache-optimized FFT that outperforms or is competitive with established library routines: ESSL, FFTW, IMSL, NAG.
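An illustrative example of the kind of algebraic identity such a reduction exploits (this is not the psi-calculus itself, just the flavor of it in numpy): a convolution written as a three-step FFT pipeline is provably equal to its direct definition, so a system reasoning at the array-algebra level may freely pick whichever composed form is cheapest on the target machine.

```python
import numpy as np

# Two algebraically equal expressions for the same linear convolution.
# A compiler working on the array algebra can rewrite one into the
# other (and fuse the intermediate arrays) without changing the result.
rng = np.random.default_rng(0)
a = rng.standard_normal(64)
b = rng.standard_normal(64)

n = a.size + b.size - 1          # full linear-convolution length
composed = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)
direct = np.convolve(a, b)       # O(n^2) reference definition
```

The numerical agreement of the two forms is what licenses the rewrite; the paper's contribution is doing such rewrites systematically, with provable temporary-variable minimality.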
Li, Jung-Hui; Du, Yeh-Ming; Huang, Hsuan-Ming
2015-09-08
The objective of this study was to evaluate the accuracy of dual-energy CT (DECT) for quantifying iodine using a soft tissue-mimicking phantom across various DECT acquisition parameters and dual-source CT (DSCT) scanners. A phantom was constructed with plastic tubes containing soft tissue-mimicking materials with known iodine concentrations (0-20 mg/mL). Experiments were performed on two DSCT scanners, one equipped with an integrated detector and the other with a conventional detector. DECT data were acquired using two DE modes (80 kV/Sn140 kV and 100 kV/Sn140 kV) with four pitch values (0.6, 0.8, 1.0, and 1.2). Images were reconstructed using a soft tissue kernel with and without beam hardening correction (BHC) for iodine. Using the dedicated DE software, iodine concentrations were measured and compared to true concentrations. We also investigated the effect of reducing gantry rotation time on the DECT-based iodine measurement. At iodine concentrations higher than 10 mg/mL, the relative error in measured iodine concentration increased slightly. This error can be decreased by using the kernel with BHC, compared with the kernel without BHC. Both 80 kV/Sn140 kV and 100 kV/Sn140 kV modes could provide accurate quantification of iodine content. Increasing pitch value or reducing gantry rotation time had only a minor impact on the DECT-based iodine measurement. The DSCT scanner, equipped with the new integrated detector, showed more accurate iodine quantification for all iodine concentrations higher than 10 mg/mL. An accurate quantification of iodine can be obtained using the second-generation DSCT scanner in various DE modes with pitch values up to 1.2 and gantry rotation time down to 0.28 s. For iodine concentrations ≥ 10 mg/mL, using the new integrated detector and the kernel with BHC can improve the accuracy of DECT-based iodine measurements.
Norambuena, Tomas; Cares, Jorge F.; Capriotti, Emidio; Melo, Francisco
2013-01-01
Summary: The understanding of the biological role of RNA molecules has changed. Although it is widely accepted that RNAs play important regulatory roles without necessarily coding for proteins, the functions of many of these non-coding RNAs are unknown. Thus, determining or modeling the 3D structure of RNA molecules as well as assessing their accuracy and stability has become of great importance for characterizing their functional activity. Here, we introduce a new web application, WebRASP, that uses knowledge-based potentials for scoring RNA structures based on distance-dependent pairwise atomic interactions. This web server allows the users to upload a structure in PDB format, select several options to visualize the structure and calculate the energy profile. The server contains online help, tutorials and links to other related resources. We believe this server will be a useful tool for predicting and assessing the quality of RNA 3D structures. Availability and implementation: The web server is available at http://melolab.org/webrasp. It has been tested on the most popular web browsers and requires Java plugin for Jmol visualization. Contact: fmelo@bio.puc.cl PMID:23929030
NASA Astrophysics Data System (ADS)
McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.
2016-04-01
The aim of the study was to perform a preliminary validation of a low-cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) were compared between CAPTURE and Vicon. After correction algorithms were applied to the raw CAPTURE data, acceptable levels of accuracy and precision were achieved. The knee flexion angle measured over three trials using CAPTURE deviated from Vicon by -3.8° ± 3° (left) and 1.7° ± 2.8° (right). The findings suggest that low-cost markerless motion capture has potential to provide an objective method for assessing lower limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants future research to examine more fully the potential implications of low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.
Rajati, Mohsen; Pezeshki Rad, Masoud; Irani, Shirin; Khorsandi, Mohammad Taghi; Motasaddi Zarandy, Masoud
2014-08-01
In this study, high-resolution multislice computed tomography findings are compared with surgical findings in terms of fracture location in patients with traumatic facial paralysis. Patients with traumatic facial paralysis of grade VI on the House-Brackmann scale who met the criteria for surgical decompression between 2008 and 2012 were included. All patients underwent high-resolution multislice CT (HRCT) with 1-mm-thick slices and a bone window algorithm. The anatomical areas of the temporal bone (including the Fallopian canal) were assessed by CT and during surgery (separately by the radiologist and the surgeon), and fracture line involvement was recorded. Forty-one patients entered this study. The perigeniculate area was the most commonly involved region (46.34%) of the facial nerve. The sensitivity and specificity of HRCT to detect a fracture line appear to differ among sites, but the overall sensitivity and specificity were 77.5% and 77.7%, respectively. Although HRCT is the modality of choice in traumatic facial paralysis, its diagnostic value may differ according to fracture location. The results of HRCT should be considered with caution in certain areas.
NASA Astrophysics Data System (ADS)
Li, Shijie; Liu, Bingcai; Tian, Ailing; Guo, Zhongda; Yang, Pengfei; Zhang, Jin
2016-02-01
To design a computer-generated hologram (CGH) to measure off-axis aspheric surfaces with high precision, two different design methods are introduced: ray tracing and simulation using the Zemax software program. With ray tracing, after the discrete phase distribution is computed, a B-spline is used to obtain the phase function, and surface intersection is a useful method for determining the CGH fringe positions. In Zemax, the dummy glass method is an effective method for simulating CGH tests. Furthermore, the phase function can also be obtained from the Zernike Fringe Phase. The phase distributions and CGH fringe positions obtained from the two results were compared, and the two methods were determined to be in agreement. Finally, experimental outcomes were determined using the CGH test and autocollimation. The test result (PV=0.309λ, RMS=0.044λ) is the same as that determined by autocollimation (PV=0.330λ, RMS=0.044λ). Further analysis showed that the surface shape distribution and Zernike Fringe polynomial coefficient match well, indicating that the two design methods are correct and consistent and that the CGH test can measure off-axis aspheric surfaces with high precision.
do Couto-Filho, Carlos Eduardo Gomes; de Moraes, Paulo Hemerson; Alonso, Maria Beatriz Carrazzone; Haiter-Neto, Francisco; Olate, Sergio; de Albergaria-Barbosa, José Ricardo
2016-01-01
Summary Dental implant placement and chin osteotomy are executed on the mandibular body, and the mental nerve is an important anatomical limit. The aim of this research was to determine the position of the mental nerve loop by comparing results from panoramic radiography and cone beam computed tomography. We analyzed 94 hemimandibles; the patient sample comprised female and male subjects aged 18 to 52 years (mean age, 35 years) selected randomly from the database of patients at the Division of Oral Radiology at Piracicaba Dental School, State University of Campinas. The anterior loop (AL) of the mental nerve was evaluated for presence or absence, classified as rectilinear or curvilinear, and its length was measured. The observations were made on digital panoramic radiography (PR) and cone beam computed tomography (CBCT) according to a routine technique. The frequencies of the AL identified by PR and CBCT were different: in PR the loop was identified in 42.6% of cases, of which only 12.8% were bilateral. In contrast, the AL was detected in 29.8% of the samples using CBCT, with 6.4% being bilateral. Statistical comparison between PR and CBCT showed that PR led to false-positive diagnoses of the AL in this sample. According to the results of this study, the frequency of the AL is low. Thus, it can be assumed that it is not a common condition in this population. PMID:27667898
Cysewski, Piotr; Jeliński, Tomasz
2013-10-01
The electronic spectrum of four different anthraquinones (1,2-dihydroxyanthraquinone, 1-aminoanthraquinone, 2-aminoanthraquinone and 1-amino-2-methylanthraquinone) in methanol solution was measured and used as reference data for theoretical color prediction. The visible part of the spectrum was modeled within the TD-DFT framework with a broad range of DFT functionals. The convoluted theoretical spectra were validated against experimental data by direct color comparison in terms of the CIE XYZ and CIE Lab tristimulus color models. It was found that the 6-31G** basis set provides the most accurate color prediction, and there is no need to extend the basis set since doing so does not improve the predicted color. Although different functionals gave the most accurate color prediction for different anthraquinones, it is possible to apply the same DFT approach to the whole set of analyzed dyes. Three functionals in particular, mPW1LYP, B1LYP and PBE0, seem valuable due to their very similar spectral predictions. The major source of discrepancies between theoretical and experimental spectra comes from the L values, representing lightness, and the a parameter, depicting the position on the green→magenta axis. Fortunately, the agreement between computed and observed values on the blue→yellow axis (parameter b) is very precise for the studied anthraquinone dyes in methanol solution. Despite these shortcomings, color prediction from first-principles quantum chemistry computations can lead to quite satisfactory results, expressed in terms of color space parameters.
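The direct color comparison in CIE Lab space reduces to a distance between coordinate triplets; a minimal sketch of the simplest such metric (the CIE76 ΔE*ab, which may or may not be the exact metric used in the paper) follows, with hypothetical computed and measured coordinates for one dye.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIE Lab
    colors, where L is lightness, a the green->magenta axis, and b the
    blue->yellow axis."""
    return math.dist(lab1, lab2)

# Hypothetical computed vs measured Lab coordinates for one dye:
computed = (62.0, 18.5, 41.0)   # (L, a, b)
measured = (65.0, 15.0, 40.0)
de = delta_e_cie76(computed, measured)
```

A ΔE around 1 is roughly the just-noticeable difference, so per-axis breakdowns like the paper's (errors concentrated in L and a, small in b) tell more than the aggregate distance alone.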
Schuurman, Michael S; Muir, Steven R; Allen, Wesley D; Schaefer, Henry F
2004-06-22
In continuing pursuit of thermochemical accuracy to the level of 0.1 kcal mol⁻¹, the heats of formation of NCO, HNCO, HOCN, HCNO, and HONC have been rigorously determined using state-of-the-art ab initio electronic structure theory, including conventional coupled cluster methods [coupled cluster singles and doubles (CCSD), CCSD with perturbative triples (CCSD(T)), and full coupled cluster through triple excitations (CCSDT)] with large basis sets, conjoined in cases with explicitly correlated MP2-R12/A computations. Limits of valence and all-electron correlation energies were extrapolated via focal point analysis using correlation consistent basis sets of the form cc-pVXZ (X=2-6) and cc-pCVXZ (X=2-5), respectively. In order to reach subchemical accuracy targets, core correlation, spin-orbit coupling, special relativity, the diagonal Born-Oppenheimer correction, and anharmonicity in zero-point vibrational energies were accounted for. Various coupled cluster schemes for partially including connected quadruple excitations were also explored, although none of these approaches gave reliable improvements over CCSDT theory. Based on numerous, independent thermochemical paths, each designed to balance residual ab initio errors, our final proposals are ΔH°f,0(NCO)=+30.5, ΔH°f,0(HNCO)=-27.6, ΔH°f,0(HOCN)=-3.1, ΔH°f,0(HCNO)=+40.9, and ΔH°f,0(HONC)=+56.3 kcal mol⁻¹. The internal consistency and convergence behavior of the data suggest accuracies of ±0.2 kcal mol⁻¹ in these predictions, except perhaps in the HCNO case. However, the possibility of somewhat larger systematic errors cannot be excluded, and the need for CCSDTQ [full coupled cluster through quadruple excitations] computations to eliminate remaining uncertainties is apparent. PMID:15268193
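The basis-set extrapolation step in a focal point analysis is often done with a two-point inverse-cube formula, E(X) = E_CBS + A·X⁻³, applied to correlation energies at consecutive cardinal numbers X. The sketch below shows that common scheme (one standard choice; the paper's exact extrapolation form is not specified here), with hypothetical energies in hartree.

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point complete-basis-set extrapolation assuming
    E(X) = E_CBS + A * X**-3, solved from correlation energies at
    cardinal numbers x < y (e.g. cc-pVQZ and cc-pV5Z)."""
    a = (e_x - e_y) / (x ** -3 - y ** -3)
    return e_x - a * x ** -3

# Hypothetical correlation energies (hartree) at X=4 (QZ) and X=5 (5Z):
e_cbs = cbs_two_point(-0.5400, -0.5450, 4, 5)
```

By construction the extrapolated value lies beyond the larger-basis result, recovering the residual correlation energy the finite basis misses.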
Targeting an efficient target-to-target interval for P300 speller brain–computer interfaces
Sellers, Eric W.; Wang, Xingyu
2013-01-01
Longer target-to-target intervals (TTIs) produce greater P300 event-related potential amplitude, which can increase brain–computer interface (BCI) classification accuracy and decrease the number of flashes needed for accurate character classification. However, longer TTIs require more time for each trial, which decreases the information transfer rate of the BCI. In this paper, a P300 BCI using a 7 × 12 matrix explored new flash patterns (16-, 18- and 21-flash patterns) with different TTIs to assess the effects of TTI on P300 BCI performance. The new flash patterns were designed to minimize TTI, decrease repetition blindness, and examine the temporal relationship between flashes of a given stimulus by placing a minimum of one (16-flash pattern), two (18-flash pattern), or three (21-flash pattern) non-target flashes between successive target flashes. Online results showed that the 16-flash pattern yielded the lowest classification accuracy among the three patterns. The results also showed that the 18-flash pattern provides a significantly higher information transfer rate (ITR) than the 21-flash pattern; both patterns provide high ITR and high accuracy for all subjects. PMID:22350331
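The design constraint on the flash patterns, at least k non-target flashes between successive flashes containing the target, can be checked mechanically. The sketch below uses a hypothetical encoding of a flash sequence as a list of flash groups (sets of characters lit together); it is illustrative and not the paper's actual 16/18/21-flash layouts.

```python
def min_target_gap(sequence, target):
    """Smallest number of non-target flashes between consecutive
    flashes that contain `target`. `sequence` is a list of flash
    groups (hypothetical encoding). Returns None if the target
    flashes fewer than twice."""
    hits = [i for i, group in enumerate(sequence) if target in group]
    gaps = [b - a - 1 for a, b in zip(hits, hits[1:])]
    return min(gaps) if gaps else None

# Toy sequence: character 'A' flashes at positions 0, 3 and 7,
# giving gaps of 2 and 3 non-target flashes between its flashes.
seq = [{"A"}, {"B"}, {"C"}, {"A", "D"}, {"E"}, {"F"}, {"G"}, {"A"}]
gap = min_target_gap(seq, "A")
```

A pattern satisfies, say, the 18-flash design if this minimum is at least 2 for every character in the matrix.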
Li, Mao; Wittek, Adam; Miller, Karol
2014-01-01
Biomechanical modeling methods can be used to predict deformations for medical image registration and particularly, they are very effective for whole-body computed tomography (CT) image registration because differences between the source and target images caused by complex articulated motions and soft tissues deformations are very large. The biomechanics-based image registration method needs to deform the source images using the deformation field predicted by finite element models (FEMs). In practice, the global and local coordinate systems are used in finite element analysis. This involves the transformation of coordinates from the global coordinate system to the local coordinate system when calculating the global coordinates of image voxels for warping images. In this paper, we present an efficient numerical inverse isoparametric mapping algorithm to calculate the local coordinates of arbitrary points within the eight-noded hexahedral finite element. Verification of the algorithm for a nonparallelepiped hexahedral element confirms its accuracy, fast convergence, and efficiency. The algorithm's application in warping of the whole-body CT using the deformation field predicted by means of a biomechanical FEM confirms its reliability in the context of whole-body CT registration. PMID:24828796
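The inverse isoparametric mapping the paper addresses can be sketched as a Newton iteration: given a global point x, solve Σᵢ Nᵢ(ξ)Xᵢ = x for the local coordinates ξ of the trilinear 8-node hexahedron. The generic Newton version below is an illustration of the underlying problem, not the paper's specialized (faster) algorithm; the test element is a hypothetical unit cube with one perturbed corner, making it non-parallelepiped.

```python
import numpy as np

# Corner signs of the 8-noded hexahedron in local coordinates [-1, 1]^3.
SIGNS = np.array([[-1,-1,-1],[ 1,-1,-1],[ 1, 1,-1],[-1, 1,-1],
                  [-1,-1, 1],[ 1,-1, 1],[ 1, 1, 1],[-1, 1, 1]], float)

def shape(local):
    """Trilinear shape functions N_i(xi, eta, zeta), shape (8,)."""
    g = 1.0 + SIGNS * local
    return 0.125 * g.prod(axis=1)

def jacobian(local, nodes):
    """3x3 Jacobian d x / d xi of the isoparametric map."""
    g = 1.0 + SIGNS * local
    dN = np.empty((8, 3))
    for k in range(3):
        cols = [c for c in range(3) if c != k]
        dN[:, k] = 0.125 * SIGNS[:, k] * g[:, cols].prod(axis=1)
    return nodes.T @ dN

def inverse_map(x, nodes, tol=1e-12, max_iter=30):
    """Newton iteration for local coordinates of global point x."""
    local = np.zeros(3)              # start at the element centre
    for _ in range(max_iter):
        r = shape(local) @ nodes - x # residual in global coordinates
        if np.linalg.norm(r) < tol:
            break
        local -= np.linalg.solve(jacobian(local, nodes), r)
    return local

# Non-parallelepiped test element: unit cube with one perturbed corner.
nodes = 0.5 * (SIGNS + 1.0)
nodes[6] = [1.2, 1.1, 1.3]
target = shape(np.array([0.3, -0.2, 0.5])) @ nodes
xi = inverse_map(target, nodes)
```

For interior points of mildly distorted elements the iteration converges in a handful of steps; done per voxel over a whole-body CT, the cost of this inner solve is exactly what motivates the paper's efficient algorithm.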
Sirin, Y; Guven, K; Horasan, S; Sencan, S
2010-01-01
Objectives The aim of this study was to compare diagnostic accuracy of cone beam CT (CBCT) and multislice CT in artificially created fractures of the sheep mandibular condyle. Methods 63 full-thickness sheep heads were used in this study. Two surgeons created the fractures, which were either displaced or non-displaced. CBCT images were acquired by the NewTom 3G® CBCT scanner (NIM, Verona, Italy) and CT imaging was performed using the Toshiba Aquillon® multislice CT scanner (Toshiba Medical Systems, Otawara, Japan). Two-dimensional (2D) cross-sectional images and three-dimensional (3D) reconstructions were evaluated by two observers who were asked to determine the presence or absence of fracture and displacement, the type of fracture, anatomical localization and type of displacement. The naked-eye inspection during surgery served as the gold standard. Inter- and intra-observer agreements were calculated with weighted kappa statistics. The receiver operating characteristics (ROC) curve analyses were used to compare statistically the area under the curve (AUC) of both imaging modalities. Results Kappa coefficients of intra- and interobserver agreement scores varied between 0.56 – 0.98, which were classified as moderate and excellent, respectively. There was no statistically significant difference between the imaging modalities, which were both sensitive and specific for the diagnosis of sheep condylar fractures. Conclusions This study confirms that CBCT is similar to CT in the diagnosis of different types of experimentally created sheep condylar fractures and can provide a cost- and dose-effective diagnostic option. PMID:20729182
Can computational efficiency alone drive the evolution of modularity in neural networks?
Tosh, Colin R.
2016-01-01
Some biologists have abandoned the idea that computational efficiency in processing multipart tasks or input sets alone drives the evolution of modularity in biological networks. A recent study confirmed that small modular (neural) networks are relatively computationally-inefficient but large modular networks are slightly more efficient than non-modular ones. The present study determines whether these efficiency advantages with network size can drive the evolution of modularity in networks whose connective architecture can evolve. The answer is no, but the reason why is interesting. All simulations (run in a wide variety of parameter states) involving gradualistic connective evolution end in non-modular local attractors. Thus while a high performance modular attractor exists, such regions cannot be reached by gradualistic evolution. Non-gradualistic evolutionary simulations in which multi-modularity is obtained through duplication of existing architecture appear viable. Fundamentally, this study indicates that computational efficiency alone does not drive the evolution of modularity, even in large biological networks, but it may still be a viable mechanism when networks evolve by non-gradualistic means. PMID:27573614
NASA Astrophysics Data System (ADS)
Joost, William J.
2012-09-01
Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.
An efficient sparse matrix multiplication scheme for the CYBER 205 computer
NASA Technical Reports Server (NTRS)
Lambiotte, Jules J., Jr.
1988-01-01
This paper describes the development of an efficient algorithm for computing the product of a matrix and a vector on a CYBER 205 vector computer. The desire to provide software that allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types were chosen to be efficient on the CYBER 205 for diagonals whose nonzero structure is dense, moderately sparse, very sparse and short, or very sparse and long; for many densities, however, no single storage type is most efficient with respect to both resources, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type, based on results from previously performed numerical experimentation. These requirements are adjusted by user-supplied weights that reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
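The per-diagonal storage selection the abstract describes can be sketched in miniature. The following Python code is not Lambiotte's CYBER 205 implementation; the half-full density rule is an invented stand-in for the paper's calibrated CPU/storage estimates. It multiplies a matrix stored diagonal-by-diagonal, choosing dense or sparse storage independently for each diagonal:

```python
def matvec_by_diagonals(n, diagonals, x):
    """Multiply an n x n matrix, stored diagonal-by-diagonal, by vector x.

    `diagonals` maps a diagonal offset (0 = main, +k = super, -k = sub)
    to the list of values along that diagonal. Each diagonal is processed
    either densely or as (index, value) pairs, chosen per diagonal.
    """
    y = [0.0] * n
    for offset, values in diagonals.items():
        nonzeros = [(i, v) for i, v in enumerate(values) if v != 0.0]
        # Invented cost rule standing in for the paper's calibrated
        # CPU/storage estimates: store a diagonal densely when more than
        # half of its entries are nonzero, else as (index, value) pairs.
        use_dense = 2 * len(nonzeros) > len(values)
        entries = list(enumerate(values)) if use_dense else nonzeros
        for i, v in entries:
            if v == 0.0:
                continue
            row = i if offset >= 0 else i - offset
            col = i + offset if offset >= 0 else i
            y[row] += v * x[col]
    return y
```

In a real implementation the dense/sparse decision would be driven by timed cost models and user weights, as the abstract describes, rather than by this fixed density threshold.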
Efficient scatter model for simulation of ultrasound images from computed tomography data
NASA Astrophysics Data System (ADS)
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Given the high value of specialized low-cost training for healthcare professionals, there is growing interest in this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator uses ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The generation of scattering maps was revised for improved performance, allowing a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, achieving a performance of up to 55 fps. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
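The scatter model described above can be illustrated with a minimal sketch. The following Python code is a hypothetical, heavily simplified version of the paper's model: the PSF, noise level, and pure-Python convolution are illustrative choices, not the authors' implementation. It applies multiplicative Gaussian noise and then convolves with a PSF:

```python
import random

def speckle(image, psf, noise_std=0.5, seed=0):
    """Simplified multiplicative-noise speckle model.

    Each pixel is multiplied by (1 + Gaussian noise) and the result is
    convolved with a point spread function (PSF), mirroring the abstract's
    description. `image` and `psf` are lists of row lists.
    """
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    noisy = [[image[r][c] * (1.0 + rng.gauss(0.0, noise_std))
              for c in range(w)] for r in range(h)]
    ph, pw = len(psf), len(psf[0])
    oh, ow = ph // 2, pw // 2
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            acc = 0.0
            for i in range(ph):
                for j in range(pw):
                    rr, cc = r + i - oh, c + j - ow
                    if 0 <= rr < h and 0 <= cc < w:
                        acc += psf[i][j] * noisy[rr][cc]
            out[r][c] = acc
    return out
```

A real-time simulator would of course perform the convolution on the GPU and tailor the PSF to the transducer geometry; this sketch only shows the noise-then-convolve structure.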
NASA Technical Reports Server (NTRS)
Iyer, Venkit
1990-01-01
A solution method, fourth-order accurate in the body-normal direction and second-order accurate in the stream-surface directions, for the compressible 3-D boundary layer equations is presented. The transformation used, the discretization details, and the solution procedure are described. Ten validation cases of varying complexity are presented and results of the calculations are given. The results range from subsonic to supersonic flow and involve 2-D or 3-D geometries. Applications to laminar flow past wing and fuselage-type bodies are discussed. An interface procedure is used to solve the surface Euler equations with the inviscid flow pressure field as the input, to assure accurate boundary conditions at the boundary layer edge. Complete details of the computer program used and the information necessary to run each of the test cases are given in the Appendix.
Efficient Computation of Info-Gap Robustness for Finite Element Models
Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.
2012-07-05
A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatment of the info-gap problems using the adjoint methodology is outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
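For readers unfamiliar with info-gap robustness, a brute-force sketch may help. The following Python code is an illustrative baseline of the kind the report compares against, not its adjoint methodology; the scalar uncertain parameter and the grid sweep are assumptions made for compactness. Robustness is taken as the largest uncertainty horizon at which a performance requirement still holds everywhere:

```python
def robustness(solve, u_nominal, perf_ok, horizons, grid=21):
    """Brute-force info-gap robustness for a model x = solve(u).

    For each uncertainty horizon h, the scalar parameter u is swept over
    [u_nominal - h, u_nominal + h]; the robustness is the largest h for
    which the performance requirement perf_ok holds at every sweep point.
    This repeated-optimization structure is exactly what makes the naive
    approach expensive for finite element models.
    """
    best = 0.0
    for h in sorted(horizons):
        us = [u_nominal - h + 2.0 * h * i / (grid - 1) for i in range(grid)]
        if all(perf_ok(solve(u)) for u in us):
            best = h
        else:
            break
    return best
```

For example, with a model x = 2u, nominal u = 1, and the requirement |x| <= 3, the robustness over horizons {0.25, 0.5, 1.0} is 0.5: at h = 1.0 the sweep reaches u = 2 and the requirement fails.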
Clarke, Sarah; Wilson, Marisa L; Terhaar, Mary
2016-01-01
Heart Team meetings are becoming the model of care for patients undergoing transcatheter aortic valve implantations (TAVI) worldwide. While Heart Teams have potential to improve the quality of patient care, the volume of patient data processed during the meeting is large, variable, and comes from different sources. Thus, consolidation is difficult. Also, meetings impose substantial time constraints on the members and financial pressure on the institution. We describe a clinical decision support system (CDSS) designed to assist the experts in treatment selection decisions in the Heart Team. Development of the algorithms and visualization strategy required a multifaceted approach and end-user involvement. An innovative feature is its ability to utilize algorithms to consolidate data and provide clinically useful information to inform the treatment decision. The data are integrated using algorithms and rule-based alert systems to improve efficiency, accuracy, and usability. Future research should focus on determining if this CDSS improves patient selection and patient outcomes. PMID:27332170
Does computer-aided surgical simulation improve efficiency in bimaxillary orthognathic surgery?
Schwartz, H C
2014-05-01
The purpose of this study was to compare the efficiency of bimaxillary orthognathic surgery planned using computer-aided surgical simulation (CASS) with that of cases planned using traditional methods. Total doctor time was used to measure efficiency. While costs vary widely across localities and health schemes, time is a valuable and limited resource everywhere. For this reason, total doctor time is a more useful measure of efficiency than cost. Even though we currently use CASS primarily for planning more complex cases, this study showed an average saving of 60 min per case. In the context of a department that performs 200 bimaxillary cases each year, this would represent a saving of 25 days of doctor time if applied to every case. It is concluded that CASS offers great potential for improving efficiency when used in the planning of bimaxillary orthognathic surgery. It saves significant doctor time that can be applied to additional surgical work.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a set of similar model parametrizations (a "bundle") replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
NASA Astrophysics Data System (ADS)
Seny, Bruno; Lambrechts, Jonathan; Toulorge, Thomas; Legat, Vincent; Remacle, Jean-François
2014-01-01
Although explicit time integration schemes require small computational efforts per time step, their efficiency is severely restricted by their stability limits. Indeed, the multi-scale nature of some physical processes combined with highly unstructured meshes can lead some elements to impose a severely small stable time step for a global problem. Multirate methods offer a way to increase the global efficiency by gathering grid cells in appropriate groups under local stability conditions. These methods are well suited to the discontinuous Galerkin framework. The parallelization of the multirate strategy is challenging because grid cells have different workloads. The computational cost is different for each sub-time step depending on the elements involved and a classical partitioning strategy is not adequate any more. In this paper, we propose a solution that makes use of multi-constraint mesh partitioning. It tends to minimize the inter-processor communications, while ensuring that the workload is almost equally shared by every computer core at every stage of the algorithm. Particular attention is given to the simplicity of the parallel multirate algorithm while minimizing computational and communication overheads. Our implementation makes use of the MeTiS library for mesh partitioning and the Message Passing Interface for inter-processor communication. Performance analyses for two and three-dimensional practical applications confirm that multirate methods preserve important computational advantages of explicit methods up to a significant number of processors.
Unified commutation-pruning technique for efficient computation of composite DFTs
NASA Astrophysics Data System (ADS)
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT), of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in the time or space (DIT) data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypothesis-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem, which always requires fewer or, at most, the same number of arithmetic operations than any other feasible modality; in this sense, DFTCOMM outperforms the competing pruning techniques reported in the literature. Finally, we provide the comparison of the DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with
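The benefit of pruning can be seen in a minimal sketch. The following Python function illustrates output pruning only; the commutation logic among DFTCOMM's three modalities is not reproduced here. When only a few bins are needed, evaluating just those bins directly costs O(N x |wanted|) rather than the O(N^2) of a full direct DFT:

```python
import cmath

def pruned_dft(x, wanted_bins):
    """Directly evaluate only the requested DFT bins of sequence x.

    Returns {k: X_k} for each k in wanted_bins, where
    X_k = sum_t x[t] * exp(-2*pi*i*k*t/N).
    """
    n = len(x)
    w = -2j * cmath.pi / n
    return {k: sum(x[t] * cmath.exp(w * k * t) for t in range(n))
            for k in wanted_bins}
```

A commuting scheme like the one the abstract proposes would compare the cost of this direct pruned sum against the recursive-filter and decomposed-transform modalities and pick the cheapest for the given length and sparsity.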
Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong
2014-01-01
The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus, one necessary task for data quality management is to evaluate the accuracy of the data. To address the problem that the accuracy of a whole data set may be low while that of a useful part may be high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither measures nor effective methods for accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which show the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
NASA Astrophysics Data System (ADS)
Song, Bowen; Zhang, Guopeng; Wang, Huafeng; Zhu, Wei; Liang, Zhengrong
2013-02-01
Various types of features, e.g., geometric features, texture features, projection features, etc., have been introduced for polyp detection and differentiation tasks via computer-aided detection and diagnosis (CAD) for computed tomography colonography (CTC). Although these features together cover more information of the data, some of them are statistically highly related to others, which makes the feature set redundant and burdens the computation task of CAD. In this paper, we propose a new dimension reduction method that combines hierarchical clustering and principal component analysis (PCA) for the false-positive (FP) reduction task. First, we group all the features based on their similarity using hierarchical clustering, and then PCA is employed within each group. Different numbers of principal components are selected from each group to form the final feature set. A support vector machine is used to perform the classification. The results show that when three principal components were chosen from each group, we achieved an area under the receiver operating characteristic curve of 0.905, as high as with the original dataset. Meanwhile, the computation time is reduced by 70% and the feature set size is reduced by 77%. It can be concluded that the proposed method captures the most important information of the feature set and that classification accuracy is not affected by the dimension reduction. The result is promising, and further investigation, such as automatic threshold setting, is worthwhile and in progress.
NASA Technical Reports Server (NTRS)
Muellerschoen, R. J.
1988-01-01
A unified method to permute vector stored Upper triangular Diagonal factorized covariance and vector stored upper triangular Square Root Information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and a one dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
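The retriangularization step mentioned above can be sketched on a small dense array. The following Python code is a simplified illustration: the paper works on vector-stored UD and square-root information arrays and distinguishes fast and slow Givens variants, none of which this sketch reproduces. It restores upper-triangular form after a row permutation by annihilating each subdiagonal entry with one plane rotation:

```python
import math

def givens(a, b):
    """Return (c, s) zeroing b in [a; b] via the rotation [[c, s], [-s, c]]."""
    r = math.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def retriangularize(m):
    """Restore upper-triangular form of a row-permuted triangular matrix.

    Classic Givens sweep: for each column, subdiagonal entries are zeroed
    bottom-up by rotations acting on adjacent row pairs.
    """
    n = len(m)
    for col in range(n):
        for row in range(n - 1, col, -1):
            if m[row][col] != 0.0:
                c, s = givens(m[row - 1][col], m[row][col])
                for k in range(n):
                    u, v = m[row - 1][k], m[row][k]
                    m[row - 1][k] = c * u + s * v
                    m[row][k] = -s * u + c * v
    return m
```

Cyclically permuting rows and columns of a stored factor and re-running such a sweep is the basic operation the abstract's method performs, with the additional care of arranging the computation to avoid paging faults on a virtual memory machine.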
do Nascimento, José Hermes Ribas; Soder, Ricardo Bernardi; Epifanio, Matias; Baldisserotto, Matteo
2015-01-01
Objective: To compare the accuracy of computer-aided ultrasound (US) and magnetic resonance imaging (MRI), by means of hepatorenal gradient analysis, in the evaluation of nonalcoholic fatty liver disease (NAFLD) in adolescents. Materials and Methods: This prospective, cross-sectional study evaluated 50 adolescents (aged 11–17 years), including 24 obese and 26 eutrophic individuals. All adolescents underwent computer-aided US, MRI, laboratory tests, and anthropometric evaluation. Sensitivity, specificity, positive and negative predictive values, and accuracy were evaluated for both imaging methods, with subsequent generation of the receiver operating characteristic (ROC) curve and calculation of the area under the ROC curve to determine the most appropriate hepatorenal-gradient cutoff point for predicting the degree of steatosis, with MRI results as the gold standard. Results: The obese group included 29.2% girls and 70.8% boys, and the eutrophic group, 69.2% girls and 30.8% boys. The prevalence of NAFLD was 19.2% in the eutrophic group and 83% in the obese group. The ROC curve generated for the hepatorenal gradient with a cutoff point of 13 presented 100% sensitivity and 100% specificity. When the same cutoff point was applied to the eutrophic group, false-positive results were observed in 9.5% of cases (90.5% specificity) and false-negative results in 0% (100% sensitivity). Conclusion: Computer-aided US with hepatorenal gradient calculation is a simple and noninvasive technique for the semiquantitative evaluation of hepatic echogenicity and could be useful in the follow-up of adolescents with NAFLD, in population screening for this disease, and in clinical studies. PMID:26379321
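The cutoff analysis reported above can be mimicked on synthetic data. The following Python sketch uses made-up scores and a made-up threshold, not the study's measurements; it computes sensitivity and specificity at candidate cutoffs, which are the ingredients of the ROC curve:

```python
def roc_points(scores_pos, scores_neg, thresholds):
    """Sensitivity and specificity at each candidate cutoff.

    A score >= cutoff is called positive. scores_pos are scores of truly
    diseased cases (here, per the gold standard), scores_neg of healthy ones.
    Returns a list of (cutoff, sensitivity, specificity) tuples.
    """
    pts = []
    for t in thresholds:
        sens = sum(s >= t for s in scores_pos) / len(scores_pos)
        spec = sum(s < t for s in scores_neg) / len(scores_neg)
        pts.append((t, sens, spec))
    return pts
```

Sweeping the cutoff over all observed score values and plotting sensitivity against 1 - specificity yields the ROC curve; the cutoff nearest the (0, 1) corner is the usual choice, analogous to the study's hepatorenal-gradient cutoff of 13.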
Efficient path-based computations on pedigree graphs with compact encodings
2012-01-01
A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and the accumulation of genealogy information, pedigree data are becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path-encoding scheme for large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utility of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and demonstrate the efficiency of our method for evaluating inbreeding coefficients, as compared to previous methods, through experiments using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
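The genealogical quantity being computed can be illustrated directly. The following Python sketch implements the standard recursive kinship computation, not the paper's compact path encoding (which accelerates exactly this kind of calculation on large graphs); individual IDs are assumed to be assigned parents-before-children:

```python
def kinship(a, b, parents, _memo=None):
    """Kinship coefficient f(a, b); the inbreeding coefficient of an
    individual x with parents (s, d) is F(x) = f(s, d).

    `parents` maps an individual ID to its (sire, dam) pair; founders are
    simply absent from the map. IDs are assumed to increase with birth
    order, so the larger ID is always the younger individual.
    """
    if _memo is None:
        _memo = {}
    if a is None or b is None:
        return 0.0
    key = (a, b) if a <= b else (b, a)
    if key in _memo:
        return _memo[key]
    if a == b:
        p = parents.get(a)
        f = 0.5 if p is None else 0.5 * (1.0 + kinship(p[0], p[1], parents, _memo))
    else:
        # Recurse through the parents of the younger individual.
        young, old = (a, b) if a > b else (b, a)
        p = parents.get(young)
        f = 0.0 if p is None else 0.5 * (kinship(p[0], old, parents, _memo) +
                                         kinship(p[1], old, parents, _memo))
    _memo[key] = f
    return f
```

For example, the offspring of full siblings (both children of founders 1 and 2) has inbreeding coefficient 1/4, and the offspring of half siblings 1/8; path-based methods obtain the same values by summing (1/2)^(path length) over paths through common ancestors.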
Xiang, H; Hirsch, A; Willins, J; Kachnic, J; Qureshi, M; Katz, M; Nicholas, B; Keohan, S; De Armas, R; Lu, H; Efstathiou, J; Zietman, A
2014-06-01
Purpose: To measure intrafractional prostate motion by time-based stereotactic x-ray imaging and investigate the impact on the accuracy and efficiency of prostate SBRT delivery. Methods: Prostate tracking log files with 1,892 x-ray image registrations from 18 SBRT fractions for 6 patients were retrospectively analyzed. Patient setup and beam delivery sessions were reviewed to identify extended periods of large prostate motion that caused delays in setup or interruptions in beam delivery. The 6D prostate motions were compared to the clinically used PTV margin of 3–5 mm (3 mm posterior, 5 mm in all other directions), a hypothetical PTV margin of 2–3 mm (2 mm posterior, 3 mm in all other directions), and the rotation correction limits (roll ±2°, pitch ±5°, and yaw ±3°) of CyberKnife to quantify beam delivery accuracy. Results: Significant incidents of treatment start delay and beam delivery interruption were observed, mostly related to large pitch rotations of ≥±5°. Optimal setup times of 5–15 minutes were recorded in 61% of the fractions, and optimal beam delivery times of 30–40 minutes in 67% of the fractions. At the default imaging interval of 15 seconds, the percentage of prostate motion beyond the PTV margin of 3–5 mm varied among patients, with a mean of 12.8% (range 0.0%–31.1%); the percentage beyond the PTV margin of 2–3 mm had a mean of 36.0% (range 3.3%–83.1%). These timely detected offsets were all corrected in real time by the robotic manipulator or by operator intervention at the time of treatment interruptions. Conclusion: The durations of patient setup and beam delivery were directly affected by the occurrence of large prostate motion. Frequent imaging, down to a 15-second interval, is necessary for certain patients. Techniques for reducing prostate motion, such as using an endorectal balloon, can be considered to assure consistently higher accuracy and efficiency of prostate SBRT delivery.
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-08-19
Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
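The core idea, solving the Levenberg-Marquardt linear system with a Krylov method instead of a direct factorization, can be sketched briefly. The following Python code is a toy illustration, not the authors' Julia/MADS implementation; the subspace recycling across damping parameters that the abstract describes is omitted. It computes one LM step using conjugate gradient, so only matrix-vector products with J and J^T are needed:

```python
def cg(matvec, b, tol=1e-10, max_iter=200):
    """Conjugate gradient for a symmetric positive-definite system."""
    x = [0.0] * len(b)
    r = b[:]
    p = r[:]
    rs = sum(v * v for v in r)
    if rs == 0.0:
        return x
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def lm_step(jac, resid, damping):
    """One Levenberg-Marquardt step: solve (J^T J + damping*I) dx = J^T r.

    The system is solved iteratively via matvecs only, the property a
    Krylov-subspace LM variant exploits to avoid direct QR/SVD solves.
    """
    m, n = len(jac), len(jac[0])
    def matvec(v):
        jv = [sum(jac[i][k] * v[k] for k in range(n)) for i in range(m)]
        jtjv = [sum(jac[i][k] * jv[i] for i in range(m)) for k in range(n)]
        return [jtjv[k] + damping * v[k] for k in range(n)]
    jtr = [sum(jac[i][k] * resid[i] for i in range(m)) for k in range(n)]
    return cg(matvec, jtr)
```

Because an LM iteration retries several damping parameters against the same Jacobian, reusing (recycling) the Krylov basis built for the first damping value, as the abstract proposes, amortizes most of the iterative solve cost.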
An improvement to computational efficiency of the drain current model for double-gate MOSFET
NASA Astrophysics Data System (ADS)
Zhou, Xing-Ye; Zhang, Jian; Zhou, Zhi-Ze; Zhang, Li-Ning; Ma, Chen-Yue; Wu, Wen; Zhao, Wei; Zhang, Xing
2011-09-01
As a connection between process and circuit design, a device model is greatly desired for emerging devices such as the double-gate MOSFET. Time efficiency is one of the most important requirements for device modeling. In this paper, an improvement to the computational efficiency of the drain current model for double-gate MOSFETs is presented, and different calculation methods are compared and discussed. The results show that the calculation speed of the improved model is substantially enhanced. A two-dimensional device simulation is performed to verify the improved model. Furthermore, the model is implemented in the HSPICE circuit simulator in Verilog-A for practical application.
NASA Astrophysics Data System (ADS)
Kim, Dong Wook; Bae, Sunhyun; Chung, Weon Kuu; Lee, Yoonhee
2014-04-01
Cone-beam computed tomography (CBCT) images are currently used for patient positioning and adaptive dose calculation; however, the degree of CBCT uncertainty in cases of respiratory motion remains an open issue. This study evaluated the uncertainty of CBCT-based dose calculations for a moving target. Using a phantom, we estimated differences in the geometries and the Hounsfield units (HU) between CT and CBCT. The calculated dose distributions based on CT and CBCT images were also compared using a radiation treatment planning system, including cases with respiratory motion. The geometrical uncertainties of the CT and CBCT images were less than 0.15 cm. The HU differences between CT and CBCT images for the standard-dose-head, high-quality-head, normal-pelvis, and low-dose-thorax modes were 31, 36, 23, and 33 HU, respectively. The gamma (3%, 0.3 cm) criterion between the CT- and CBCT-based dose distributions was satisfied in more than 99% of the area, both with and without respiratory motion. In conclusion, image distortion due to motion did not significantly influence dosimetric parameters.
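The gamma criterion used above can be made concrete. The following Python sketch is a 1-D global-gamma illustration with made-up profiles, not the study's 3-D treatment-planning computation; a point passes the (3%, 0.3 cm) test when its minimum gamma value is at most 1:

```python
import math

def gamma_index(ref, evalu, spacing, dose_tol=0.03, dist_tol=0.3):
    """1-D gamma analysis between reference and evaluated dose profiles.

    dose_tol is fractional (3% of the global maximum dose); dist_tol is in
    the same unit as `spacing` (0.3 cm here). For each reference point the
    minimum over all evaluated points of the combined dose/distance metric
    is returned.
    """
    d_max = max(ref)
    out = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(evalu):
            dd = (de - dr) / (dose_tol * d_max)
            dx = (j - i) * spacing / dist_tol
            best = min(best, math.hypot(dd, dx))
        out.append(best)
    return out
```

The pass rate is then the fraction of points with gamma <= 1; a pass rate above 99%, as reported in the abstract, indicates close agreement between the CT- and CBCT-based dose calculations.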
A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.
NASA Astrophysics Data System (ADS)
Wehner, M. F.; Oliker, L.; Shalf, J.
2008-12-01
Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.
Efficient high-fidelity quantum computation using matter qubits and linear optics
Barrett, Sean D.; Kok, Pieter
2005-06-15
We propose a practical, scalable, and efficient scheme for quantum computation using spatially separated matter qubits and single-photon interference effects. The qubit systems can be nitrogen-vacancy centers in diamond, Pauli-blockade quantum dots with an excess electron, or trapped ions with optical transitions, which are each placed in a cavity and subsequently entangled using a double-heralded single-photon detection scheme. The fidelity of the resulting entanglement is extremely robust against the most important errors such as detector loss, spontaneous emission, and mismatch of cavity parameters. We demonstrate how this entangling operation can be used to efficiently generate cluster states of many qubits, which, together with single-qubit operations and readout, can be used to implement universal quantum computation. Existing experimental parameters indicate that high-fidelity clusters can be generated with a moderate constant overhead.
Step-by-step magic state encoding for efficient fault-tolerant quantum computation.
Goto, Hayato
2014-12-16
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.
Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.; Swiler, Laura Painton; Rushdi, Ahmad A.; Abdelkader, Ahmad
2015-09-01
This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.
An efficient surrogate-based method for computing rare failure probability
NASA Astrophysics Data System (ADS)
Li, Jing; Li, Jinglai; Xiu, Dongbin
2011-10-01
In this paper, we present an efficient numerical method for evaluating rare failure probability. The method is based on a recently developed surrogate-based method from Li and Xiu [J. Li, D. Xiu, Evaluation of failure probability via surrogate models, J. Comput. Phys. 229 (2010) 8966-8980] for failure probability computation. The method by Li and Xiu is of hybrid nature, in the sense that samples of both the surrogate model and the true physical model are used, and its efficiency gain relies on using only very few samples of the true model. Here we extend the capability of the method to rare probability computation by using the idea of importance sampling (IS). In particular, we employ the cross-entropy (CE) method, which is an effective method to determine the biasing distribution in IS. We demonstrate that, by combining with the CE method, a surrogate-based IS algorithm can be constructed and is highly efficient for rare failure probability computation—it incurs much reduced simulation effort compared to the traditional CE-IS method. In many cases, the new method is capable of capturing failure probabilities as small as 10^-12 to 10^-6 with only several hundred samples.
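The cross-entropy step can be illustrated on a toy rare event, P(X > 5) for a standard normal variable (true value about 2.87e-7): CE iterations shift a Gaussian biasing density toward the failure region, after which plain importance sampling estimates the probability. This is a generic CE-IS sketch, not the surrogate-based algorithm of the paper; the sample sizes and quantile level are illustrative.

```python
import math
import random

random.seed(0)

t = 5.0  # failure threshold: the rare event is {X > 5} under N(0, 1)

def logpdf(x, m):
    # log-density of a unit-variance Gaussian with mean m
    return -0.5 * (x - m) ** 2 - 0.5 * math.log(2.0 * math.pi)

# --- Cross-entropy iterations: adapt the mean of the biasing density ---
mu, n, rho = 0.0, 1000, 0.1
level = -float("inf")
while level < t:
    xs = sorted(random.gauss(mu, 1.0) for _ in range(n))
    level = min(xs[int((1 - rho) * n)], t)  # elite quantile, capped at t
    elites = [x for x in xs if x >= level]
    mu = sum(elites) / len(elites)          # CE update of the biasing mean

# --- Importance sampling with the tuned biasing density ---
m_is = 20000
est = 0.0
for _ in range(m_is):
    x = random.gauss(mu, 1.0)
    if x > t:
        est += math.exp(logpdf(x, 0.0) - logpdf(x, mu))  # likelihood ratio
est /= m_is

true_p = 0.5 * math.erfc(t / math.sqrt(2.0))  # exact Gaussian tail
```

A naive Monte Carlo estimate of the same probability would need on the order of 10^9 samples; the CE-tuned sampler reaches a usable estimate with a few tens of thousands.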
NASA Astrophysics Data System (ADS)
Liu, C. P.
1997-07-01
An effective design structure for 2-D analysis/synthesis filter banks with high computational efficiency is proposed. The system involves a 2-D single-sideband (SSB) system, which is developed in terms of a 2-D separable weighted overlap-add (OLA) method of analysis/synthesis and enables overlap between adjacent spatial domain windows. This implies that spatial domain aliasing introduced in the analysis is canceled in the synthesis process, providing perfect reconstruction. To achieve perfect reconstruction, each individual analysis/synthesis filter bank with SSB modulation is constrained to be a cosine-modulated version of a common baseband filter. Since a cosine-modulated structure is imposed in the design procedure, the system can reduce the number of parameters required to achieve the best computational efficiency. It can be shown that the resulting cosine-modulated filters are very efficient in terms of computational complexity and are relatively easy to design. Moreover, the design approach can be applied to systems with relatively low reconstruction delays.
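The overlap-add cancellation that the SSB system relies on can be seen in a minimal 1-D sketch: a periodic Hann window at 50% overlap satisfies the constant-overlap-add (COLA) condition, so analysis windowing followed by overlap-add reconstructs the interior of the signal exactly. No subband processing or 2-D separable structure is modeled here; the window length and hop are illustrative.

```python
import math

def hann(n):
    # periodic Hann window: w[k] + w[k + n//2] == 1 (constant overlap-add)
    return [0.5 - 0.5 * math.cos(2 * math.pi * k / n) for k in range(n)]

def ola_identity(x, n=8):
    """Analysis windowing plus overlap-add with no subband processing:
    at 50% overlap the Hann window satisfies COLA, so the interior of
    the signal is reconstructed exactly."""
    hop = n // 2
    w = hann(n)
    y = [0.0] * len(x)
    for start in range(0, len(x) - n + 1, hop):
        frame = [x[start + k] * w[k] for k in range(n)]  # analysis frame
        for k in range(n):                               # synthesis overlap-add
            y[start + k] += frame[k]
    return y

x = [float((i % 5) + 1) for i in range(32)]
y = ola_identity(x)
```

Only the edges (the first and last half-window, which a single frame covers) deviate from the input; every interior sample is recovered exactly.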
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.
Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté
2015-01-01
Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits. PMID:26705334
Efficient computation of the stability of three-dimensional compressible boundary layers
NASA Technical Reports Server (NTRS)
Malik, M. R.; Orszag, S. A.
1981-01-01
Methods for the computer analysis of the stability of three-dimensional compressible boundary layers are discussed and the user-oriented Compressible Stability Analysis (COSAL) computer code is described. The COSAL code uses a matrix finite-difference method for local eigenvalue solution when a good guess for the eigenvalue is available and is significantly more computationally efficient than the commonly used initial-value approach. The local eigenvalue search procedure also results in eigenfunctions and, at little extra work, group velocities. A globally convergent eigenvalue procedure is also developed which may be used when no guess for the eigenvalue is available. The global problem is formulated in such a way that no unstable spurious modes appear so that the method is suitable for use in a black-box stability code. Sample stability calculations are presented for the boundary layer profiles of an LFC swept wing.
An efficient FPGA architecture for integer Nth root computation
NASA Astrophysics Data System (ADS)
Rangel-Valdez, Nelson; Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar; Torres-Jimenez, Jose
2015-10-01
In embedded computing, it is common to find applications such as signal processing, image processing, computer graphics or data compression that might benefit from hardware implementation for the computation of integer roots of order N. However, the scientific literature lacks architectural designs that implement such operations for different values of N using a low amount of resources. This article presents a parameterisable field programmable gate array (FPGA) architecture for an efficient Nth root calculator that uses only adders/subtractors and N location memory elements. The architecture was tested for different values of N, using 64-bit number representation. The results show a consumption of up to 10% of the logical resources of a Xilinx XC6SLX45-CSG324C device, depending on the value of N. The hardware implementation improved the performance of its corresponding software implementations by one order of magnitude. The architecture performance varies from several thousand to seven million root operations per second.
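The arithmetic being accelerated is the integer Nth root: the largest r with r^N <= x. A simple software reference (a binary search on the result, not the paper's adder/subtractor datapath) can be sketched as:

```python
def inth_root(x, n):
    """Largest integer r with r**n <= x (for x >= 0, n >= 1), by binary search."""
    if x < 2:
        return x
    lo = 1
    hi = 1 << (x.bit_length() // n + 1)  # upper bound: hi**n > x by construction
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if mid ** n <= x:
            lo = mid  # mid is a valid root candidate
        else:
            hi = mid  # mid overshoots
    return lo
```

For 64-bit operands the search needs only a few dozen iterations, which loosely mirrors the bit-serial style of a hardware datapath built from adders and comparators.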
A specialized ODE integrator for the efficient computation of parameter sensitivities
2012-01-01
Background Dynamic mathematical models in the form of systems of ordinary differential equations (ODEs) play an important role in systems biology. For any sufficiently complex model, the speed and accuracy of solving the ODEs by numerical integration is critical. This applies especially to systems identification problems where the parameter sensitivities must be integrated alongside the system variables. Although several very good general purpose ODE solvers exist, few of them compute the parameter sensitivities automatically. Results We present a novel integration algorithm that is based on second derivatives and contains other unique features such as improved error estimates. These features allow the integrator to take larger time steps than other methods. In practical applications, i.e. systems biology models of different sizes and behaviors, the method competes well with established integrators in solving the system equations, and it outperforms them significantly when local parameter sensitivities are evaluated. For ease-of-use, the solver is embedded in a framework that automatically generates the integrator input from an SBML description of the system of interest. Conclusions For future applications, comparatively ‘cheap’ parameter sensitivities will enable advances in solving large, otherwise computationally expensive parameter estimation and optimization problems. More generally, we argue that substantially better computational performance can be achieved by exploiting characteristics specific to the problem domain; elements of our methods such as the error estimation could find broader use in other, more general numerical algorithms. PMID:22607742
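The parameter sensitivities referred to here obey the forward sensitivity ODE s' = (df/dy) s + df/dp, integrated alongside the state. A minimal sketch for y' = -p*y, where the sensitivity s = dy/dp has the analytic solution s(t) = -t*y0*exp(-p*t), using a fixed-step classical Runge-Kutta integrator (the paper's second-derivative method and SBML front end are not reproduced here):

```python
def rk4(f, y, t0, t1, h):
    """Fixed-step classical Runge-Kutta for a vector ODE y' = f(t, y)."""
    t = t0
    while t < t1 - 1e-12:
        hs = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + hs / 2, [yi + hs / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + hs / 2, [yi + hs / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + hs, [yi + hs * ki for yi, ki in zip(y, k3)])
        y = [yi + hs / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += hs
    return y

p = 0.7  # illustrative parameter value

def augmented(t, z):
    y, s = z                 # state y and its sensitivity s = dy/dp
    return [-p * y,          # original ODE:    y' = f(y, p) = -p*y
            -p * s - y]      # sensitivity ODE: s' = (df/dy)*s + df/dp

y1, s1 = rk4(augmented, [1.0, 0.0], 0.0, 2.0, 0.01)
# analytic reference: y(2) = exp(-1.4), s(2) = -2*exp(-1.4)
```

Integrating the sensitivities in the same pass is what makes specialized solvers pay off: the augmented system shares the Jacobian work with the state equations.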
Tamam, Cuneyt; Tamam, Muge; Mulazimoglu, Mehmet
2016-01-01
The aim of the current study was to determine the diagnostic accuracy of whole-body fluorine-18-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) in detecting carcinoma of unknown primary (CUP) with bone metastases. We evaluated 87 patients who were referred to FDG-PET/CT imaging and reported to have skeletal lesions with suspicion of malignancy. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were calculated. The median survival rate was measured to evaluate the prognostic value of the FDG-PET/CT findings. In the search for a primary, FDG-PET/CT correctly identified the site of the primary (true positive, TP) in 64 (73%) cases; in 4 (5%) cases no primary site was identified, and none of these was subsequently proven to be true negative (TN); 14 (16%) diagnoses were false positive (FP) and 5 (6%) were false negative (FN). Life expectancy was between 2 months and 25 months. Whole-body FDG-PET/CT imaging may be a useful method in assessing the bone lesions with suspicion of bone metastases. PMID:27134563
NASA Astrophysics Data System (ADS)
Townley, Lloyd R.; Wilson, John L.
1985-12-01
Finite difference and finite element methods are frequently used to study aquifer flow; however, additional analysis is required when model parameters, and hence predicted heads, are uncertain. Computational algorithms are presented for steady and transient models in which aquifer storage coefficients, transmissivities, distributed inputs, and boundary values may all be simultaneously uncertain. Innovative aspects of these algorithms include a new form of generalized boundary condition; a concise discrete derivation of the adjoint problem for transient models with variable time steps; an efficient technique for calculating the approximate second derivative during line searches in weighted least squares estimation; and a new efficient first-order second-moment algorithm for calculating the covariance of predicted heads due to a large number of uncertain parameter values. The techniques are presented in matrix form, and their efficiency depends on the structure of sparse matrices which occur repeatedly throughout the calculations. Details of matrix structures are provided for a two-dimensional linear triangular finite element model.
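The first-order second-moment covariance of predicted heads follows from linearization: Cov(h) ≈ J Cov(p) Jᵀ, where J is the sensitivity (Jacobian) of heads with respect to the uncertain parameters. A toy sketch with hypothetical numbers (the sparse-matrix and adjoint machinery of the paper is not modeled):

```python
def matmul(A, B):
    # dense matrix product using plain lists
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

# Hypothetical sensitivity of two predicted heads w.r.t. two parameters
J = [[0.8, 0.1],
     [0.3, 0.5]]
# Hypothetical parameter covariance (independent parameters)
Cp = [[0.04, 0.0],
      [0.0, 0.09]]

# First-order second-moment propagation: Cov(h) ~= J Cp J^T
Ch = matmul(matmul(J, Cp), transpose(J))
```

The resulting matrix is symmetric by construction, and its diagonal entries give the approximate variances of the predicted heads.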
Xie, Cheng; Gnanasegaran, Gopinath; Mohan, Hosahalli; Livieratos, Lefteris
2013-01-01
Single photon emission computed tomography (SPECT) and computed tomography (CT) integrated in one system (SPECT/CT) is an effective co-registration technique that helps to localize and characterize lesions in the hand and wrist. However, patient motion may cause misalignment between the two modalities leading to potential misdiagnosis. The aim of the present study was to evaluate the hardware-based registration accuracy of multislice SPECT/CT of the hand and wrist and to determine the effect of misalignment errors on diagnostic accuracy. A total of 55 patients who had multislice SPECT/CT of the hand and wrist between July 2008 and January 2010 were included. Two reviewers independently evaluated the fused images for any misalignments with six degrees of freedom: Translation and rotation in the X, Y and Z directions. The results were tested against an automated fusion tool (Syntegra). More than half of the patients had moved during SPECT scanning (Reviewer 1: 29 patients; Reviewer 2: 30 patients) and they all originated in the Y-direction translation (vertical hand motion). Five fused images had significant misalignment errors that could have led to misdiagnosis. The Wilcoxon test indicated statistically non-significant difference (P > 0.05) between reviewers and statistically non-significant difference between the reviewers and software registration. The study also showed high inter-reviewer agreement (κ = 0.87). Hand movement during the SPECT scan was common, but significant misalignments and subsequent misdiagnosis were infrequent. Future studies should investigate the use of hand and wrist immobilization devices and reductions of scan time to minimize patient motion. PMID:25214811
Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds
NASA Technical Reports Server (NTRS)
Jardin, Matthew R.
2004-01-01
A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real time. By traveling minimum-time routes instead of great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts.
NASA Astrophysics Data System (ADS)
Schaefer, Bastian; Goedecker, Stefan
2016-07-01
An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distances of the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure for the energy needed for their interconversion. This can be used to obtain a first qualitative idea on important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to make a decision if it is worthwhile or not to invest computational resources for an exact computation of the transition states and the reaction pathways. Furthermore it is demonstrated that the here presented method can be used for finding physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.
An efficient method for computing high PT elasticity by first principles
NASA Astrophysics Data System (ADS)
Wu, Z.; Wentzcovitch, R. M.
2007-12-01
First principles quasiharmonic (QHA) free energy computations play a very important role in mineral physics because they can predict accurately the structure and thermodynamic properties of materials at pressure and temperature conditions that are still challenging for experiments. They also enable calculations of thermoelastic properties by obtaining the second derivatives of the free energies with respect to Lagrangian strains. However, these are demanding computations requiring 100 to 1000 medium size jobs. Here we introduce and test an approximate method that requires only calculations of static elastic constants, phonon VDOS, and mode Gruneisen parameters for unstrained configurations. This approach is computationally efficient and decreases the computational time by more than one order of magnitude. The human workload is also reduced substantially. We test this approach by computing high PT elasticity of MgO and forsterite. We show one can obtain very good agreement with full first principles results and experimental data. Research supported by NSF/EAR, NSF/ITR (VLab), and MSI (U of MN)
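The thermoelastic quantity at stake is the second strain derivative of the quasiharmonic free energy; in standard notation (a textbook QHA expression, not quoted from the abstract):

```latex
% Quasiharmonic Helmholtz free energy: static lattice energy,
% zero-point energy, and thermal phonon contribution
F(\varepsilon, V, T) = U_{\mathrm{static}}
  + \frac{1}{2}\sum_{q,m}\hbar\omega_{qm}
  + k_B T \sum_{q,m}\ln\!\left[1 - e^{-\hbar\omega_{qm}/k_B T}\right],
\qquad
c_{ijkl}(V, T) = \frac{1}{V}\,
  \frac{\partial^2 F}{\partial\varepsilon_{ij}\,\partial\varepsilon_{kl}}
```

Evaluating the strain derivatives by finite differences is what requires the many strained-configuration phonon calculations that the approximate method avoids.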
NASA Technical Reports Server (NTRS)
Seltzer, S. M.
1974-01-01
Some means of combining both computer simulation and analytical techniques are indicated in order to mutually enhance their efficiency as design tools and to motivate those involved in engineering design to consider using such combinations. While the idea is not new, heavy reliance on computers often seems to overshadow the potential utility of analytical tools. Although the example used is drawn from the area of dynamics and control, the principles espoused are applicable to other fields. In the example the parameter plane stability analysis technique is described briefly and extended beyond that reported in the literature to increase its utility (through a simple set of recursive formulas) and its applicability (through the portrayal of the effect of varying the sampling period of the computer). The numerical values that were rapidly selected by analysis were found to be correct for the hybrid computer simulation for which they were needed. This obviated the need for cut-and-try methods to choose the numerical values, thereby saving both time and computer utilization.
An Efficient Computational Approach for the Calculation of the Vibrational Density of States.
Aieta, Chiara; Gabas, Fabio; Ceotto, Michele
2016-07-14
We present an optimized approach for the calculation of the density of fully coupled vibrational states in high-dimensional systems. This task is of paramount importance, because partition functions and several thermodynamic properties can be accurately estimated once the density of states is known. A new code, called paradensum, based on the implementation of the Wang-Landau Monte Carlo algorithm for parallel architectures is described and applied to real complex systems. We test the accuracy of paradensum on several molecular systems, including some benchmarks for which an exact evaluation of the vibrational density of states is doable by direct counting. In addition, we find a significant computational speedup with respect to standard approaches when applying our code to molecules up to 66 degrees of freedom. The new code can easily handle 150 degrees of freedom. These features make paradensum a very promising tool for future calculations of thermodynamic properties and thermal rate constants of complex systems. PMID:26840098
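The underlying Wang-Landau idea can be sketched for a toy system of ten independent two-level modes, whose exact density of states is binomial, g(E) = C(10, E). For brevity the histogram flatness criterion is replaced by a fixed halving schedule, so this is a simplified serial sketch, not the parallel paradensum code:

```python
import math
import random

random.seed(1)

N = 10                 # ten independent two-level modes; E = number excited
state = [0] * N
E = 0
lng = [0.0] * (N + 1)  # running estimate of ln g(E), up to an additive constant
f = 1.0                # ln-scale modification factor

while f > 1e-5:
    for _ in range(10000):
        i = random.randrange(N)
        Enew = E + (1 - 2 * state[i])  # flipping one mode changes E by +/-1
        # Wang-Landau acceptance: min(1, g(E) / g(Enew))
        if math.log(random.random()) < lng[E] - lng[Enew]:
            state[i] ^= 1
            E = Enew
        lng[E] += f                    # penalize the visited energy level
    f /= 2.0                           # fixed schedule (flatness check omitted)
```

Because ln g is only determined up to a constant, accuracy is judged on differences such as ln g(5) - ln g(0), whose exact value is ln C(10, 5) = ln 252.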
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
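The simulated-annealing variant can be sketched on a toy instance: assign modules to processors so as to minimize the processor finishing time (makespan) plus interprocessor communication, the same objective structure as in the abstract. All run times, communication costs, and annealing parameters below are hypothetical.

```python
import math
import random

random.seed(2)

# Hypothetical instance: module run times and symmetric communication costs
times = [3, 1, 4, 1, 5, 9, 2, 6]
comm = [[0] * 8 for _ in range(8)]
for a, b, c in [(0, 1, 2), (1, 2, 3), (2, 3, 1), (4, 5, 4), (5, 6, 2), (6, 7, 3)]:
    comm[a][b] = comm[b][a] = c
P = 2  # number of identical processors

def cost(assign):
    finish = [0] * P
    for m, p in enumerate(assign):
        finish[p] += times[m]
    total = max(finish)                    # processor finishing time (makespan)
    for a in range(len(assign)):
        for b in range(a + 1, len(assign)):
            if assign[a] != assign[b]:
                total += comm[a][b]        # interprocessor communication time
    return total

def anneal(steps=5000, T=5.0, alpha=0.999):
    assign = [random.randrange(P) for _ in times]
    c = cost(assign)
    best, best_c = assign[:], c
    for _ in range(steps):
        m = random.randrange(len(times))
        old = assign[m]
        assign[m] = 1 - old                # move one module to the other processor
        new_c = cost(assign)
        if new_c <= c or random.random() < math.exp((c - new_c) / T):
            c = new_c                      # accept (always downhill, sometimes uphill)
            if c < best_c:
                best, best_c = assign[:], c
        else:
            assign[m] = old                # reject and restore
        T *= alpha                         # geometric cooling
    return best, best_c

best, best_c = anneal()
```

The uphill acceptance probability exp(-Δ/T) is what lets the search escape poor local partitions early on, while the cooling schedule gradually freezes it into a near-optimal mapping.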
Park, Won Young; Phadke, Amol; Shah, Nihar
2012-06-29
Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today’s technology. We evaluate the cost effectiveness of a key technology which further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to deeply reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture global energy saving potential from PC monitors which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.
Lunnoo, Thodsaphon; Puangmali, Theerapong
2015-12-01
The primary limitation of magnetic drug targeting (MDT) relates to the strength of an external magnetic field which decreases with increasing distance. Small nanoparticles (NPs) displaying superparamagnetic behaviour are also required in order to reduce embolization in the blood vessel. The small NPs, however, make it difficult to vector NPs and keep them in the desired location. The aims of this work were to investigate parameters influencing the capture efficiency of the drug carriers in mimicked arterial flow. In this work, we computationally modelled and evaluated capture efficiency in MDT with COMSOL Multiphysics 4.4. The studied parameters were (i) magnetic nanoparticle size, (ii) three classes of magnetic cores (Fe3O4, Fe2O3, and Fe), and (iii) the thickness of biocompatible coating materials (Au, SiO2, and PEG). It was found that the capture efficiency of small particles decreased with decreasing size and was less than 5 % for magnetic particles in the superparamagnetic regime. The thickness of non-magnetic coating materials did not significantly influence the capture efficiency of MDT. It was difficult to capture small drug carriers (D<200 nm) in the arterial flow. We suggest that the MDT with high-capture efficiency can be obtained in small vessels and low-blood velocities such as micro-capillary vessels. PMID:26515074
Pyakuryal, Anil; Myint, W. Kenji; Gopalakrishnan, Mahesh; Jang, Sunyoung; Logemann, Jerilyn A.; Mittal, Bharat B.
2010-01-01
A Histogram Analysis in Radiation Therapy (HART) program was primarily developed to increase the efficiency and accuracy of dose–volume histogram (DVH) analysis of large quantities of patient data in radiation therapy research. The program was written in MATLAB to analyze patient plans exported from the treatment planning system (Pinnacle3) in the American Association of Physicists in Medicine/Radiation Therapy Oncology Group (AAPM/RTOG) format. HART-computed DVH data was validated against manually extracted data from the planning system for five head and neck cancer patients treated with the intensity-modulated radiation therapy (IMRT) technique. HART calculated over 4000 parameters from the differential DVH (dDVH) curves for each patient in approximately 10–15 minutes. Manual extraction of this amount of data required 5 to 6 hours. The normalized root mean square deviation (NRMSD) for the HART–extracted DVH outcomes was less than 1%, or within 0.5% distance-to-agreement (DTA). This tool is supported with various user-friendly options and graphical displays. Additional features include optimal polynomial modeling of DVH curves for organs, treatment plan indices (TPI) evaluation, plan-specific outcome analysis (POA), and spatial DVH (zDVH) and dose surface histogram (DSH) analyses, respectively. HART is freely available to the radiation oncology community. PMID:20160690
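A cumulative DVH itself is simply the fraction of voxels receiving at least each dose level, obtained by integrating the differential DVH from the highest bin down. A minimal sketch (not the HART implementation; the bin width and dose values are illustrative):

```python
def cumulative_dvh(doses, bin_width=0.5):
    """Cumulative DVH: fraction of the volume receiving at least each dose level."""
    n = len(doses)
    nbins = int(max(doses) / bin_width) + 2
    diff = [0] * nbins                 # differential DVH (voxel counts per bin)
    for d in doses:
        diff[int(d / bin_width)] += 1
    cum = []
    running = 0
    for count in reversed(diff):       # integrate from the highest dose down
        running += count
        cum.append(running / n)
    cum.reverse()
    levels = [i * bin_width for i in range(nbins)]
    return levels, cum

# four voxels with doses 1, 1, 2 and 3 (arbitrary units), 1-unit bins
levels, cum = cumulative_dvh([1.0, 1.0, 2.0, 3.0], bin_width=1.0)
```

With these inputs the whole volume receives at least 1 unit, half receives at least 2, and a quarter receives at least 3, which is exactly what the cumulative curve reports.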
Sillanpaa, Jussi; Chang Jenghwa; Mageras, Gikas; Yorke, Ellen; Arruda, Fernando De; Rosenzweig, Kenneth E.; Munro, Peter; Seppi, Edward; Pavkovich, John; Amols, Howard
2006-09-15
We report on the capabilities of a low-dose megavoltage cone-beam computed tomography (MV CBCT) system. The high-efficiency image receptor consists of a photodiode array coupled to a scintillator composed of individual CsI crystals. The CBCT system uses the 6 MV beam from a linear accelerator. A synchronization circuit allows us to limit the exposure to one beam pulse [0.028 monitor units (MU)] per projection image. 150-500 images (4.2-13.9 MU total) are collected during a one-minute scan and reconstructed using a filtered backprojection algorithm. Anthropomorphic and contrast phantoms are imaged and the contrast-to-noise ratio of the reconstruction is studied as a function of the number of projections and the error in the projection angles. The detector dose response is linear (R^2 value 0.9989). A 2% electron density difference is discernible using 460 projection images and a total exposure of 13 MU (corresponding to a maximum absorbed dose of about 12 cGy in a patient). We present first patient images acquired with this system. Tumors in lung are clearly visible and skeletal anatomy is observed in sufficient detail to allow reproducible registration with the planning kV CT images. The MV CBCT system is shown to be capable of obtaining good quality three-dimensional reconstructions at relatively low dose and to be clinically usable for improving the accuracy of radiotherapy patient positioning.
Anthony, T. Renée
2013-01-01
Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air’s upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s−1 and breathing velocities at 1.81 and 12.11 m s−1 to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models. PMID:23006817
Kossert, K; Cassette, Ph; Carles, A Grau; Jörg, G; Gostomski, Christoph Lierse V; Nähle, O; Wolf, Ch
2014-05-01
The triple-to-double coincidence ratio (TDCR) method is frequently used to measure the activity of radionuclides decaying by pure β emission or electron capture (EC). Some radionuclides with more complex decays have also been studied, but accurate calculations of decay branches which are accompanied by many coincident γ transitions have not yet been investigated. This paper describes recent extensions of the model to make efficiency computations for more complex decay schemes possible. In particular, the MICELLE2 program that applies a stochastic approach of the free parameter model was extended. With an improved code, efficiencies for β(-), β(+) and EC branches with up to seven coincident γ transitions can be calculated. Moreover, a new parametrization for the computation of electron stopping powers has been implemented to compute the ionization quenching function of 10 commercial scintillation cocktails. In order to demonstrate the capabilities of the TDCR method, the following radionuclides are discussed: (166m)Ho (complex β(-)/γ), (59)Fe (complex β(-)/γ), (64)Cu (β(-), β(+), EC and EC/γ) and (229)Th in equilibrium with its progenies (decay chain with many α, β and complex β(-)/γ transitions).
Hinnen, Deborah A; Buskirk, Ann; Lyden, Maureen; Amstutz, Linda; Hunter, Tracy; Parkin, Christopher G; Wagner, Robin
2015-03-01
We assessed users' proficiency and efficiency in identifying and interpreting self-monitored blood glucose (SMBG), insulin, and carbohydrate intake data using data management software reports compared with standard logbooks. This prospective, self-controlled, randomized study enrolled insulin-treated patients with diabetes (PWDs; continuous subcutaneous insulin infusion [CSII] and multiple daily insulin injection [MDI] therapy), patient caregivers (CGVs), and health care providers (HCPs) who were naïve to diabetes data management computer software. Six paired clinical cases (3 CSII, 3 MDI) and associated multiple-choice questions/answers were reviewed by diabetes specialists and presented to participants via a web portal in both software report (SR) and traditional logbook (TL) formats. Participant response time and accuracy were documented and assessed. Participants completed a preference questionnaire at study completion. All participants (54 PWDs, 24 CGVs, 33 HCPs) completed the cases. Participants achieved greater accuracy (assessed by percentage of accurate answers) using the SR versus TL formats: PWDs, 80.3 (13.2)% versus 63.7 (15.0)%, P < .0001; CGVs, 84.6 (8.9)% versus 63.6 (14.4)%, P < .0001; HCPs, 89.5 (8.0)% versus 66.4 (12.3)%, P < .0001. Participants spent less time (minutes) with each case using the SR versus TL formats: PWDs, 8.6 (4.3) versus 19.9 (12.2), P < .0001; CGVs, 7.0 (3.5) versus 15.5 (11.8), P = .0005; HCPs, 6.7 (2.9) versus 16.0 (12.0), P < .0001. The majority of participants preferred using the software reports versus logbook data. Use of the Accu-Chek Connect Online software reports enabled PWDs, CGVs, and HCPs, naïve to diabetes data management software, to identify and utilize key diabetes information with significantly greater accuracy and efficiency compared with traditional logbook information. Use of SRs was preferred over logbooks.
Wu, Chao-Chin; Lai, Lien-Fu; Gromiha, M Michael; Huang, Liang-Tsung
2014-01-01
Predicting protein stability change upon mutation is important for protein design. Although several methods have been proposed to improve prediction accuracy, it is difficult to employ them when the required input information is incomplete. In this work, we integrated a fuzzy query model based on the knowledge-based approach to overcome this problem, and then we proposed a high throughput computing method based on parallel technologies in emerging cluster or grid systems to discriminate stability change. To improve the load balance of heterogeneous computing power in cluster and grid nodes, a variety of self-scheduling schemes have been implemented. Further, we have tested the method by performing different analyses and the results showed that the present method can process hundreds of prediction queries in a reasonable response time and achieve a superlinear speedup of up to 86.2 times. We have also established a website tool to implement the proposed method and it is available at http://bioinformatics.myweb.hinet.net/para.htm.
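The self-scheduling idea in the abstract, with idle workers grabbing chunks of prediction queries, can be sketched generically. This is an illustrative sketch only: the function names and chunk-sizing rules are assumptions, not the paper's implementation.

```python
# Two common loop self-scheduling schemes for balancing work across
# heterogeneous nodes: fixed-size chunking and guided self-scheduling,
# where each grab shrinks as the work pool runs out.

def fixed_chunks(total, chunk):
    """Fixed-size self-scheduling: each idle worker grabs `chunk` queries."""
    out, start = [], 0
    while start < total:
        size = min(chunk, total - start)
        out.append((start, start + size))
        start += size
    return out

def guided_chunks(total, workers):
    """Guided self-scheduling: each grab is remaining/workers, so early
    grabs are large and late grabs are small, smoothing out node-speed
    differences near the end of the loop."""
    out, remaining, start = [], total, 0
    while remaining > 0:
        size = max(1, remaining // workers)
        out.append((start, start + size))
        start += size
        remaining -= size
    return out
```

With 10 queries and 4 workers, guided scheduling hands out chunks of sizes 2, 2, 1, 1, ..., so a slow node holding one of the final chunks delays overall completion the least.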
A stitch in time: Efficient computation of genomic DNA melting bubbles
Tøstesen, Eivind
2008-01-01
Background It is of biological interest to make genome-wide predictions of the locations of DNA melting bubbles using statistical mechanics models. Computationally, this poses the challenge that a generic search through all combinations of bubble starts and ends is quadratic. Results An efficient algorithm is described, which shows that the time complexity of the task is O(NlogN) rather than quadratic. The algorithm exploits the fact that bubble lengths may be limited, but without any prior assumption of a maximal bubble length. No approximations, such as windowing, have been introduced to reduce the time complexity. More than just finding the bubbles, the algorithm produces a stitch profile, which is a probabilistic graphical model of bubbles and helical regions. The algorithm applies a probability peak finding method based on a hierarchical analysis of the energy barriers in the Poland-Scheraga model. Conclusion Exact and fast computation of genomic stitch profiles is thus feasible. Sequences of several megabases have been computed, only limited by computer memory. Possible applications are the genome-wide comparisons of bubbles with promoters, TSS, viral integration sites, and other melting-related regions. PMID:18637171
NASA Astrophysics Data System (ADS)
Meyer, Daniel W.; Jenny, Patrick
2013-08-01
Different simulation methods are applicable to study turbulent mixing. When applying probability density function (PDF) methods, turbulent transport and chemical reactions appear in closed form, which is not the case in second moment closure methods (RANS). Moreover, PDF methods provide the entire joint velocity-scalar PDF instead of a limited set of moments. In PDF methods, however, a mixing model is required to account for molecular diffusion. In joint velocity-scalar PDF methods, mixing models should also account for the joint velocity-scalar statistics, which is often underappreciated in applications. The interaction by exchange with the conditional mean (IECM) model accounts for these joint statistics, but requires velocity-conditional scalar means that are expensive to compute in spatially three-dimensional settings. In this work, two alternative mixing models are presented that provide more accurate PDF predictions at reduced computational cost compared to the IECM model, since no conditional moments have to be computed. All models are tested for different mixing benchmark cases and their computational efficiencies are inspected thoroughly. The benchmark cases involve statistically homogeneous and inhomogeneous settings dealing with three streams that are characterized by two passive scalars. The inhomogeneous case clearly illustrates the importance of accounting for joint velocity-scalar statistics in the mixing model. Failure to do so leads to significant errors in the resulting scalar means, variances, and other statistics.
NASA Astrophysics Data System (ADS)
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
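The SBM's core computation, the frozen fraction of a particle population whose nucleation-site contact angles are Gaussian-distributed, can be sketched in both flavors the abstract mentions: the original Monte Carlo sampling and a deterministic average that avoids per-particle sampling. All rates, scales, and parameter values below are illustrative assumptions, not the model's published parameterization.

```python
import math
import random

def site_rate(theta, j0=1.0, b=10.0):
    # CNT-style nucleation rate for one site with contact angle theta: the
    # geometric factor f(theta) raises the barrier as theta grows, so small
    # contact angles nucleate fastest. j0 and b are made-up scales.
    f = (2 + math.cos(theta)) * (1 - math.cos(theta)) ** 2 / 4.0
    return j0 * math.exp(-b * f)

def frozen_fraction_mc(n_particles, n_sites, mu, sigma, t, seed=0):
    """Original-SBM-style Monte Carlo: sample one Gaussian contact angle per
    nucleation site and test each particle for freezing within time t."""
    rng = random.Random(seed)
    frozen = 0
    for _ in range(n_particles):
        p_liquid = 1.0
        for _ in range(n_sites):
            theta = min(max(rng.gauss(mu, sigma), 0.0), math.pi)
            p_liquid *= math.exp(-site_rate(theta) * t)
        if rng.random() > p_liquid:
            frozen += 1
    return frozen / n_particles

def frozen_fraction_fast(n_sites, mu, sigma, t, n_quad=2000):
    """Simplified-SBM-style deterministic version: average the per-site
    survival probability over the (truncated) Gaussian once, then raise it
    to the number of sites. No random sampling is needed."""
    total_w = total = 0.0
    for i in range(n_quad):
        theta = (i + 0.5) * math.pi / n_quad
        w = math.exp(-0.5 * ((theta - mu) / sigma) ** 2)
        total_w += w
        total += w * math.exp(-site_rate(theta) * t)
    return 1.0 - (total / total_w) ** n_sites
```

Both estimates agree to within Monte Carlo noise, but the deterministic version needs no per-particle sampling, which is the kind of cost reduction that makes such a model usable inside cloud parcel simulations.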
NASA Astrophysics Data System (ADS)
Álvarez, Gabriel; Martínez Alonso, Luis; Medina, Elena
2011-07-01
We present a method to compute the genus expansion of the free energy of Hermitian matrix models from the large N expansion of the recurrence coefficients of the associated family of orthogonal polynomials. The method is based on the Bleher-Its deformation of the model, on its associated integral representation of the free energy, and on a method for solving the string equation which uses the resolvent of the Lax operator of the underlying Toda hierarchy. As a byproduct we obtain an efficient algorithm to compute generating functions for the enumeration of labeled k-maps which does not require the explicit expressions of the coefficients of the topological expansion. Finally we discuss the regularization of singular one-cut models within this approach.
NASA Technical Reports Server (NTRS)
Almroth, B. O.; Stehlin, P.; Brogan, F. A.
1981-01-01
A method for improving the efficiency of nonlinear structural analysis by the use of global displacement functions is presented. The computer programs include options to define the global functions as input or let the program automatically select and update these functions. The program was applied to a number of structures: (1) 'pear-shaped cylinder' in compression, (2) bending of a long cylinder, (3) spherical shell subjected to point force, (4) panel with initial imperfections, (5) cylinder with cutouts. The sample cases indicate the usefulness of the procedure in the solution of nonlinear structural shell problems by the finite element method. It is concluded that the use of global functions for extrapolation will lead to savings in computer time.
A more efficient formulation for computation of the maximum loading points in electric power systems
Chiang, H.D.; Jean-Jumeau, R.
1995-05-01
This paper presents a more efficient formulation for computation of the maximum loading points. A distinguishing feature of the new formulation is that it is of dimension (n + 1), instead of the existing formulation of dimension (2n + 1), for n-dimensional load flow equations. This feature makes computation of the maximum loading points very inexpensive in comparison with those required in the existing formulation. A theoretical basis for the new formulation is provided. The new problem formulation is derived by using a simple reparameterization scheme and exploiting the special properties of the power flow model. Moreover, the proposed test function is shown to be monotonic in the vicinity of a maximum loading point. Therefore, it allows one to monitor the approach to maximum loading points during the solution search process. Simulation results on a 234-bus system are presented.
Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach
NASA Astrophysics Data System (ADS)
Parent, Bernard; Macheret, Sergey O.; Shneider, Mikhail N.
2015-11-01
Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff and hence requires fewer iterations to reach convergence but it yields a converged solution that exhibits a significantly higher resolution. The combined faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems including non-neutral cathode and anode sheaths as well as quasi-neutral regions.
An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates
Khan, Usman; Falconi, Christian
2014-01-01
Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated to the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
Hierarchy of Efficiently Computable and Faithful Lower Bounds to Quantum Discord.
Piani, Marco
2016-08-19
Quantum discord expresses a fundamental nonclassicality of correlations that is more general than entanglement, but that, in its standard definition, is not easily evaluated. We derive a hierarchy of computationally efficient lower bounds to the standard quantum discord. Every nontrivial element of the hierarchy constitutes by itself a valid discordlike measure, based on a fundamental feature of quantum correlations: their lack of shareability. Our approach emphasizes how the difference between entanglement and discord depends on whether shareability is intended as a static property or as a dynamical process. PMID:27588837
Lee, Hua
2016-04-01
The main focus of this paper is the design and formulation of a computationally efficient approach to the estimation of the angle of arrival with non-uniform reconfigurable receiver arrays. Subsequent to demodulation and matched filtering, the main signal processing task is a double-integration operation. The simplicity of this algorithm enables the implementation of the estimation procedure with simple operational amplifier (op-amp) circuits for real-time realization. This technique does not require uniform and structured array configurations, and is most effective for the estimation of angle of arrival with dynamically reconfigurable receiver arrays.
NASA Astrophysics Data System (ADS)
Allphin, Devin
Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative
Ivanov, Mikhail V; Babikov, Dmitri
2012-05-14
An efficient method is proposed for computing the thermal rate constant of a recombination reaction that proceeds according to the energy transfer mechanism, when an energized molecule is formed from reactants first, and is stabilized later by collision with a quencher. The mixed quantum-classical theory for the collisional energy transfer and the ro-vibrational energy flow [M. Ivanov and D. Babikov, J. Chem. Phys. 134, 144107 (2011)] is employed to treat the dynamics of the molecule + quencher collision. Efficiency is achieved by sampling simultaneously (i) the thermal collision energy, (ii) the impact parameter, and (iii) the incident direction of the quencher, as well as (iv) the rotational state of the energized molecule. This approach is applied to calculate the third-order rate constant of the recombination reaction that forms the (16)O(18)O(16)O isotopomer of ozone. A comparison of the predicted rate with the experimental result is presented.
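The simultaneous sampling of collision variables described above can be sketched generically: flux-weighted thermal collision energy, impact parameter with probability proportional to b, and an isotropic incident direction. All scales (b_max, kT, the rotational-state range) are placeholder assumptions, not the paper's values.

```python
import math
import random

def sample_collision(rng, b_max=10.0, k_T=1.0, j_max=50):
    """Draw one (energy, impact parameter, incident direction, rotational
    state) tuple for averaging a recombination rate over collisions."""
    # Flux-weighted thermal collision energy, P(E) ~ E exp(-E/kT): the sum
    # of two exponential variates is Gamma(2)-distributed with mean 2kT.
    energy = -k_T * math.log((rng.random() * rng.random()) or 1e-300)
    # Impact parameter with P(b) ~ b on [0, b_max] (area weighting).
    b = b_max * math.sqrt(rng.random())
    # Isotropic incident direction: cosine uniform on [-1, 1].
    cos_incident = 2.0 * rng.random() - 1.0
    # Rotational state of the energized molecule; a real calculation would
    # weight this by the thermal rotational distribution, uniform here.
    j = rng.randrange(0, j_max)
    return energy, b, cos_incident, j
```

Each sampled tuple would seed one mixed quantum-classical trajectory; sampling all four variables at once is what keeps the number of trajectories, and hence the cost, down.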
Lin, X; Chen, T; Liu, J; Jiang, T; Yu, D; Shen, S G F
2015-01-01
We investigated the accuracy of point-based superimposition of a digital dental model on to a 3-dimensional computed tomographic (CT) skull with intact dentition. The physical model was scanned by CT to give a virtual skull model, and a plaster dental model was taken and laser-scanned to give a digital dental model. Three investigators from different backgrounds were recruited and calibrated to make the point-based superimposition, and afterwards each was asked to repeat 5 superimpositions. Five bone-to-tooth measurements for the maxilla and 6 for the mandible were selected to indicate the relation of teeth to skull. Repeated measures were made on the physical model to act as a control group, and on the virtual model to act as the test group. The absolute agreement intra-class correlation coefficient (ICC) was used to assess the intra/inter-investigator reliability; Bland-Altman analysis was used to calculate the general differences, limits of agreement, and precision ranges of the estimated limits. Inter/intra-investigator reliability was excellent with ICC varying from 0.986 to 1; Bland-Altman analysis indicated that the general difference was 0.01 (0.25) mm, the upper limit of agreement was 0.50 mm and the lower limit -0.47 mm, and the precision range for the upper limit was 0.43 mm to 0.57 mm and for the lower limit -0.54 mm to -0.40 mm. Clinically acceptable accuracy can be achieved using a direct point-based method to superimpose a digital dental model on to a 3-dimensional CT skull.
Wang, Gang; Wu, Yifen; Zhang, Zhentao; Zheng, Xiaolin; Zhang, Yulan; Liang, Manqiu; Yuan, Huanchu; Shen, Haiping; Li, Dewei
2016-01-01
The aim of the present study was to investigate the effect of heart rate (HR) on the diagnostic accuracy of 256-slice computed tomography angiography (CTA) in the detection of coronary artery stenosis. Coronary imaging was performed using a Philips 256-slice spiral CT, and receiver operating characteristic (ROC) curve analysis was conducted to evaluate the diagnostic value of 256-slice CTA in coronary artery stenosis. The HR of the research subjects in the study was within a certain range (39–107 bpm). One hundred patients suspected of coronary heart disease underwent 256-slice CTA examination. The cases were divided into three groups: Low HR (HR <75 bpm), moderate HR (75≤ HR <90 bpm) and high HR (HR ≥90 bpm). For the three groups, two observers independently assessed the image quality for all coronary segments on a four-point ordinal scale. An image quality of grades 1–3 was considered diagnostic, while grade 4 was non-diagnostic. A total of 97.76% of the images were diagnostic in the low-HR group, 96.86% in the moderate-HR group and 95.80% in the high-HR group. According to the ROC curve analysis, the specificity of CTA in diagnosing coronary artery stenosis was 98.40, 96.00 and 97.60% in the low-, moderate- and high-HR groups, respectively. In conclusion, 256-slice coronary CTA can be used to clearly show the main segments of the coronary artery and to effectively diagnose coronary artery stenosis. Within the range of HRs investigated, HR was found to have no significant effect on the diagnostic accuracy of 256-slice coronary CTA for coronary artery stenosis. PMID:27168831
Estepp, Justin R.; Christensen, James C.
2015-01-01
The passive brain-computer interface (pBCI) framework has been shown to be a very promising construct for assessing cognitive and affective state in both individuals and teams. There is a growing body of work that focuses on solving the challenges of transitioning pBCI systems from the research laboratory environment to practical, everyday use. An interesting issue is what impact methodological variability may have on the ability to reliably identify (neuro)physiological patterns that are useful for state assessment. This work aimed at quantifying the effects of methodological variability in a pBCI design for detecting changes in cognitive workload. Specific focus was directed toward the effects of replacing electrodes over dual sessions (thus inducing changes in placement, electromechanical properties, and/or impedance between the electrode and skin surface) on the accuracy of several machine learning approaches in a binary classification problem. In investigating these methodological variables, it was determined that the removal and replacement of the electrode suite between sessions does not impact the accuracy of a number of learning approaches when trained on one session and tested on a second. This finding was confirmed by comparing to a control group for which the electrode suite was not replaced between sessions. This result suggests that sensors (both neurological and peripheral) may be removed and replaced over the course of many interactions with a pBCI system without affecting its performance. Future work on multi-session and multi-day pBCI system use should seek to replicate this (lack of) effect between sessions in other tasks, temporal time courses, and data analytic approaches while also focusing on non-stationarity and variable classification performance due to intrinsic factors. PMID:25805963
Shahidi, Shoaleh; Zadeh, Nahal Kazerooni; Sharafeddin, Farahnaz; Shahab, Shahriar; Bahrampour, Ehsan; Hamedani, Shahram
2015-01-01
Background: This study aimed to compare the diagnostic accuracy and feasibility of cone beam computed tomography (CBCT) with phosphor storage plate (PSP) in detection of simulated occlusal secondary caries. Materials and Methods: In this in vitro descriptive-comparative study, a total of 80 slots of class I cavities were prepared on 80 extracted human premolars. Then, 40 teeth were randomly selected out of this sample and artificial carious lesions were created on these teeth by a round diamond bur no. 1/2. All 80 teeth were restored with amalgam fillings and radiographs were taken, both with the PSP system and CBCT. All images were evaluated by three calibrated observers. The area under the receiver operating characteristic curve was used to compare the diagnostic accuracy of the two systems. SPSS (SPSS Inc., Chicago, IL, USA) was adopted for statistical analysis. The differences between the Az values of the bitewing and CBCT methods were compared by the pairwise comparison method. The inter- and intra-operator agreement was assessed by kappa analysis (P < 0.05). Results: The mean Az value for bitewings and CBCT was 0.903 and 0.994, respectively. Significant differences were found between PSP and CBCT (P = 0.010). The kappa value for inter-observer agreement was 0.68 and 0.76 for PSP and CBCT, respectively. The kappa value for intra-observer agreement was 0.698 (observer 1, P = 0.000), 0.766 (observer 2, P = 0.000) and 0.716 (observer 3, P = 0.000) in the PSP method, and 0.816 (observer 1, P = 0.000), 0.653 (observer 2, P = 0.000) and 0.744 (observer 3, P = 0.000) in the CBCT method. Conclusion: This in vitro study, with a limited number of samples, showed that the New Tom VGI Flex CBCT system was more accurate than the PSP in detecting the simulated small secondary occlusal caries under amalgam restoration. PMID:25878682
NASA Astrophysics Data System (ADS)
Müller-Putz, G. R.; Daly, I.; Kaiser, V.
2014-06-01
Objective. Assimilating the diagnosis complete spinal cord injury (SCI) takes time and is not easy, as patients know that there is no 'cure' at the present time. Brain-computer interfaces (BCIs) can facilitate daily living. However, inter-subject variability demands measurements with potential user groups and an understanding of how they differ from the healthy users with whom BCIs are more commonly tested. Thus, a three-class motor imagery (MI) screening (left hand, right hand, feet) was performed with a group of 10 able-bodied and 16 complete spinal-cord-injured people (paraplegics, tetraplegics) with the objective of determining what differences were present between the user groups and how they would impact upon the ability of these user groups to interact with a BCI. Approach. Electrophysiological differences between patient groups and healthy users are measured in terms of sensorimotor rhythm deflections from baseline during MI, electroencephalogram microstate scalp maps and strengths of inter-channel phase synchronization. Additionally, using a common spatial pattern algorithm and a linear discriminant analysis classifier, the classification accuracy was calculated and compared between groups. Main results. It is seen that both patient groups (tetraplegic and paraplegic) have some significant differences in event-related desynchronization strengths, exhibit significant increases in synchronization and reach significantly lower accuracies (mean (M) = 66.1%) than the group of healthy subjects (M = 85.1%). Significance. The results demonstrate significant differences in electrophysiological correlates of motor control between healthy individuals and those individuals who stand to benefit most from BCI technology (individuals with SCI). They highlight the difficulty in directly translating results from healthy subjects to participants with SCI and the challenges that, therefore, arise in providing BCIs to such individuals.
Vela, Sergi; Fumanal, Maria; Ribas-Arino, Jordi; Robert, Vincent
2015-07-01
The DFT + U methodology is regarded as one of the most promising strategies to treat the solid state of molecular materials, as it may provide good energetic accuracy at a moderate computational cost. However, a careful parametrization of the U-term is mandatory since the results may be dramatically affected by the selected value. Herein, we benchmarked the Hubbard-like U-term for seven Fe(II)N6-based pseudo-octahedral spin crossover (SCO) compounds, using as a reference an estimation of the electronic enthalpy difference (ΔHelec) extracted from experimental data (T1/2, ΔS and ΔH). The parametrized U-value obtained for each of those seven compounds ranges from 2.37 eV to 2.97 eV, with an average value of U = 2.65 eV. Interestingly, we have found that this average value can be taken as a good starting point since it leads to an unprecedented mean absolute error (MAE) of only 4.3 kJ mol(-1) in the evaluation of ΔHelec for the studied compounds. Moreover, by comparing our results on the solid state and the gas phase of the materials, we quantify the influence of the intermolecular interactions on the relative stability of the HS and LS states, with an average effect of ca. 5 kJ mol(-1), whose sign cannot be generalized. Overall, the findings reported in this manuscript pave the way for future studies devoted to understand the crystalline phase of SCO compounds, or the adsorption of individual molecules on organic or metallic surfaces, in which the rational incorporation of the U-term within DFT + U yields the required energetic accuracy that is dramatically missing when using bare-DFT functionals.
A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A
2016-01-01
This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
An efficient algorithm to compute row and column counts for sparse Cholesky factorization
Gilbert, J.R.; Ng, E.G.; Peyton, B.W.
1992-09-01
Let an undirected graph G be given, along with a specified depth- first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is usually much larger. We give experimental results demonstrating the practical efficiency of the new algorithms.
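The two counts defined in the abstract can be stated directly in code. The naive version below is quadratic in the worst case (the point of the paper is achieving nearly linear time), and the inclusive reading of "descendant" (w counting as its own descendant) as well as the small example graph are assumptions of this sketch.

```python
# Direct computation of the two quantities, on a small example graph
# with a given spanning tree (children and parent maps).

def descendants(tree_children, v):
    """All proper descendants of v in the spanning tree."""
    out, stack = set(), [v]
    while stack:
        for c in tree_children.get(stack.pop(), []):
            out.add(c)
            stack.append(c)
    return out

def count_special_descendants(adj, tree_children, v):
    """Descendants w of v such that some (inclusive) descendant of w
    is adjacent to v in G."""
    cnt = 0
    for w in descendants(tree_children, v):
        if ({w} | descendants(tree_children, w)) & adj[v]:
            cnt += 1
    return cnt

def count_special_ancestors(adj, parent, tree_children, v):
    """Ancestors of v adjacent (in G) to at least one (inclusive)
    descendant of v."""
    sub = {v} | descendants(tree_children, v)
    cnt, a = 0, parent.get(v)
    while a is not None:
        if adj[a] & sub:
            cnt += 1
        a = parent.get(a)
    return cnt
```

Applied to the adjacency structure of a sparse matrix, counts of this kind predict the number of nonzeros in each row and column of the triangular factor before any numerical factorization is done, so storage can be allocated exactly.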
Silver, Nathaniel W; King, Bracken M; Nalam, Madhavi N L; Cao, Hong; Ali, Akbar; Kiran Kumar Reddy, G S; Rana, Tariq M; Schiffer, Celia A; Tidor, Bruce
2013-11-12
Here we present a novel, end-point method using the dead-end-elimination and A* algorithms to efficiently and accurately calculate the change in free energy, enthalpy, and configurational entropy of binding for ligand-receptor association reactions. We apply the new approach to the binding of a series of human immunodeficiency virus (HIV-1) protease inhibitors to examine the effect ensemble reranking has on relative accuracy as well as to evaluate the role of the absolute and relative ligand configurational entropy losses upon binding in affinity differences for structurally related inhibitors. Our results suggest that most thermodynamic parameters can be estimated using only a small fraction of the full configurational space, and we see significant improvement in relative accuracy when using an ensemble versus single-conformer approach to ligand ranking. We also find that using approximate metrics based on the single-conformation enthalpy differences between the global minimum energy configuration in the bound as well as unbound states also correlates well with experiment. Using a novel, additive entropy expansion based on conditional mutual information, we also analyze the source of ligand configurational entropy loss upon binding in terms of both uncoupled per degree of freedom losses as well as changes in coupling between inhibitor degrees of freedom. We estimate entropic free energy losses of approximately +24 kcal/mol, 12 kcal/mol of which stems from loss of translational and rotational entropy. Coupling effects contribute only a small fraction to the overall entropy change (1-2 kcal/mol) but suggest differences in how inhibitor dihedral angles couple to each other in the bound versus unbound states. The importance of accounting for flexibility in drug optimization and design is also discussed.
Naser, Asieh Zamani; Mehr, Bahar Behdad
2013-01-01
Background: Cross-sectional tomograms have been used for optimal pre-operative planning of dental implant placement. The aim of the present study was to assess the accuracy of Cone Beam Computed Tomography (CBCT) measurements of specific distances around the mandibular canal by comparing them to those obtained from Multi-Slice Computed Tomography (MSCT) images. Materials and Methods: Ten hemi-mandible specimens were examined using CBCT and MSCT. Before imaging, wires were placed at 7 locations between the anterior margin of the third molar and the anterior margin of the second premolar as reference points. The following distances were measured by two observers on each cross-sectional CBCT and MSCT image: Mandibular Width (W), Length (L), Upper Distance (UD), Lower Distance (LD), Buccal Distance (BD), and Lingual Distance (LID). The obtained data were evaluated using SPSS software, applying paired t-test and intra-class correlation coefficient (ICC). Results: There was a significant difference between the values obtained by MSCT and CBCT measurement for all measured distances (W, L, UD, LD, BD, and LID) (P < 0.001), with a difference of less than 1 mm. The ICC for all distances by both techniques, measured by a single observer at a one-week interval and between two observers, was 99% and 98%, respectively. Comparing the obtained data of both techniques indicates that the difference between the two techniques is 2.17% relative to MSCT. Conclusion: The results of this study showed that there is a significant difference between measurements obtained by CBCT and MSCT. However, the difference is not clinically significant. PMID:23878558
Kraus, Michael; Weiskopf, Julia; Dreyhaupt, Jens; Krischak, Gert; Gebhard, Florian
2014-01-01
Study Design A retrospective analysis of a prospective database. Objective Meta-analyses suggest that computer-assisted systems can increase the accuracy of pedicle screw placement for dorsal spinal fusion procedures. The results of further meta-analyses report that in the thoracic spine, both methods have comparable placement accuracy. These studies are limited due to an abundance of screw classification systems. The aim of this study was to assess the placement accuracy and potentially influencing factors of three-dimensionally navigated versus conventionally inserted pedicle screws. Methods This was a retrospective analysis of a prospective database at a level I trauma center of pedicle screw placement (computer-navigated versus conventionally placed) for dorsal spinal stabilizations. The cases spanned a 5.5-year study period (January 1, 2005, to June 30, 2010). Perforations of the pedicle were differentiated into three grades based on the postoperative computed tomography. Results The overall placement accuracy was 86% in the conventional group versus 79% in the computer-navigated group (grade 0). The computer-navigated procedures were superior in the lumbar spine and the conventional procedures were superior in the thoracic spine, but neither difference reached statistical significance. The level of experience of the performing surgeon and the patient's body mass index did not influence the placement accuracy. The only significant influence was the spinal segment: the higher the spinal level where the fusion was performed, the more likely the screw was displaced. Conclusions The computer-navigated and conventional methods are both safe procedures to place transpedicular screws at the traumatized thoracic and lumbar spine. At the moment, three-dimensionally based navigation does not significantly increase the placement accuracy. PMID:25844281
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the
NASA Astrophysics Data System (ADS)
Sundareshan, Malur K.
2002-07-01
Computational complexity is a major impediment to the real-time implementation of image restoration and super-resolution algorithms. Although powerful restoration algorithms have been developed within the last few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to achieve the desired resolution gains in order to meaningfully perform detection and recognition tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture mega-pixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and super-resolution algorithms is of significant practical interest and will be the primary focus of this paper. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate pre-processing and post-processing steps together with the super-resolution iterations in order to tailor optimized overall processing sequences for imagery data of specific formats. Three distinct methods for tailoring a pre-processing filter and integrating it with the super-resolution processing steps will be outlined in this paper. These methods consist of a Region-of-Interest (ROI) extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared to the super-resolution iterations.
NASA Technical Reports Server (NTRS)
Ferlemann, Paul G.
2000-01-01
A solution methodology has been developed to efficiently model multi-specie, chemically frozen, thermally perfect gas mixtures. The method relies on the ability to generate a single (composite) set of thermodynamic and transport coefficients prior to beginning a CFD solution. While not fundamentally a new concept, many applied CFD users are not aware of this capability nor have a mechanism to easily and confidently generate new coefficients. A database of individual specie property coefficients has been created for 48 species. The seven-coefficient form of the thermodynamic functions is currently used rather than the ten-coefficient form due to the similarity of the calculated properties, low-temperature behavior, and reduced CPU requirements. Sutherland laminar viscosity and thermal conductivity coefficients were computed in a consistent manner from available reference curves. A computer program has been written to provide CFD users with a convenient method to generate composite specie coefficients for any mixture. Mach 7 forebody/inlet calculations demonstrated nearly equivalent results and significant CPU time savings compared to a multi-specie solution approach. Results from high-speed combustor analysis also illustrate the ability to model inert test gas contaminants without additional computational expense.
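The key observation behind a single composite coefficient set can be sketched as follows: because cp/R in the NASA-style seven-coefficient form is linear in the polynomial coefficients, a chemically frozen mixture's coefficients are simply the composition-weighted sum of the species coefficients. The species values below are illustrative placeholders, not entries from the 48-species database, and mole-fraction weighting is assumed for simplicity (a mass-basis variant would weight by mass fractions and species gas constants).

```python
import numpy as np

# Hypothetical seven-coefficient (NASA-style) sets: a1..a5 give cp/R as a
# polynomial in T; a6, a7 are enthalpy/entropy integration constants (unused
# here). These numbers are illustrative placeholders only.
species = {
    "A": np.array([3.5, 1.0e-4, 0.0, 0.0, 0.0, 0.0, 0.0]),
    "B": np.array([4.0, -2.0e-4, 1.0e-8, 0.0, 0.0, 0.0, 0.0]),
}

def composite_coefficients(mole_fractions):
    # Chemically frozen mixture: the composition never changes during the CFD
    # solve, so one weighted coefficient set can be formed up front.
    return sum(x * species[name] for name, x in mole_fractions.items())

def cp_over_R(coeffs, T):
    # cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4
    a1, a2, a3, a4, a5 = coeffs[:5]
    return a1 + a2 * T + a3 * T**2 + a4 * T**3 + a5 * T**4
```

By linearity, evaluating the composite polynomial reproduces the weighted sum of the individual species properties exactly, which is why the multi-specie CFD solve can be replaced by a single-"species" one at a fraction of the CPU cost.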
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
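The constant-time box-sum property that motivates the integral image can be sketched as follows. This is a plain serial version of the standard recursive equations (a running row sum s feeding a running column sum), not the paper's row-parallel hardware decomposition.

```python
import numpy as np

def integral_image(img):
    # Standard recursion: s(y, x) = s(y, x-1) + img(y, x) (row cumulative sum),
    # ii(y, x) = ii(y-1, x) + s(y, x). Serial by construction -- the data
    # dependencies are what the paper's decomposed algorithms work around.
    h, w = img.shape
    ii = np.zeros((h, w), dtype=np.int64)
    s = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            s[y, x] = (s[y, x - 1] if x > 0 else 0) + img[y, x]
            ii[y, x] = (ii[y - 1, x] if y > 0 else 0) + s[y, x]
    return ii

def box_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1+1, c0:c1+1] from at most four lookups, independent of
    # the box size -- the property SURF-style detectors rely on.
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

The hardware algorithms in the paper decompose the recursion so that several rows' worth of these values can be produced per cycle without materially increasing the operation count.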
Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits
Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté
2015-01-01
Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.10056.001 PMID:26705334
NASA Astrophysics Data System (ADS)
Forouzan, Amir R.; Garth, Lee M.
2007-12-01
Line selection (LS), tone selection (TS), and joint tone-line selection (JTLS) partial crosstalk cancellers have been proposed to reduce the online computational complexity of far-end crosstalk (FEXT) cancellers in digital subscriber lines (DSL). However, when the crosstalk profile changes rapidly over time, there is an additional requirement that the partial crosstalk cancellers, particularly the LS and JTLS schemes, should also provide a low preprocessing complexity. This is in contrast to the case for perfect crosstalk cancellers. In this paper, we propose two novel channel matrix inversion methods, the approximate inverse (AI) and reduced inverse (RI) schemes, which reduce the recurrent complexity of the LS and JTLS schemes. Moreover, we propose two new classes of JTLS algorithms, the subsort and Lagrange JTLS algorithms, with significantly lower computational complexity than the recently proposed optimal greedy JTLS scheme. The computational complexity analysis of our algorithms shows that they provide much lower recurrent complexities than the greedy JTLS algorithm, allowing them to work efficiently in very fast time-varying crosstalk environments. Moreover, the analytical and simulation results demonstrate that our techniques are close to the optimal solution from the crosstalk cancellation point of view. The results also reveal that partial crosstalk cancellation is more beneficial in upstream DSL, particularly for short loops.
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes.
A universal and efficient method to compute maps from image-based prediction models.
Sabuncu, Mert R
2014-01-01
Discriminative supervised learning algorithms, such as Support Vector Machines, are becoming increasingly popular in biomedical image computing. One of their main uses is to construct image-based prediction models, e.g., for computer aided diagnosis or "mind reading." A major challenge in these applications is the biological interpretation of the machine learning models, which can be arbitrarily complex functions of the input features (e.g., as induced by kernel-based methods). Recent work has proposed several strategies for deriving maps that highlight regions relevant for accurate prediction. Yet most of these methods rely on strong assumptions about the prediction model (e.g., linearity, sparsity) and/or data (e.g., Gaussianity), or fail to exploit the covariance structure in the data. In this work, we propose a computationally efficient and universal framework for quantifying associations captured by black box machine learning models. Furthermore, our theoretical perspective reveals that examining associations with predictions, in the absence of ground truth labels, can be very informative. We apply the proposed method to machine learning models trained to predict cognitive impairment from structural neuroimaging data. We demonstrate that our approach yields biologically meaningful maps of association. PMID:25320819
NASA Astrophysics Data System (ADS)
Schneider, E.; a Beccara, S.; Mascherpa, F.; Faccioli, P.
2016-07-01
We introduce a theoretical approach to study the quantum-dissipative dynamics of electronic excitations in macromolecules, which enables calculations on large systems over long time intervals. All the parameters of the underlying microscopic Hamiltonian are obtained from ab initio electronic structure calculations, ensuring chemical detail. In the short-time regime, the theory is solvable using a diagrammatic perturbation theory, enabling analytic insight. To compute the time evolution of the density matrix at intermediate times, typically ≲ ps, we develop a Monte Carlo algorithm free from any sign or phase problem, hence computationally efficient. Finally, the dynamics in the long-time and large-distance limit can be studied by combining the microscopic calculations with renormalization group techniques to define a rigorous low-resolution effective theory. We benchmark our Monte Carlo algorithm against the results obtained in perturbation theory and using a semiclassical nonperturbative scheme. Then, we apply it to compute the intrachain charge mobility in a realistic conjugated polymer.
Computing the energy of a water molecule using multideterminants: A simple, efficient algorithm
Clark, Bryan K.; Morales, Miguel A.; McMinis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E.
2011-01-01
Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function which consists of a Jastrow function multiplied by the sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient both in computational speed as well as memory, and easily parallelized. The computational cost scales quadratically with particle number, making this scaling no worse than the single determinant case, and linearly with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule. © 2011 American Institute of Physics. [doi:10.1063/1.3665391]
Efficient computation of net analyte signal vector in inverse multivariate calibration models.
Faber, N K
1998-12-01
The net analyte signal vector has been defined by Lorber as the part of a mixture spectrum that is unique for the analyte of interest; i.e., it is orthogonal to the spectra of the interferences. It plays a key role in the development of multivariate analytical figures of merit. Applications have been reported that imply its utility for spectroscopic wavelength selection as well as calibration method comparison. Currently available methods for computing the net analyte signal vector in inverse multivariate calibration models are based on the evaluation of projection matrices. Due to the size of these matrices (p × p, with p the number of wavelengths) the computation may be highly memory- and time-consuming. This paper shows that the net analyte signal vector can be obtained in a highly efficient manner by a suitable scaling of the regression vector. Computing the scaling factor only requires the evaluation of an inner product (p multiplications and additions). The mathematical form of the newly derived expression is discussed, and the generalization to multiway calibration models is briefly outlined.
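Faber's shortcut can be sketched directly (notation assumed here: b is the inverse-calibration regression vector, x the measured mixture spectrum, both of length p): the net analyte signal vector equals the regression vector scaled by an inner-product ratio, which agrees with the p×p projection-matrix route at a fraction of the memory and time cost.

```python
import numpy as np

def nas_vector(b, x):
    # Efficient route: the net analyte signal vector is a scaled copy of the
    # regression vector b. The only O(p) work is the inner product x.b
    # (b.b can be computed once and cached across samples).
    alpha = (x @ b) / (b @ b)  # scaling factor: one inner product
    return alpha * b

def nas_vector_projection(b, x):
    # Reference route via the p-by-p projection matrix b b^T / (b^T b),
    # whose storage grows quadratically with the number of wavelengths.
    P = np.outer(b, b) / (b @ b)
    return P @ x
```

Both routes return the same vector; for spectra with thousands of wavelengths the scaled-regression-vector form avoids ever materializing the projection matrix.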
Computationally Efficient Numerical Model for the Evolution of Directional Ocean Surface Waves
NASA Astrophysics Data System (ADS)
Malej, M.; Choi, W.; Goullet, A.
2011-12-01
The main focus of this work has been the asymptotic and numerical modeling of weakly nonlinear ocean surface wave fields. In particular, the development of an efficient numerical model for the evolution of nonlinear ocean waves, including extreme waves known as Rogue/Freak waves, is of direct interest. Due to their elusive and destructive nature, the media often portray Rogue waves as unimaginably huge and unpredictable monsters of the sea. To address some of these concerns, derivations of reduced phase-resolving numerical models, based on the small-wave-steepness assumption, are presented and their corresponding numerical simulations via Fourier pseudo-spectral methods are discussed. The simulations are initialized with a well-known JONSWAP wave spectrum and different angular distributions are employed. Both deterministic and Monte Carlo ensemble-average simulations were carried out. Furthermore, this work concerns the development of a new computationally efficient numerical model for the short-term prediction of evolving weakly nonlinear ocean surface waves. The derivations are originally based on the work of West et al. (1987), and since waves in the ocean tend to travel primarily in one direction, the new numerical model is derived with an additional assumption of weak transverse dependence. In turn, comparisons of the ensemble-averaged randomly initialized spectra, as well as deterministic surface-to-surface correlations, are presented. The new model is shown to behave well in various directional wave fields and can potentially be a candidate for computationally efficient prediction and propagation of extreme ocean surface waves - Rogue/Freak waves.
Soeria-Atmadja, D; Lundell, T; Gustafsson, M G; Hammerling, U
2006-01-01
The placing on the market of novel or new-in-the-context proteins, appearing in genetically modified foods, certain bio-pharmaceuticals and some household products, leads to human exposure to proteins that may elicit allergic responses. Accurate methods to detect allergens are therefore necessary to ensure consumer/patient safety. We demonstrate that it is possible to reach a new level of accuracy in computational detection of allergenic proteins by presenting a novel detector, Detection based on Filtered Length-adjusted Allergen Peptides (DFLAP). The DFLAP algorithm extracts variable-length allergen sequence fragments and employs modern machine learning techniques in the form of a support vector machine. In particular, this new detector shows hitherto unmatched specificity when challenged with the Swiss-Prot repository, without appreciable loss of sensitivity. DFLAP is also the first reported detector that successfully discriminates between allergens and non-allergens occurring in protein families known to hold both categories. Allergenicity assessment for specific protein sequences of interest using DFLAP is available by contacting ulfh@slv.se.
Low-cost, high-performance and efficiency computational photometer design
NASA Astrophysics Data System (ADS)
Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly
2014-05-01
Researchers at the University of Alaska Anchorage and University of Colorado Boulder have built a low-cost, high-performance and high-efficiency drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field-monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the Arctic, including volcanic plumes, ice formation, and Arctic marine life.
Measuring and tuning energy efficiency on large scale high performance computing platforms.
Laros, James H., III
2011-08-01
Recognition of the importance of power in the field of High Performance Computing, whether as an obstacle, expense or design consideration, has never been greater or more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large-scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement, such as inserting a power meter between the power source and the platform, or fine-grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large-scale capability-class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy-savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next-generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy-efficient performance.
ERIC Educational Resources Information Center
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
The development of a computationally efficient high-resolution viscous-plastic sea ice model
NASA Astrophysics Data System (ADS)
Lemieux, Jean Francois
This thesis presents the development of a high-resolution viscous-plastic (VP) sea ice model. Because of the fine mesh and the size of the domain, an efficient and parallelizable numerical scheme is desirable. As a first step, we implemented the nonlinear solver used in existing VP models (referred to as the standard solver). It is based on a linear solver and an outer loop (OL) iteration. For the linear solver, we introduced the preconditioned Generalized Minimum RESidual (pGMRES) method. The preconditioner is a line successive overrelaxation (SOR) solver. When compared to the SOR and line SOR (LSOR) methods, two solvers commonly used in the sea ice modeling community, pGMRES increases the computational efficiency by factors of 16 and 3, respectively. For pGMRES, the symmetry of the system matrix is not a prerequisite. The Coriolis term and the off-diagonal part of the water drag can then be treated implicitly. Theoretical and simulation results show that this implicit treatment eliminates a numerical instability present with an explicit treatment. During this research, we also observed that the approximate nonlinear solution converges slowly with the number of OL iterations. Furthermore, simulation results reveal the existence of multiple solutions and occasional convergence failures of the nonlinear solver. For a time step comparable to the forcing time scale, a few OL iterations lead to errors in the velocity field that are of the same order of magnitude as the mean drift. The slow convergence is an issue at all spatial resolutions but is more severe as the grid is refined. It is attributed in part to the standard VP formulation, which leads to a momentum equation that is not continuously differentiable. To obtain a smooth formulation, we replaced the standard viscous coefficient expression with capping by a hyperbolic tangent function. This provides a unique solution and reduces the computational time and failure rate. To further improve the
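The pGMRES-with-SOR-preconditioning idea can be sketched with an off-the-shelf solver. The sketch below stands in a 1-D Laplacian for the linearized momentum system and applies a few SOR sweeps as a matrix-free preconditioner; the relaxation factor, sweep count, and test matrix are illustrative assumptions, not the thesis's line-SOR configuration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in system: 1-D Laplacian in place of the linearized VP momentum matrix.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Preconditioner pieces for SOR: (D/omega + L) x_new = r - U x + ((1-omega)/omega) D x
omega = 1.5                                  # relaxation factor (untuned guess)
D = A.diagonal()
U = sp.triu(A, k=1).tocsr()
Mlow = (sp.tril(A, k=-1) + sp.diags(D / omega)).tocsr()

def sor_sweeps(r, n_sweeps=3):
    """Apply a fixed number of SOR sweeps to A x = r, starting from x = 0.

    With a fixed sweep count and zero initial guess this is a linear map,
    so it is a valid (left) preconditioner for GMRES.
    """
    r = np.asarray(r).ravel()
    x = np.zeros_like(r)
    for _ in range(n_sweeps):
        rhs = r - U @ x + ((1.0 - omega) / omega) * (D * x)
        x = spla.spsolve_triangular(Mlow, rhs, lower=True)
    return x

M = spla.LinearOperator((n, n), matvec=sor_sweeps)
x, info = spla.gmres(A, b, M=M, restart=80, maxiter=200)
```

As in the thesis, nothing here requires a symmetric system matrix, which is what lets asymmetric terms (Coriolis, off-diagonal water drag) be folded in implicitly.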
Howell, Bryan; Lad, Shivanand P; Grill, Warren M
2014-01-01
Spinal cord stimulation (SCS) is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically incr