Science.gov

Sample records for accurate computational approach

  1. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

    The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of meteor velocities. Indeed, despite the development of camera networks dedicated to the observation of meteors, there is still a significant discrepancy between the computed meteoroid orbits and the theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of the orbits of meteoroids therefore largely depends on the computation of the pre-atmospheric velocities. It is thus imperative to determine how to increase the precision of the velocity measurements. In this work, we perform an analysis of different methods currently used to compute the velocities and trajectories of meteors. They are based on the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to objectively compare the performances of these techniques, we have simulated realistic meteors ('fakeors') reproducing the different measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the most suitable one for solving the MPF, and the influence of the geometry of the trajectory on the result is also presented. We present here the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of different velocity computations seems to show that, while the MPF is by far the best method to solve for the trajectory and the velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy data.

  2. Computer-assisted adjuncts for aneurysmal morphologic assessment: toward more precise and accurate approaches

    NASA Astrophysics Data System (ADS)

    Rajabzadeh-Oghaz, Hamidreza; Varble, Nicole; Davies, Jason M.; Mowla, Ashkan; Shakir, Hakeem J.; Sonig, Ashish; Shallwani, Hussain; Snyder, Kenneth V.; Levy, Elad I.; Siddiqui, Adnan H.; Meng, Hui

    2017-03-01

    Neurosurgeons currently base most of their treatment decisions for intracranial aneurysms (IAs) on morphological measurements made manually from 2D angiographic images. These measurements tend to be inaccurate because 2D measurements cannot capture the complex geometry of IAs and because manual measurements vary depending on the clinician's experience and opinion. Incorrect morphological measurements may lead to inappropriate treatment strategies. In order to improve the accuracy and consistency of morphological analysis of IAs, we have developed an image-based computational tool, AView. In this study, we quantified the accuracy of the computer-assisted adjuncts of AView for aneurysmal morphologic assessment by performing measurements on spheres of known size and on anatomical IA models. AView has an average morphological error of 0.56% in size and 2.1% in volume measurement. We also investigated the clinical utility of this tool on a retrospective clinical dataset and compared size and neck diameter measurements between 2D manual and 3D computer-assisted approaches. The average error was 22% and 30% in the manual measurement of size and aneurysm neck diameter, respectively. Inaccuracies due to manual measurements could therefore lead to wrong treatment decisions in 44% and inappropriate treatment strategies in 33% of the IAs. Furthermore, computer-assisted analysis of IAs improves the consistency of measurement among clinicians by 62% in size and 82% in neck diameter measurement. We conclude that AView dramatically improves accuracy for morphological analysis. These results illustrate the necessity of a computer-assisted approach for the morphological analysis of IAs.

  3. An efficient approach to converting three-dimensional image data into highly accurate computational models.

    PubMed

    Young, P G; Beresford-West, T B H; Coward, S R L; Notarberardino, B; Walker, B; Abdul-Aziz, A

    2008-09-13

    Image-based meshing is opening up exciting new possibilities for the application of computational continuum mechanics methods (finite-element and computational fluid dynamics) to a wide range of biomechanical and biomedical problems that were previously intractable owing to the difficulty in obtaining suitably realistic models. Innovative surface and volume mesh generation techniques have recently been developed, which convert three-dimensional imaging data, as obtained from magnetic resonance imaging, computed tomography, micro-CT and ultrasound, for example, directly into meshes suitable for use in physics-based simulations. These techniques have several key advantages: the ability to robustly generate meshes for topologies of arbitrary complexity (such as bioscaffolds or composite micro-architectures) and with any number of constituent materials (multi-part modelling); meshes in which the geometric accuracy of mesh domains depends only on the image accuracy (image-based accuracy); and, for certain problems, the ability to model material inhomogeneity by assigning properties based on image signal strength. Commonly used mesh generation techniques will be compared with the proposed enhanced volumetric marching cubes (EVoMaCs) approach and some issues specific to simulations based on three-dimensional image data will be discussed. A number of case studies will be presented to illustrate how these techniques can be used effectively across a wide range of problems from characterization of micro-scaffolds through to head impact modelling.

  4. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    SciTech Connect

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.

  5. Combining computer algorithms with experimental approaches permits the rapid and accurate identification of T cell epitopes from defined antigens.

    PubMed

    Schirle, M; Weinschenk, T; Stevanović, S

    2001-11-01

    The identification of T cell epitopes from immunologically relevant antigens remains a critical step in the development of vaccines and methods for monitoring of T cell responses. This review presents an overview of strategies that employ computer algorithms for the selection of candidate peptides from defined proteins and subsequent verification of their in vivo relevance by experimental approaches. Several computer algorithms are currently being used for epitope prediction of various major histocompatibility complex (MHC) class I and II molecules, based either on the analysis of natural MHC ligands or on the binding properties of synthetic peptides. Moreover, the analysis of proteasomal digests of peptides and whole proteins has led to the development of algorithms for the prediction of proteasomal cleavages. In order to verify the generation of the predicted peptides during antigen processing in vivo as well as their immunogenic potential, several experimental approaches have been pursued in the recent past. Mass spectrometry-based bioanalytical approaches have been used specifically to detect predicted peptides among isolated natural ligands. Other strategies employ various methods for the stimulation of primary T cell responses against the predicted peptides and subsequent testing of the recognition pattern towards target cells that express the antigen.

  6. Hybrid Steered Molecular Dynamics Approach to Computing Absolute Binding Free Energy of Ligand-Protein Complexes: A Brute Force Approach That Is Fast and Accurate.

    PubMed

    Chen, Liao Y

    2015-04-14

    Computing the free energy of binding a ligand to a protein is a difficult task of essential importance for which purpose various theoretical/computational approaches have been pursued. In this paper, we develop a hybrid steered molecular dynamics (hSMD) method capable of resolving one ligand–protein complex within a few wall-clock days with high enough accuracy to compare with the experimental data. This hSMD approach is based on the relationship between the binding affinity and the potential of mean force (PMF) in the established literature. It involves simultaneously steering n (n = 1, 2, 3, ...) centers of mass of n selected segments of the ligand using n springs of infinite stiffness. Steering the ligand from a single initial state chosen from the bound state ensemble to the corresponding dissociated state, disallowing any fluctuations of the pulling centers along the way, one can determine a 3n-dimensional PMF curve connecting the two states by sampling a small number of forward and reverse pulling paths. This PMF constitutes a large but not the sole contribution to the binding free energy. Two other contributors are (1) the partial partition function containing the equilibrium fluctuations of the ligand at the binding site and the deviation of the initial state from the PMF minimum and (2) the partial partition function containing rotation and fluctuations of the ligand around one of the pulling centers that is fixed at a position far from the protein. We implement this hSMD approach for two ligand–protein complexes whose structures were determined and whose binding affinities were measured experimentally: caprylic acid binding to bovine β-lactoglobulin and glutathione binding to Schistosoma japonicum glutathione S-transferase tyrosine 7 to phenylalanine mutant. Our computed binding affinities agree with the experimental data within a factor of 1.5. The total time of computation for these two all-atom model systems (consisting of 96K and 114K atoms

  7. Accurate method for computing correlated color temperature.

    PubMed

    Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier

    2016-06-27

    For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy. For example, different unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimation of CCT using Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives highly accurate predictions, with errors below 0.0012 K for light sources with CCTs ranging from 500 K to 10^6 K.
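
    The sketch below illustrates only the general shape of the Newton iteration described above, applied to a one-dimensional minimization. The quadratic objective, the 6500 K target and the 5000 K starting guess are hypothetical stand-ins, not the CIE chromaticity-distance objective or the Robertson seed used in the paper.

    ```python
    # Minimal sketch of a Newton iteration for minimizing an objective f(T):
    # drive f'(T) to zero using both first and second derivatives.
    # The quadratic objective below is a hypothetical placeholder only.

    def newton_minimize(f_prime, f_double_prime, t0, tol=1e-6, max_iter=50):
        """Newton's method for 1-D minimization: t_{k+1} = t_k - f'(t_k)/f''(t_k)."""
        t = t0
        for _ in range(max_iter):
            step = f_prime(t) / f_double_prime(t)
            t -= step
            if abs(step) < tol:
                break
        return t

    # Hypothetical objective: squared distance to a "target temperature" of 6500 K.
    f_prime = lambda t: 2.0 * (t - 6500.0)
    f_double_prime = lambda t: 2.0

    # A Robertson-style estimate would seed t0; here we simply start at 5000 K.
    print(newton_minimize(f_prime, f_double_prime, t0=5000.0))  # -> ~6500.0
    ```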

  8. Two-component density functional theory within the projector augmented-wave approach: Accurate and self-consistent computations of positron lifetimes and momentum distributions

    NASA Astrophysics Data System (ADS)

    Wiktor, Julia; Jomard, Gérald; Torrent, Marc

    2015-09-01

    Many techniques have been developed in the past in order to compute positron lifetimes in materials from first principles. However, there is still a lack of a fast and accurate self-consistent scheme that could accurately handle the forces acting on the ions induced by the presence of the positron. We will show in this paper that we have reached this goal by developing the two-component density functional theory within the projector augmented-wave (PAW) method in the open-source code abinit. This tool offers the accuracy of all-electron methods with the computational efficiency of plane-wave ones. We can thus deal with supercells that contain a few hundred to thousands of atoms to study point defects as well as more extended defect clusters. Moreover, using the PAW basis set allows us to use techniques able to, for instance, treat strongly correlated systems or spin-orbit coupling, which are necessary to study heavy elements, such as the actinides or their compounds.

  9. How Accurate Can a Local Coupled Cluster Approach Be in Computing the Activation Energies of Late-Transition-Metal-Catalyzed Reactions with Au, Pt, and Ir?

    PubMed

    Kang, Runhua; Lai, Wenzhen; Yao, Jiannian; Shaik, Sason; Chen, Hui

    2012-09-11

    To improve the accuracy of local coupled cluster (LCC) methods in computing activation energies, we propose herein a new computational scheme. Its applications to various types of late-transition-metal-catalyzed reactions involving Au, Pt, and Ir indicate that the new corrective approach for LCC methods can reduce the mean unsigned deviation and maximum deviation from the CCSD(T)/CBS reference to about 0.3 and 0.9 kcal/mol. Using this method, we also calibrated the performance of popular density functionals with respect to the same test set of reactions. It is concluded that the best functional is the general-purpose double hybrid functional B2GP-PLYP. Other well-performing functionals include the "kinetic" functionals M06-2X and BMK, which have a large percentage of HF exchange, and general-purpose functionals like PBE0 and wB97X. Comparatively, general-purpose functionals like PBE0 and TPSSh perform much better than the tested "kinetic" functionals for Pt-/Ir-catalyzed reactions, while the opposite is true for Au-catalyzed reactions. In contrast, wB97X performs more uniformly in these two classes of reactions. These findings hint that even within the scope of late transition metals, different types of reactions may require different types of optimal DFT methods. Empirical dispersion correction of DFT was found to have a small or no effect on the studied reaction barriers.

  10. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  11. Accurate atom-mapping computation for biochemical reactions.

    PubMed

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to account for the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
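
    As a much simpler illustration of the core idea (a minimum-cost bijection between reactant and product atoms), the sketch below uses the Hungarian algorithm on an invented cost matrix. The paper's actual MWED formulation is a mixed-integer linear program with bond-propensity weights, which this toy example does not reproduce.

    ```python
    # Toy illustration: atom mapping as a minimum-cost bijection between
    # reactant atoms (rows) and product atoms (columns), solved here with the
    # Hungarian algorithm. The cost matrix is invented for illustration; the
    # paper's MWED method instead solves a mixed-integer linear program.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i, j] = penalty for mapping reactant atom i to product atom j
    # (e.g., large if the elements differ, smaller if local bonding is similar).
    cost = np.array([
        [0.0, 5.0, 9.0],
        [5.0, 0.5, 7.0],
        [8.0, 6.0, 0.2],
    ])

    rows, cols = linear_sum_assignment(cost)   # optimal bijection
    mapping = dict(zip(rows.tolist(), cols.tolist()))
    print(mapping, "total cost:", cost[rows, cols].sum())
    ```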

  12. Computing Accurate Grammatical Feedback in a Virtual Writing Conference for German-Speaking Elementary-School Children: An Approach Based on Natural Language Generation

    ERIC Educational Resources Information Center

    Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine

    2009-01-01

    We built a natural language processing (NLP) system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary…

  14. Photoacoustic computed tomography without accurate ultrasonic transducer responses

    NASA Astrophysics Data System (ADS)

    Sheng, Qiwei; Wang, Kun; Xia, Jun; Zhu, Liren; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT, and this SOS map is then employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems to one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) the reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.

  15. Preparing Rapid, Accurate Construction Cost Estimates with a Personal Computer.

    ERIC Educational Resources Information Center

    Gerstel, Sanford M.

    1986-01-01

    An inexpensive and rapid method for preparing accurate cost estimates of construction projects in a university setting, using a personal computer, purchased software, and one estimator, is described. The case against defined estimates, the rapid estimating system, and adjusting standard unit costs are discussed. (MLW)

  16. Efficient and accurate computation of the incomplete Airy functions

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1993-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.

  17. Accurate and fast computation of transmission cross coefficients

    NASA Astrophysics Data System (ADS)

    Apostol, Štefan; Hurley, Paul; Ionescu, Radu-Cristian

    2015-03-01

    Precise and fast computation of aerial images is essential. Typical lithographic simulators employ a Köhler illumination system, for which aerial imagery is obtained using a large number of Transmission Cross Coefficients (TCCs). These are generally computed by a slow numerical evaluation of a double integral. We review the general framework in which the 2D imagery is solved and then propose a fast and accurate method to obtain the TCCs. We obtain analytical solutions and thus avoid the complexity-accuracy trade-off encountered with numerical integration. Compared to other analytical integration methods, the one presented is faster, more general and more tractable.

  18. An Accurate and Efficient Method of Computing Differential Seismograms

    NASA Astrophysics Data System (ADS)

    Hu, S.; Zhu, L.

    2013-12-01

    Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thomson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P wave velocity, shear wave velocity and density). We then derived the partial derivatives of the surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained by using the frequency-wavenumber double integration method. The implementation is computationally efficient and the total computing time is proportional to the time of computing the seismogram itself, i.e., independent of the number of layers in the model. We verified the correctness of the results by comparing with differential seismograms computed using the finite-difference method. Our results are more accurate because of the analytical nature of the derived partial derivatives.

  19. Computational Time-Accurate Body Movement: Methodology, Validation, and Application

    DTIC Science & Technology

    1995-10-01

    A wing model with a leading-edge sweep angle of 45 deg and a NACA 64A010 symmetrical airfoil section was used, and the pylon cross section is symmetrical. The record excerpt also references the information flow for the time-accurate store trajectory prediction process and pitch rates for a NACA-0012 airfoil; comparisons of the computational results to data for a NACA-0012 airfoil following a predefined pitching motion serve as validation.

  20. Accurate Langevin approaches to simulate Markovian channel dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Yandong; Rüdiger, Sten; Shuai, Jianwei

    2015-12-01

    The stochasticity of ion-channel dynamics is significant for physiological processes on neuronal cell membranes. Microscopic simulations of ion-channel gating with Markov chains can be considered an accurate standard. However, such Markovian simulations are computationally demanding for membrane areas of physiologically relevant sizes, which makes the noise-approximating or Langevin equation methods advantageous in many cases. In this review, we discuss the Langevin-like approaches, including the channel-based and simplified subunit-based stochastic differential equations proposed by Fox and Lu, and the effective Langevin approaches in which colored noise is added to deterministic differential equations. In the framework of Fox and Lu's classical models, several variants of numerical algorithms, which have been recently developed to improve accuracy as well as efficiency, are also discussed. Through the comparison of different simulation algorithms of ion-channel noise with the standard Markovian simulation, we aim to reveal the extent to which the existing Langevin-like methods approximate results obtained using Markovian methods. Open questions for future studies are also discussed.
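
    A minimal sketch of the channel-based Langevin idea discussed above, for a single population of two-state (closed/open) channels in the spirit of the Fox-Lu formulation; the rate constants, channel count and time step are assumed values chosen only for illustration.

    ```python
    # Sketch of a channel-based Langevin approximation for a population of
    # two-state (closed <-> open) channels: deterministic gating drift plus a
    # noise term whose variance follows from the transition rates and the
    # channel count N. Rates, N, and the time step are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta = 0.5, 0.2      # opening / closing rates (1/ms), assumed values
    N, dt, steps = 1000, 0.01, 5000
    x = alpha / (alpha + beta)  # open fraction, start at the steady state

    trace = np.empty(steps)
    for k in range(steps):
        drift = alpha * (1.0 - x) - beta * x
        noise_sd = np.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / N * dt)
        x = np.clip(x + drift * dt + noise_sd * rng.standard_normal(), 0.0, 1.0)
        trace[k] = x

    print("mean open fraction:", trace.mean())  # ~ alpha/(alpha+beta) = 0.714
    ```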

  1. Computation of edge diffraction for more accurate room acoustics auralization.

    PubMed

    Torres, R R; Svensson, U P; Kleiner, M

    2001-02-01

    Inaccuracies in computation and auralization of room impulse responses are related in part to inadequate modeling of edge diffraction, i.e., the scattering from edges of finite surfaces. A validated time-domain model (based on analytical extensions to the Biot-Tolstoy-Medwin technique) is thus employed here to compute early room impulse responses with edge diffraction. Furthermore, the computations are extended to include combinations of specular and diffracted paths in the example problem of a stage-house. These combinations constitute a significant component of the total nonspecular scattering and also help to identify edge diffraction in measured impulse responses. The computed impulse responses are then convolved with anechoic signals with a variety of time-frequency characteristics. Initial listening tests with varying orders and combinations of diffraction suggest that (1) depending on the input signal, the diffraction contributions can be clearly audible even in nonshadow zones for this conservative open geometry and (2) second-order diffraction to nonshadowed receivers can often be neglected. Finally, a practical implementation for binaural simulation is proposed, based on the singular behavior of edge diffraction along the least-time path for a given source-edge-receiver orientation. This study thus provides a first major step toward computing edge diffraction for more accurate room acoustics auralization.

  2. A Fast and accurate algorithm for computing tensor CBR anisotropy

    SciTech Connect

    Turner, Michael S.; Wang, Yun

    1995-12-01

    Inflation gives rise to a nearly scale-invariant spectrum of tensor perturbations (gravitational waves); their contribution to the Cosmic Background Radiation (CBR) anisotropy depends upon the present cosmological parameters as well as inflationary parameters. The analysis of a sampling-variance-limited CBR map offers the most promising means of detecting tensor perturbations, but will require evaluation of the predicted multipole spectrum for a very large number of cosmological parameter sets. We present accurate polynomial formulae for computing the predicted variance of the multipole moments in terms of the cosmological parameters Ω_Λ, Ω_0h², Ω_Bh², and N_ν.

  3. Neutron supermirrors: an accurate theory for layer thickness computation

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2001-11-01

    We present a new theory for the computation of Super-Mirror stacks, using accurate formulas derived from the classical optics field. Approximations are introduced into the computation, but at a later stage than existing theories, providing a more rigorous treatment of the problem. The final result is a continuous thickness stack, whose properties can be determined at the outset of the design. We find that the well-known fourth power dependence of number of layers versus maximum angle is (of course) asymptotically correct. We find a formula giving directly the relation between desired reflectance, maximum angle, and number of layers (for a given pair of materials). Note: The author of this article, a classical opticist, has limited knowledge of the Neutron world, and begs forgiveness for any shortcomings, erroneous assumptions and/or misinterpretation of previous authors' work on the subject.

  4. Accurate Computation of Survival Statistics in Genome-Wide Studies

    PubMed Central

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J.; Upfal, Eli

    2015-01-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival times following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data, where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations. PMID:25950620
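
    For reference, the sketch below computes the standard asymptotic two-group log-rank statistic, i.e., the approximation whose small-sample limitations motivate the exact test described above; the survival times, event indicators and group labels are invented toy data.

    ```python
    # Sketch of the standard asymptotic log-rank test for two groups, the
    # approximation whose shortcomings for small or unbalanced groups motivate
    # the exact test (ExaLT) described above. Data below are invented.
    import numpy as np

    # (time, event_observed, group) for a handful of hypothetical patients
    times  = np.array([5, 6, 6, 8, 10, 12, 12, 15])
    events = np.array([1, 1, 0, 1,  1,  0,  1,  1])
    group  = np.array([0, 1, 0, 1,  0,  1,  0,  1])

    O1 = E1 = V = 0.0
    for t in np.unique(times[events == 1]):
        at_risk = times >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((times == t) & (events == 1)).sum()
        d1 = ((times == t) & (events == 1) & (group == 1)).sum()
        O1 += d1                              # observed events in group 1
        E1 += d * n1 / n                      # expected under the null
        if n > 1:                             # hypergeometric variance term
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)

    chi2 = (O1 - E1) ** 2 / V   # compared against a chi-square(1) distribution
    print("log-rank chi-square:", chi2)
    ```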

  5. A heuristic method to compute more accurate TM-scores

    NASA Astrophysics Data System (ADS)

    Li, Shuai Guo; Lim, Yun Kai; Ng, Yen Kaow

    2017-04-01

    Many scoring functions have been proposed to evaluate the similarity between protein structure models. Among these, a popular measure is the template modeling score (TM-score), introduced by Zhang and Skolnick. At this moment, the TM-score is calculated through a heuristic algorithm with no accuracy guarantee. In this paper, we propose an algorithm which computes more accurate TM-scores through the use of the very fast Kabsch algorithm, which is commonly used to compute the Root Mean Square Deviation (RMSD). Our algorithm first obtains an approximation for the superposition of the protein models that optimizes the TM-score (for example, through OptGDT). Then, it iteratively refines this superposition through the rotation axes discovered using the Kabsch algorithm. The algorithm is implemented in C++ into a tool that runs in time comparable to Zhang and Skolnick's TM-score software, but consistently produces TM-scores that are more accurate. The tool can be downloaded from https://github.com/kalngyk/tm2.
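
    The sketch below shows only the Kabsch (SVD) step that the described refinement relies on: given two paired coordinate sets, it returns the rotation that minimizes the RMSD between them. The coordinates are invented for illustration, and the TM-score-specific refinement built around this step is not reproduced.

    ```python
    # Sketch of the Kabsch step used inside the described refinement: given two
    # paired coordinate sets, find the rotation minimizing RMSD via an SVD.
    # Coordinates are invented for illustration.
    import numpy as np

    def kabsch_rotation(P, Q):
        """Optimal rotation aligning centered P onto centered Q (both N x 3)."""
        P = P - P.mean(axis=0)
        Q = Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        D = np.diag([1.0, 1.0, d])
        return Vt.T @ D @ U.T

    P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    theta = np.pi / 6
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    Q = P @ Rz.T                      # Q is P rotated by 30 degrees about z
    R = kabsch_rotation(P, Q)
    print(np.allclose(R @ (P - P.mean(0)).T, (Q - Q.mean(0)).T))  # True
    ```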

  6. Direct computation of parameters for accurate polarizable force fields

    SciTech Connect

    Verstraelen, Toon Vandenbrande, Steven; Ayers, Paul W.

    2014-11-21

    We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.

  7. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  8. Fast and accurate algorithm for computing tensor CBR anisotropy

    SciTech Connect

    Turner, M.S.; Wang, Y.

    1996-05-01

    Inflation gives rise to a nearly scale-invariant spectrum of tensor perturbations (gravitational waves); their contribution to the cosmic background radiation (CBR) anisotropy depends upon the present cosmological parameters as well as inflationary parameters. The analysis of a sampling-variance-limited CBR map offers the most promising means of detecting tensor perturbations, but will require evaluation of the predicted multipole spectrum for a very large number of cosmological parameter sets. We present accurate polynomial formulas for computing the predicted variance of the multipole moments in terms of the cosmological parameters Ω_Λ, Ω_0h², Ω_Bh², N_ν, and the power-law index n_T, which are accurate to about 1% for l ≤ 50 and to better than 3% for 50 < l ≤ 100 (as compared to the numerical results of a Boltzmann code). © 1996 The American Physical Society.

  9. Accurate computation of Zernike moments in polar coordinates.

    PubMed

    Xin, Yongqing; Pawlak, Miroslaw; Liao, Simon

    2007-02-01

    An algorithm for high-precision numerical computation of Zernike moments is presented. The algorithm, based on the introduced polar pixel tiling scheme, does not exhibit the geometric error and numerical integration error which are inherent in conventional methods based on Cartesian coordinates. This yields a dramatic improvement of the accuracy of the Zernike moments in terms of their reconstruction and invariance properties. The introduced image tiling requires an interpolation algorithm, which turns out to be of second-order importance compared to the discretization error. Various comparisons are made between the accuracy of the proposed method and that of commonly used techniques. The results reveal the great advantage of our approach.
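
    The sketch below accumulates a single Zernike moment on a simple polar grid, which illustrates why polar sampling sidesteps the geometric error that a Cartesian grid incurs at the unit-circle boundary; the test image and the uniform polar sampling are simplifications and do not reproduce the paper's polar pixel tiling scheme.

    ```python
    # Sketch of Zernike-moment accumulation on a polar grid. The test image and
    # the uniform polar sampling here are simplifications of the paper's polar
    # pixel tiling scheme and are meant only to illustrate the idea.
    import numpy as np
    from math import factorial

    def radial_poly(n, m, r):
        """Zernike radial polynomial R_n^m(r), valid when n - |m| is even."""
        m = abs(m)
        R = np.zeros_like(r)
        for k in range((n - m) // 2 + 1):
            c = ((-1) ** k * factorial(n - k)
                 / (factorial(k) * factorial((n + m) // 2 - k)
                                 * factorial((n - m) // 2 - k)))
            R += c * r ** (n - 2 * k)
        return R

    def zernike_moment(f, n, m, n_r=200, n_t=400):
        """A_nm ~ (n+1)/pi * integral of f(r,t) R_n^m(r) exp(-i m t) r dr dt."""
        r = (np.arange(n_r) + 0.5) / n_r
        t = 2 * np.pi * (np.arange(n_t) + 0.5) / n_t
        rr, tt = np.meshgrid(r, t, indexing="ij")
        integrand = f(rr, tt) * radial_poly(n, m, rr) * np.exp(-1j * m * tt) * rr
        return (n + 1) / np.pi * integrand.sum() * (1.0 / n_r) * (2 * np.pi / n_t)

    # Hypothetical test image defined directly in polar coordinates.
    f = lambda r, t: r ** 2 * np.cos(2 * t)
    print(zernike_moment(f, n=2, m=2))  # ~0.5 (the analytic value for this f)
    ```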

  10. Towards a scalable and accurate quantum approach for describing vibrations of molecule–metal interfaces

    PubMed Central

    Madebene, Bruno; Ulusoy, Inga; Mancera, Luis; Scribano, Yohann; Chulkov, Sergey

    2011-01-01

    We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate and can be easily deployed on computational grids in order to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters. PMID:22003450

  11. Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues.

    PubMed

    Zhang, Hao; Zhao, Yan; Cao, Liangcai; Jin, Guofan

    2015-02-23

    We propose an algorithm based on fully computed holographic stereogram for calculating full-parallax computer-generated holograms (CGHs) with accurate depth cues. The proposed method integrates the point source algorithm and the holographic stereogram based algorithm to reconstruct the three-dimensional (3D) scenes. Precise accommodation cues and occlusion effects can be created, and computer graphics rendering techniques can be employed in the CGH generation to enhance the image fidelity. Optical experiments have been performed using a spatial light modulator (SLM) and a fabricated high-resolution hologram; the results show that our proposed algorithm can perform quality reconstructions of 3D scenes with arbitrary depth information.

  12. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  14. Computational approaches to computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The various techniques by which the goal of computational aeroacoustics (the calculation and noise prediction of a fluctuating fluid flow) may be achieved are reviewed. The governing equations for compressible fluid flow are presented. The direct numerical simulation approach is shown to be computationally intensive for high Reynolds number viscous flows. Therefore, other approaches, such as the acoustic analogy, vortex models and various perturbation techniques that aim to break the analysis into a viscous part and an acoustic part are presented. The choice of the approach is shown to be problem dependent.

  15. Accurate Anharmonic IR Spectra from Integrated Cc/dft Approach

    NASA Astrophysics Data System (ADS)

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Carnimeo, Ivan; Puzzarini, Cristina

    2014-06-01

    The recent implementation of the computation of infrared (IR) intensities beyond the double harmonic approximation [1] paved the route to routine calculations of infrared spectra for a wide set of molecular systems. Contrary to common beliefs, second-order perturbation theory is able to deliver results of high accuracy provided that anharmonic resonances are properly managed [1,2]. It has already been shown for several small closed- and open-shell molecular systems that the differences between coupled cluster (CC) and DFT anharmonic wavenumbers are mainly due to the harmonic terms, paving the route to the introduction of effective yet accurate hybrid CC/DFT schemes [2]. In this work we show that hybrid CC/DFT models can be applied also to the IR intensities, leading to the simulation of highly accurate, fully anharmonic IR spectra for medium-size molecules, including ones of atmospheric interest, showing in all cases good agreement with experiment even in the spectral ranges where non-fundamental transitions are predominant [3]. [1] J. Bloino and V. Barone, J. Chem. Phys. 136, 124108 (2012) [2] V. Barone, M. Biczysko, J. Bloino, Phys. Chem. Chem. Phys., 16, 1759-1787 (2014) [3] I. Carnimeo, C. Puzzarini, N. Tasinato, P. Stoppa, A. P. Charmet, M. Biczysko, C. Cappelli and V. Barone, J. Chem. Phys., 139, 074310 (2013)

  16. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  18. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes.
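
    As a minimal illustration of the free-energy machinery such methods build on, the sketch below evaluates the one-sided Zwanzig (exponential-averaging) estimator on synthetic energy differences; it is not the QM non-Boltzmann Bennett or nonequilibrium work method of the chapter, and the Gaussian toy distribution, temperature and sample size are assumptions made only for the example.

    ```python
    # Minimal illustration of a basic free-energy estimator: the Zwanzig
    # free-energy-perturbation formula dF = -kT ln <exp(-dU/kT)>_0.
    # This is NOT the QM-NBB or nonequilibrium-work method of the chapter,
    # just the simplest one-sided estimator on synthetic energy differences.
    import numpy as np

    kT = 0.596  # kcal/mol at ~300 K
    rng = np.random.default_rng(1)

    # Synthetic samples of dU = U_target - U_reference over a reference-ensemble
    # trajectory (Gaussian toy model: mean 2 kcal/mol, standard deviation 1).
    dU = rng.normal(loc=2.0, scale=1.0, size=100_000)

    dF = -kT * np.log(np.mean(np.exp(-dU / kT)))
    print(f"estimated dF = {dF:.3f} kcal/mol")
    # For Gaussian dU the exact answer is mean - var/(2 kT) = 2 - 1/(2*0.596) ~ 1.16.
    ```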

  19. Towards accurate quantum simulations of large systems with small computers

    NASA Astrophysics Data System (ADS)

    Yang, Yonggang

    2017-01-01

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems.

  20. Towards accurate quantum simulations of large systems with small computers.

    PubMed

    Yang, Yonggang

    2017-01-24

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems.

  1. Towards accurate quantum simulations of large systems with small computers

    PubMed Central

    Yang, Yonggang

    2017-01-01

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems. PMID:28117366

  2. Accurate molecular structure and spectroscopic properties for nucleobases: A combined computational - microwave investigation of 2-thiouracil as a case study

    PubMed Central

    Puzzarini, Cristina; Biczysko, Malgorzata; Barone, Vincenzo; Peña, Isabel; Cabezas, Carlos; Alonso, José L.

    2015-01-01

    The computational composite scheme purposely set up for accurately describing the electronic structure and spectroscopic properties of small biomolecules has been applied to the first study of the rotational spectrum of 2-thiouracil. The experimental investigation was made possible thanks to the combination of the laser ablation technique with Fourier transform microwave spectrometers. The joint experimental-computational study allowed us to determine an accurate molecular structure and spectroscopic properties for the title molecule but, more importantly, it demonstrates a reliable approach for the accurate investigation of isolated small biomolecules. PMID:24002739

  3. High-performance computing and networking as tools for accurate emission computed tomography reconstruction.

    PubMed

    Passeri, A; Formiconi, A R; De Cristofaro, M T; Pupi, A; Meldolesi, U

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64x64) slices could be reconstructed from a set of 90 (64x64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols, without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods.
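
    A generic sketch of the conjugate-gradient inner loop mentioned above (ten iterations drive the reconstruction in the reported timing); the small symmetric positive-definite test system is invented and stands in for, but does not model, the actual SPET projection system.

    ```python
    # Generic sketch of ten conjugate-gradient iterations on a symmetric
    # positive-definite system; the test matrix is invented and is not the
    # SPET system model used in the reconstruction described above.
    import numpy as np

    def conjugate_gradient(A, b, n_iter=10):
        x = np.zeros_like(b)
        r = b - A @ x          # initial residual
        p = r.copy()           # initial search direction
        rs = r @ r
        for _ in range(n_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # Small SPD test problem (invented): A = M^T M + I, b arbitrary.
    rng = np.random.default_rng(2)
    M = rng.standard_normal((50, 50))
    A = M.T @ M + np.eye(50)
    b = rng.standard_normal(50)
    x = conjugate_gradient(A, b, n_iter=10)
    print("residual norm after 10 iterations:", np.linalg.norm(A @ x - b))
    ```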

  4. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for optimization. In order to circumvent these issues, here we propose a model for practical circular-symmetric micro-hotplates that takes advantage of modified Bessel functions, a computationally efficient matrix approach for handling the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy to accurately take into account radiation losses over the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
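
    To give a flavor of the modified-Bessel-function machinery underlying such models, the sketch below solves the classical circular-fin temperature profile for a single annular membrane region, T(r) = T_amb + A*I0(r/L) + B*K0(r/L), by imposing prescribed temperatures at the inner and outer radii through a small linear system. This is only an illustrative one-region sketch with made-up parameter names; the model described in the paper couples several regions and linearizes Joule heating and radiation losses.

```python
import numpy as np
from scipy.special import iv, kv

def annular_temperature(r, r_in, r_out, t_in, t_out, t_amb, length_scale):
    """Temperature profile T(r) of one annular membrane region.

    Solves T(r) = T_amb + A*I0(r/L) + B*K0(r/L), the homogeneous solution
    of the circular-fin equation, with prescribed temperatures at the inner
    and outer radii. Illustrative one-region sketch only; the published
    model chains several regions and adds Joule heating and radiation terms.
    """
    L = length_scale
    # 2x2 linear system enforcing T(r_in) = t_in and T(r_out) = t_out
    M = np.array([[iv(0, r_in / L),  kv(0, r_in / L)],
                  [iv(0, r_out / L), kv(0, r_out / L)]])
    rhs = np.array([t_in - t_amb, t_out - t_amb])
    A, B = np.linalg.solve(M, rhs)
    return t_amb + A * iv(0, r / L) + B * kv(0, r / L)
```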

  5. Computer-control system speeds accurate loading of unit trains

    SciTech Connect

    Chironis, N.P.

    1987-04-01

    A microprocessor weighing system that relies on load cells is the latest word on batch-loading of unit trains. The weigh bin discharge gate is closed by the operator at the end of the car loading. This signals the computer to take a weight-after-loading reading. Thus the computer will print only the net weight of coal actually delivered to the car. The values of net and gross weight for the car just loaded are then printed on a hard-copy terminal. The computer then automatically batches the proper amount of coal for the next car, and the above loading procedure is repeated for each car in the unit train.

  6. Accurate Computation of Divided Differences of the Exponential Function,

    DTIC Science & Technology

    1983-06-01

    compute A is by its Taylor series ... Because of the special structure of Z, there is an extremely elegant algorithm for the first row ... of steps 2 and 3. In general we cannot avoid using a 2-dimensioned array to form Fe unless F has some special structure. 2.4.2. Back filling the ... structure of Z will be destroyed by the reduction and therefore some modifications of the algorithm TS are needed. The work for the whole computation
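
    The fragment above alludes to a structured matrix Z whose first row carries the answer; this is consistent with Opitz's classical identity, in which the divided differences of a function at nodes x0, ..., xn appear in the first row of f(Z) for the bidiagonal matrix Z having the nodes on its diagonal and ones on the superdiagonal. The sketch below is only a naive illustration of that identity using a dense matrix exponential (an assumption for illustration, not the report's method); the report itself develops a specially structured, more accurate algorithm.

```python
import numpy as np
from scipy.linalg import expm

def exp_divided_differences(nodes):
    """Divided differences exp[x0], exp[x0,x1], ..., exp[x0,...,xn].

    Builds the bidiagonal matrix Z (nodes on the diagonal, ones on the
    superdiagonal) and reads the divided differences of the exponential
    off the first row of exp(Z). Naive reference implementation only;
    it can lose accuracy for widely spread or nearly coincident nodes.
    """
    x = np.asarray(nodes, dtype=float)
    n = len(x)
    Z = np.diag(x) + np.diag(np.ones(n - 1), k=1)
    return expm(Z)[0, :]
```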

  7. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well-known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with the classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and superior capability to capture the thermal stress waves induced by boundary heating.

  8. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of: (1) deterministic structural analyses with fine (convergent) finite element meshes, (2) probabilistic structural analyses with coarse finite element meshes, (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes, and (4) a probabilistic mapping. The results show that the scatter of the probabilistic structural responses and structural reliability can be accurately predicted using a coarse finite element model with proper mapping methods. Therefore, large structures can be analyzed probabilistically using finite element methods.

  9. A Bayesian Approach for Fast and Accurate Gene Tree Reconstruction

    PubMed Central

    Rasmussen, Matthew D.; Kellis, Manolis

    2011-01-01

    Recent sequencing and computing advances have enabled phylogenetic analyses to expand to both entire genomes and large clades, thus requiring more efficient and accurate methods designed specifically for the phylogenomic context. Here, we present SPIMAP, an efficient Bayesian method for reconstructing gene trees in the presence of a known species tree. We observe many improvements in reconstruction accuracy, achieved by modeling multiple aspects of evolution, including gene duplication and loss (DL) rates, speciation times, and correlated substitution rate variation across both species and loci. We have implemented and applied this method on two clades of fully sequenced species, 12 Drosophila and 16 fungal genomes as well as simulated phylogenies and find dramatic improvements in reconstruction accuracy as compared with the most popular existing methods, including those that take the species tree into account. We find that reconstruction inaccuracies of traditional phylogenetic methods overestimate the number of DL events by as much as 2–3-fold, whereas our method achieves significantly higher accuracy. We feel that the results and methods presented here will have many important implications for future investigations of gene evolution. PMID:20660489

  10. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when the information is delayed. With accurate information travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps to improve efficiency in terms of capacity, oscillations, and the gap between the system and its equilibrium.
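
    The boundedly rational choice rule described above is simple to state in code. The following sketch assumes travelers see (possibly delayed) reported travel times t1 and t2 and a threshold br; names are illustrative only.

```python
import random

def choose_route(t1, t2, br):
    """Boundedly rational two-route choice.

    t1, t2 : reported (possibly delayed) travel times on routes 1 and 2
    br     : bounded-rationality threshold; differences smaller than br
             leave the traveler indifferent.
    Returns 1 or 2.
    """
    if abs(t1 - t2) < br:
        return random.choice((1, 2))    # indifferent: equal probability
    return 1 if t1 < t2 else 2          # otherwise take the better-looking route
```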

  11. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers and they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  12. Casing shoe depths accurately and quickly selected with computer assistance

    SciTech Connect

    Mattiello, D.; Piantanida, M.; Schenato, A.; Tomada, L.

    1993-10-04

    A computer-aided support system for casing design and shoe depth selection improves the reliability of solutions, reduces total project time, and helps reduce costs. This system is part of ADIS (Advanced Drilling Information System), an integrated environment developed by three companies of the ENI group (Agip SpA, Enidata, and Saipem). The ADIS project focuses on the on site planning and control of drilling operations. The first version of the computer-aided support for casing design (Cascade) was experimentally introduced by Agip SpA in July 1991. After several modifications, the system was introduced to field operations in December 1991 and is now used in Agip's district locations and headquarters. The results from the validation process and practical uses indicated it has several pluses: the reliability of the casing shoe depths proposed by the system helps reduce the project errors and improve the economic feasibility of the proposed solutions; the system has helped spread the use of the best engineering practices concerning shoe depth selection and casing design; the Cascade system finds numerous solutions rapidly, thereby reducing project time compared to previous methods of casing design; the system finds or verifies solutions efficiently, allowing the engineer to analyze several alternatives simultaneously rather than to concentrate only on the analysis of a single solution; the system is flexible by means of a user-friendly integration with the other software packages in the ADIS project. The paper describes the design methodology, validation cases, shoe depths, casing design, hardware and software, and results.

  13. A Machine Learning Approach for Accurate Annotation of Noncoding RNAs

    PubMed Central

    Liu, Chunmei; Wang, Zhi

    2016-01-01

    Searching genomes to locate noncoding RNA genes with known secondary structure is an important problem in bioinformatics. In general, the secondary structure of a searched noncoding RNA is defined with a structure model constructed from the structural alignment of a set of sequences from its family. Computing the optimal alignment between a sequence and a structure model is the core part of an algorithm that can search genomes for noncoding RNAs. In practice, a single structure model may not be sufficient to capture all crucial features important for a noncoding RNA family. In this paper, we develop a novel machine learning approach that can efficiently search genomes for noncoding RNAs with high accuracy. During the search procedure, a sequence segment in the searched genome sequence is processed and a feature vector is extracted to represent it. Based on the feature vector, a classifier is used to determine whether the sequence segment is the searched ncRNA or not. Our testing results show that this approach is able to efficiently capture crucial features of a noncoding RNA family. Compared with existing search tools, it significantly improves the accuracy of genome annotation. PMID:26357266

  14. Macromolecular Entropy Can Be Accurately Computed from Force.

    PubMed

    Hensen, Ulf; Gräter, Frauke; Henchman, Richard H

    2014-11-11

    A method is presented to evaluate a molecule's entropy from the atomic forces calculated in a molecular dynamics simulation. Specifically, diagonalization of the mass-weighted force covariance matrix produces eigenvalues which in the harmonic approximation can be related to vibrational frequencies. The harmonic oscillator entropies of each vibrational mode may be summed to give the total entropy. The results for a series of hydrocarbons, dialanine and a β hairpin are found to agree much better with values derived from thermodynamic integration than results calculated using quasiharmonic analysis. Forces are found to follow a harmonic distribution more closely than coordinate displacements and better capture the underlying potential energy surface. The method's accuracy, simplicity, and computational similarity to quasiharmonic analysis, requiring as input force trajectories instead of coordinate trajectories, makes it readily applicable to a wide range of problems.
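
    A minimal NumPy sketch of the diagonalization step described above. It assumes the harmonic-approximation relation in which each eigenvalue of the mass-weighted force covariance matrix equals k_B*T*omega**2, and it omits details of the published method (unit handling, treatment of rigid-body modes); names and tolerances are illustrative.

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def force_covariance_entropy(forces, masses, temperature):
    """Estimate vibrational entropy (J/K) from a Cartesian force trajectory.

    forces : (n_frames, 3*n_atoms) array of forces in N (atom-major x, y, z)
    masses : (n_atoms,) array of atomic masses in kg
    Assumes each eigenvalue of the mass-weighted force covariance matrix
    equals k_B*T*omega**2 in the harmonic approximation.
    """
    m = np.repeat(masses, 3)                     # one mass per Cartesian DOF
    df = forces - forces.mean(axis=0)            # force fluctuations
    cov = (df.T @ df) / len(forces)              # force covariance matrix
    mw_cov = cov / np.sqrt(np.outer(m, m))       # M^-1/2 C M^-1/2
    lam = np.linalg.eigvalsh(mw_cov)
    lam = lam[lam > 1e-12]                       # drop near-zero (rigid-body) modes
    omega = np.sqrt(lam / (K_B * temperature))   # angular frequencies
    x = HBAR * omega / (K_B * temperature)
    s_modes = K_B * (x / np.expm1(x) - np.log1p(-np.exp(-x)))
    return s_modes.sum()                         # sum of harmonic-oscillator entropies
```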

  15. Fractional labelmaps for computing accurate dose volume histograms

    NASA Astrophysics Data System (ADS)

    Sunderland, Kyle; Pinter, Csaba; Lasso, Andras; Fichtinger, Gabor

    2017-03-01

    PURPOSE: In radiation therapy treatment planning systems, structures are represented as parallel 2D contours. For treatment planning algorithms, structures must be converted into labelmap (i.e. 3D image denoting structure inside/outside) representations. This is often done by triangulating a surface from the contours, which is then converted into a binary labelmap. This surface to binary labelmap conversion can cause large errors in small structures. Binary labelmaps are often represented using one byte per voxel, meaning a large amount of memory is unused. Our goal is to develop a fractional labelmap representation containing non-binary values, allowing more information to be stored in the same amount of memory. METHODS: We implemented an algorithm in 3D Slicer, which converts surfaces to fractional labelmaps by creating 216 binary labelmaps, changing the labelmap origin on each iteration. The binary labelmap values are summed to create the fractional labelmap. In addition, an algorithm is implemented in the SlicerRT toolkit that calculates dose volume histograms (DVH) using fractional labelmaps. RESULTS: We found that with manually segmented RANDO head and neck structures, fractional labelmaps represented structure volume up to 19.07% (average 6.81%) more accurately than binary labelmaps, while occupying the same amount of memory. When compared to baseline DVH from treatment planning software, DVH from fractional labelmaps had agreement acceptance percent (1% ΔD, 1% ΔV) up to 57.46% higher (average 4.33%) than DVH from binary labelmaps. CONCLUSION: Fractional labelmaps promise to be an effective method for structure representation, allowing considerably more information to be stored in the same amount of memory.
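
    The origin-shifting idea is easy to sketch. In the snippet below, rasterize_binary is a hypothetical stand-in for the surface-to-binary-labelmap converter; shifting the grid origin on a 6 x 6 x 6 sub-voxel lattice gives the 216 binary labelmaps whose sum yields per-voxel occupancy.

```python
import itertools
import numpy as np

def fractional_labelmap(rasterize_binary, shape, spacing, origin, steps=6):
    """Build a fractional labelmap by averaging shifted binary labelmaps.

    rasterize_binary(shape, spacing, origin) -> array of 0/1 values
    (hypothetical stand-in for a surface-to-binary-labelmap converter).
    The origin is shifted over a steps**3 sub-voxel grid (6**3 = 216 here)
    and the binary results are summed to give per-voxel occupancy.
    """
    acc = np.zeros(shape, dtype=np.float32)
    offsets = np.arange(steps) / steps           # fractions of one voxel
    for dx, dy, dz in itertools.product(offsets, repeat=3):
        shifted = np.asarray(origin) + np.array([dx, dy, dz]) * np.asarray(spacing)
        acc += rasterize_binary(shape, spacing, shifted)
    return acc / steps**3                        # values in [0, 1]
```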

  16. An Approach for the Accurate Measurement of Social Morality Levels

    PubMed Central

    Liu, Haiyan; Chen, Xia; Zhang, Bo

    2013-01-01

    In the social sciences, computer-based modeling has become an increasingly important tool receiving widespread attention. However, the derivation of the quantitative relationships linking individual moral behavior and social morality levels, so as to provide a useful basis for social policy-making, remains a challenge in the scholarly literature today. A quantitative measurement of morality from the perspective of complexity science constitutes an innovative attempt. Based on the NetLogo platform, this article examines the effect of various factors on social morality levels, using agents modeling moral behavior, immoral behavior, and a range of environmental social resources. Threshold values for the various parameters are obtained through sensitivity analysis; and practical solutions are proposed for reversing declines in social morality levels. The results show that: (1) Population size may accelerate or impede the speed with which immoral behavior comes to determine the overall level of social morality, but it has no effect on the level of social morality itself; (2) The impact of rewards and punishment on social morality levels follows the “5∶1 rewards-to-punishment rule,” which is to say that 5 units of rewards have the same effect as 1 unit of punishment; (3) The abundance of public resources is inversely related to the level of social morality; (4) When the cost of population mobility reaches 10% of the total energy level, immoral behavior begins to be suppressed (i.e. the 1/10 moral cost rule). The research approach and methods presented in this paper successfully address the difficulties involved in measuring social morality levels, and promise extensive application potentials. PMID:24312189

  17. An approach for the accurate measurement of social morality levels.

    PubMed

    Liu, Haiyan; Chen, Xia; Zhang, Bo

    2013-01-01

    In the social sciences, computer-based modeling has become an increasingly important tool receiving widespread attention. However, the derivation of the quantitative relationships linking individual moral behavior and social morality levels, so as to provide a useful basis for social policy-making, remains a challenge in the scholarly literature today. A quantitative measurement of morality from the perspective of complexity science constitutes an innovative attempt. Based on the NetLogo platform, this article examines the effect of various factors on social morality levels, using agents modeling moral behavior, immoral behavior, and a range of environmental social resources. Threshold values for the various parameters are obtained through sensitivity analysis; and practical solutions are proposed for reversing declines in social morality levels. The results show that: (1) Population size may accelerate or impede the speed with which immoral behavior comes to determine the overall level of social morality, but it has no effect on the level of social morality itself; (2) The impact of rewards and punishment on social morality levels follows the "5∶1 rewards-to-punishment rule," which is to say that 5 units of rewards have the same effect as 1 unit of punishment; (3) The abundance of public resources is inversely related to the level of social morality; (4) When the cost of population mobility reaches 10% of the total energy level, immoral behavior begins to be suppressed (i.e. the 1/10 moral cost rule). The research approach and methods presented in this paper successfully address the difficulties involved in measuring social morality levels, and promise extensive application potentials.

  18. Computational Approaches to Interface Design

    NASA Technical Reports Server (NTRS)

    Corker; Lebacqz, J. Victor (Technical Monitor)

    1997-01-01

    Tools which make use of computational processes - mathematical, algorithmic and/or knowledge-based - to perform portions of the design, evaluation and/or construction of interfaces have become increasingly available and powerful. Nevertheless, there is little agreement as to the appropriate role for a computational tool to play in the interface design process. Current tools fall into broad classes depending on which portions, and how much, of the design process they automate. The purpose of this panel is to review and generalize about computational approaches developed to date, discuss the tasks for which they are suited, and suggest methods to enhance their utility and acceptance. Panel participants represent a wide diversity of application domains and methodologies. This should provide for lively discussion about implementation approaches, accuracy of design decisions, acceptability of representational tradeoffs and the optimal role for a computational tool to play in the interface design process.

  19. CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval

    PubMed Central

    Karim, Rezaul; Aziz, Mohd. Momin Al; Shatabda, Swakkhar; Rahman, M. Sohel; Mia, Md. Abul Kashem; Zaman, Farhana; Rakin, Salman

    2015-01-01

    The number of entries in a structural database of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such a large database have turned out to be key to comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for the retrieval of proteins having tertiary structures similar to a query protein from a large database. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of oriented gradients and the pyramid histogram of oriented gradients, and from the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use from this website: http://research.buet.ac.bd:8080/Comograd/. PMID:26293226

  20. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.

  1. Development of highly accurate approximate scheme for computing the charge transfer integral

    NASA Astrophysics Data System (ADS)

    Pershin, Anton; Szalay, Péter G.

    2015-08-01

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
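
    The Taylor-expansion idea can be sketched as follows: the expensive "exact" transfer integral is evaluated only at a reference geometry plus a few displaced points for finite-difference derivatives, and J(q) along the fluctuation coordinate is then read off the expansion. Here exact_transfer_integral is a hypothetical placeholder for the EOM-CC-based evaluation; step size and expansion order are illustrative.

```python
def taylor_transfer_integral(exact_transfer_integral, q0=0.0, h=0.01, order=2):
    """Return a cheap function J(q) from a Taylor expansion around q0.

    exact_transfer_integral(q) is a hypothetical, expensive evaluation of
    the charge transfer integral at displacement q along one coordinate.
    Derivatives are estimated by central finite differences with step h.
    """
    j0 = exact_transfer_integral(q0)
    jp = exact_transfer_integral(q0 + h)
    jm = exact_transfer_integral(q0 - h)
    d1 = (jp - jm) / (2 * h)                 # first derivative
    d2 = (jp - 2 * j0 + jm) / h ** 2         # second derivative

    def j_taylor(q):
        dq = q - q0
        return j0 + d1 * dq + (0.5 * d2 * dq ** 2 if order >= 2 else 0.0)

    return j_taylor
```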

  2. Development of highly accurate approximate scheme for computing the charge transfer integral.

    PubMed

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  3. Development of highly accurate approximate scheme for computing the charge transfer integral

    SciTech Connect

    Pershin, Anton; Szalay, Péter G.

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  4. PERTURBATION APPROACH FOR QUANTUM COMPUTATION

    SciTech Connect

    G. P. BERMAN; D. I. KAMENEV; V. I. TSIFRINOVICH

    2001-04-01

    We discuss how to simulate errors in the implementation of simple quantum logic operations in a nuclear spin quantum computer with many qubits, using radio-frequency pulses. We verify our perturbation approach using the exact solutions for a relatively small number of qubits (L = 10).

  5. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    SciTech Connect

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC

    2009-06-19

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  6. RICO: A NEW APPROACH FOR FAST AND ACCURATE REPRESENTATION OF THE COSMOLOGICAL RECOMBINATION HISTORY

    SciTech Connect

    Fendt, W. A.; Wandelt, B. D.; Chluba, J.; Rubino-Martin, J. A. E-mail: jchluba@mpa-garching.mpg.de E-mail: bwandelt@illinois.edu

    2009-04-15

    We present RICO, a code designed to compute the ionization fraction of the universe during the epoch of hydrogen and helium recombination with an unprecedented combination of speed and accuracy. This is accomplished by training the machine learning code PICO on the calculations of a multilevel cosmological recombination code which self-consistently includes several physical processes that were neglected previously. After training, RICO is used to fit the free electron fraction as a function of the cosmological parameters. While, for example, at low redshifts (z ≲ 900), much of the net change in the ionization fraction can be captured by lowering the hydrogen fudge factor in RECFAST by about 3%, RICO provides a means of effectively using the accurate ionization history of the full recombination code in the standard cosmological parameter estimation framework without the need to add new or refined fudge factors or functions to a simple recombination model. Within the new approach presented here, it is easy to update RICO whenever a more accurate full recombination code becomes available. Once trained, RICO computes the cosmological ionization history with negligible fitting error in ~10 ms, a speedup of at least 10^6 over the full recombination code that was used here. Also RICO is able to reproduce the ionization history of the full code to a level well below 0.1%, thereby ensuring that the theoretical power spectra of cosmic microwave background (CMB) fluctuations can be computed to sufficient accuracy and speed for analysis from upcoming CMB experiments like Planck. Furthermore, it will enable cross-checking different recombination codes across cosmological parameter space, a comparison that will be very important in order to assure the accurate interpretation of future CMB data.
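
    The training-and-fitting idea behind such emulators can be illustrated with a toy regression: sample the slow, full recombination code over the cosmological parameters of interest, fit a cheap function per redshift bin, and evaluate the fit thereafter. The sketch below uses a plain polynomial least-squares fit and invented names; it is not the PICO/RICO algorithm itself.

```python
import numpy as np

def _design_matrix(theta, degree):
    """Per-parameter polynomial features: [1, theta, theta**2, ...]."""
    theta = np.atleast_2d(theta)
    cols = [np.ones((theta.shape[0], 1))]
    for d in range(1, degree + 1):
        cols.append(theta ** d)
    return np.hstack(cols)

def train_emulator(theta_train, xe_train, degree=2):
    """Fit a fast polynomial emulator of the ionization history.

    theta_train : (n_samples, n_params) cosmological parameter sets
    xe_train    : (n_samples, n_z) free-electron fractions from a slow,
                  full recombination code (the training data)
    Returns predict(theta) -> (n_z,) emulated ionization history.
    Toy stand-in for the regression-style fit that RICO trains.
    """
    X = _design_matrix(theta_train, degree)
    coeffs, *_ = np.linalg.lstsq(X, xe_train, rcond=None)  # one fit per z-bin

    def predict(theta):
        return (_design_matrix(theta, degree) @ coeffs)[0]

    return predict
```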

  7. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography.

    PubMed

    Haley, William E; Ibrahim, El-Sayed H; Qu, Mingliang; Cernigliaro, Joseph G; Goldfarb, David S; McCollough, Cynthia H

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT.

  8. USI: a fast and accurate approach for conceptual document annotation.

    PubMed

    Fiorini, Nicolas; Ranwez, Sylvie; Montmain, Jacky; Ranwez, Vincent

    2015-03-14

    Semantic approaches such as concept-based information retrieval rely on a corpus in which resources are indexed by concepts belonging to a domain ontology. In order to keep such applications up-to-date, new entities need to be frequently annotated to enrich the corpus. However, this task is time-consuming and requires a high level of expertise in both the domain and the related ontology. Different strategies have thus been proposed to ease this indexing process, each one taking advantage of the features of the document. In this paper we present USI (User-oriented Semantic Indexer), a fast and intuitive method for indexing tasks. We introduce a solution to suggest a conceptual annotation for new entities based on related already indexed documents. Our results, compared to those obtained by previous authors using the MeSH thesaurus and a dataset of biomedical papers, show that the method surpasses text-specific methods in terms of both quality and speed. Evaluations are done via usual metrics and semantic similarity. By only relying on neighbor documents, the User-oriented Semantic Indexer does not need a representative learning set. Yet, it provides better results than the other approaches by giving a consistent annotation scored with a global criterion - instead of one score per concept.

  9. Accurate phenotyping: Reconciling approaches through Bayesian model averaging

    PubMed Central

    Chen, Carla Chia-Ming; Mengersen, Kerrie Lee

    2017-01-01

    Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder—an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method. PMID:28423058

  10. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  11. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  12. Computing Highly Accurate Spectroscopic Line Lists that Cover a Large Temperature Range for Characterization of Exoplanet Atmospheres

    NASA Astrophysics Data System (ADS)

    Lee, T. J.; Huang, X.; Schwenke, D. W.

    2013-12-01

    Over the last decade, it has become apparent that the most effective approach for determining highly accurate rotational and rovibrational line lists for molecules of interest in planetary atmospheres is through a combination of high-resolution laboratory experiments coupled with state-of-the-art ab initio quantum chemistry methods. The approach involves computing the most accurate potential energy surface (PES) possible using state-of-the-art electronic structure methods, followed by computing rotational and rovibrational energy levels using an exact variational method to solve the nuclear Schrödinger equation. Then, reliable experimental data from high-resolution experiments are used to refine the ab initio PES in order to improve the accuracy of the computed energy levels and transition energies. From the refinement step, we have been able to achieve an accuracy of approximately 0.015 cm^-1 for rovibrational transition energies, and even better for purely rotational transitions. This combined 'experiment / theory' approach allows for determination of essentially a complete line list, with hundreds of millions of transitions, and having the transition energies and intensities be highly accurate. Our group has successfully applied this approach to determine highly accurate line lists for NH3 and CO2 (and isotopologues), and very recently for SO2 and isotopologues. Here I will report our latest results for SO2 including all isotopologues. Comparisons to the available data in HITRAN2012 and other available databases will be shown, though we note that our SO2 line lists are significantly more complete than any other database. Since it is important to span a large temperature range in order to model the spectral signature of exoplanets, we will also demonstrate how the spectra change on going from low temperatures (100 K) to higher temperatures (500 K).

  13. Efficient and accurate P-value computation for Position Weight Matrices.

    PubMed

    Touzet, Hélène; Varré, Jean-Stéphane

    2007-12-11

    Position Weight Matrices (PWMs) are probabilistic representations of signals in sequences. They are widely used to model approximate patterns in DNA or in protein sequences. Using PWMs requires, as a prerequisite, knowing the statistical significance of a word according to its score. This is done by defining the P-value of a score, which is the probability that the background model can achieve a score larger than or equal to the observed value. This gives rise to the following problem: Given a P-value, find the corresponding score threshold. Existing methods rely on dynamic programming or probability generating functions. For many examples of PWMs, they fail to give accurate results in a reasonable amount of time. The contribution of this paper is twofold. First, we study the theoretical complexity of the problem, and we prove that it is NP-hard. Then, we describe a novel algorithm that solves the P-value problem efficiently. The main idea is to use a series of discretized score distributions that improves the final result step by step until some convergence criterion is met. Moreover, the algorithm is capable of calculating the exact P-value without any error, even for matrices with non-integer coefficient values. The same approach is also used to devise an accurate algorithm for the reverse problem: finding the P-value for a given score. Both methods are implemented in software called TFM-PVALUE, which is freely available. We have tested TFM-PVALUE on a large set of PWMs representing transcription factor binding sites. Experimental results show that it achieves better performance in terms of computational time and precision than existing tools.
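
    The basic score-distribution computation behind such methods can be sketched as a dynamic programming pass over the matrix columns: propagate the probability of each (rounded) partial score under the background model, then walk the upper tail to find the threshold. This rounded-score illustration is not the TFM-PVALUE algorithm, which refines the discretization iteratively until convergence; all names below are illustrative.

```python
from collections import defaultdict

def score_threshold(pwm, background, pvalue, digits=3):
    """Smallest score whose tail probability under the background is <= pvalue.

    pwm        : list of columns, each a dict like {'A': s, 'C': s, 'G': s, 'T': s}
    background : letter probabilities, e.g. {'A': 0.25, 'C': 0.25, ...}
    Scores are rounded to `digits` decimals, so the result is approximate;
    finer rounding improves accuracy at higher cost.
    """
    dist = defaultdict(float)
    dist[0.0] = 1.0
    for column in pwm:                           # DP over matrix positions
        nxt = defaultdict(float)
        for score, prob in dist.items():
            for letter, p in background.items():
                nxt[round(score + column[letter], digits)] += prob * p
        dist = nxt
    tail, threshold = 0.0, float('inf')
    for score in sorted(dist, reverse=True):     # walk the upper tail downwards
        if tail + dist[score] > pvalue:
            break
        tail += dist[score]
        threshold = score
    return threshold

# Example with a hypothetical 2-column matrix and uniform background:
# pwm = [{'A': 1.0, 'C': -1.0, 'G': -1.0, 'T': 0.5},
#        {'A': 0.2, 'C': 0.2, 'G': -0.5, 'T': 1.0}]
# print(score_threshold(pwm, {b: 0.25 for b in 'ACGT'}, pvalue=0.1))
```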

  14. Petascale self-consistent electromagnetic computations using scalable and accurate algorithms for complex structures

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.

    2006-09-01

    As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. Consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.

  15. A hybrid approach for rapid, accurate, and direct kilovoltage radiation dose calculations in CT voxel space

    SciTech Connect

    Kouznetsov, Alexei; Tambasco, Mauro

    2011-03-15

    Purpose: To develop and validate a fast and accurate method that uses computed tomography (CT) voxel data to estimate absorbed radiation dose at a point of interest (POI) or series of POIs from a kilovoltage (kV) imaging procedure. Methods: The authors developed an approach that computes absorbed radiation dose at a POI by numerically evaluating the linear Boltzmann transport equation (LBTE) using a combination of deterministic and Monte Carlo (MC) techniques. This hybrid approach accounts for material heterogeneity with a level of accuracy comparable to the general MC algorithms. Also, the dose at a POI is computed within seconds using the Intel Core i7 CPU 920 2.67 GHz quad core architecture, and the calculations are performed using CT voxel data, making it flexible and feasible for clinical applications. To validate the method, the authors constructed and acquired a CT scan of a heterogeneous block phantom consisting of a succession of slab densities: Tissue (1.29 cm), bone (2.42 cm), lung (4.84 cm), bone (1.37 cm), and tissue (4.84 cm). Using the hybrid transport method, the authors computed the absorbed doses at a set of points along the central axis and x direction of the phantom for an isotropic 125 kVp photon spectral point source located along the central axis 92.7 cm above the phantom surface. The accuracy of the results was compared to those computed with MCNP, which was cross-validated with EGSnrc, and served as the benchmark for validation. Results: The error in the depth dose ranged from -1.45% to +1.39% with a mean and standard deviation of -0.12% and 0.66%, respectively. The error in the x profile ranged from -1.3% to +0.9%, with a mean and standard deviation of -0.3% and 0.5%, respectively. The number of photons required to achieve these results was 1 x 10^6. Conclusions: The voxel-based hybrid method evaluates the LBTE rapidly and accurately to estimate the absorbed x-ray dose at any POI or series of POIs from a kV imaging procedure.

  16. A hybrid approach for rapid, accurate, and direct kilovoltage radiation dose calculations in CT voxel space.

    PubMed

    Kouznetsov, Alexei; Tambasco, Mauro

    2011-03-01

    To develop and validate a fast and accurate method that uses computed tomography (CT) voxel data to estimate absorbed radiation dose at a point of interest (POI) or series of POIs from a kilovoltage (kV) imaging procedure. The authors developed an approach that computes absorbed radiation dose at a POI by numerically evaluating the linear Boltzmann transport equation (LBTE) using a combination of deterministic and Monte Carlo (MC) techniques. This hybrid approach accounts for material heterogeneity with a level of accuracy comparable to the general MC algorithms. Also, the dose at a POI is computed within seconds using the Intel Core i7 CPU 920 2.67 GHz quad core architecture, and the calculations are performed using CT voxel data, making it flexible and feasible for clinical applications. To validate the method, the authors constructed and acquired a CT scan of a heterogeneous block phantom consisting of a succession of slab densities: Tissue (1.29 cm), bone (2.42 cm), lung (4.84 cm), bone (1.37 cm), and tissue (4.84 cm). Using the hybrid transport method, the authors computed the absorbed doses at a set of points along the central axis and x direction of the phantom for an isotropic 125 kVp photon spectral point source located along the central axis 92.7 cm above the phantom surface. The accuracy of the results was compared to those computed with MCNP, which was cross-validated with EGSnrc, and served as the benchmark for validation. The error in the depth dose ranged from -1.45% to +1.39% with a mean and standard deviation of -0.12% and 0.66%, respectively. The error in the x profile ranged from -1.3% to +0.9%, with a mean and standard deviation of -0.3% and 0.5%, respectively. The number of photons required to achieve these results was 1 x 10^6. The voxel-based hybrid method evaluates the LBTE rapidly and accurately to estimate the absorbed x-ray dose at any POI or series of POIs from a kV imaging procedure.

  17. Computer Series, 101: Accurate Equations of State in Computational Chemistry Projects.

    ERIC Educational Resources Information Center

    Albee, David; Jones, Edward

    1989-01-01

    Discusses the use of computers in chemistry courses at the United States Military Academy. Provides two examples of computer projects: (1) equations of state, and (2) solving for molar volume. Presents BASIC and PASCAL listings for the second project. Lists 10 applications for physical chemistry. (MVL)

  18. Aeroacoustic Flow Phenomena Accurately Captured by New Computational Fluid Dynamics Method

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    2002-01-01

    One of the challenges in the computational fluid dynamics area is the accurate calculation of aeroacoustic phenomena, especially in the presence of shock waves. One such phenomenon is "transonic resonance," where an unsteady shock wave at the throat of a convergent-divergent nozzle results in the emission of acoustic tones. The space-time Conservation-Element and Solution-Element (CE/SE) method developed at the NASA Glenn Research Center can faithfully capture the shock waves, their unsteady motion, and the generated acoustic tones. The CE/SE method is a revolutionary new approach to the numerical modeling of physical phenomena where features with steep gradients (e.g., shock waves, phase transition, etc.) must coexist with those having weaker variations. The CE/SE method does not require the complex interpolation procedures (that allow for the possibility of a shock between grid cells) used by many other methods to transfer information between grid cells. These interpolation procedures can add too much numerical dissipation to the solution process. Thus, while shocks are resolved, weaker waves, such as acoustic waves, are washed out.

  19. Computational approaches for drug discovery.

    PubMed

    Hung, Che-Lun; Chen, Chi-Chun

    2014-09-01

    Cellular proteins are the mediators of multiple organism functions, being involved in physiological mechanisms and disease. By discovering lead compounds that affect the function of target proteins, the target diseases or physiological mechanisms can be modulated. Based on knowledge of the ligand-receptor interaction, the chemical structures of leads can be modified to improve efficacy, selectivity and reduce side effects. One rational drug design technology, which enables drug discovery based on knowledge of target structures, functional properties and mechanisms, is computer-aided drug design (CADD). The application of CADD can be cost-effective using experiments to compare predicted and actual drug activity, the results from which can be used iteratively to improve compound properties. The two major CADD-based approaches are structure-based drug design, where protein structures are required, and ligand-based drug design, where ligand and ligand activities can be used to design compounds interacting with the protein structure. Approaches in structure-based drug design include docking, de novo design, fragment-based drug discovery and structure-based pharmacophore modeling. Approaches in ligand-based drug design include quantitative structure-affinity relationship and pharmacophore modeling based on ligand properties. Based on whether the structure of the receptor and its interaction with the ligand are known, different design strategies can be used. After lead compounds are generated, the rule of five can be used to assess whether these have drug-like properties. Several quality validation methods, such as cost function analysis, Fisher's cross-validation analysis and goodness of hit test, can be used to estimate the metrics of different drug design strategies. To further improve CADD performance, multi-computers and graphics processing units may be applied to reduce costs. © 2014 Wiley Periodicals, Inc.

  20. Computational Approaches to Vestibular Research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three-dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict capabilities of our computational approaches and

  2. Fuzzy multiple linear regression: A computational approach

    NASA Technical Reports Server (NTRS)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  3. Computer-based personality judgments are more accurate than those made by humans

    PubMed Central

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-01

    Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507
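
    As a rough illustration of the accuracy metric used in this comparison (the Pearson correlation between a judge's trait estimates and the criterion), the sketch below compares two hypothetical sets of judgments on synthetic data; the sample size, effect sizes and noise levels are invented and do not reproduce the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: self-reported trait scores and two sets of judgments
# (a computer model and human acquaintances). None of this reproduces the
# study; it only illustrates the accuracy metric (Pearson r).
n = 500
self_report = rng.normal(size=n)
computer_judgment = 0.6 * self_report + rng.normal(scale=0.8, size=n)
human_judgment = 0.5 * self_report + rng.normal(scale=0.9, size=n)

def judge_accuracy(judgment, criterion):
    """Accuracy as the Pearson correlation between judgment and criterion."""
    return np.corrcoef(judgment, criterion)[0, 1]

print("computer r =", round(judge_accuracy(computer_judgment, self_report), 2))
print("human    r =", round(judge_accuracy(human_judgment, self_report), 2))
```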

  4. Computer-based personality judgments are more accurate than those made by humans.

    PubMed

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.

  5. Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach.

    PubMed

    Saa, Pedro A; Nielsen, Lars K

    2016-07-15

    Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions.
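
    A minimal sketch of the likelihood-free fitting idea (Approximate Bayesian Computation by rejection) on a toy one-parameter decay model; the model, prior, tolerance and data below are invented stand-ins, not the paper's methionine-cycle model or its sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "kinetic model": first-order decay observed at a few time points.
# This stands in for the likelihood-free setting; the paper's actual model
# (the methionine cycle) and priors are far richer.
t_obs = np.array([0.5, 1.0, 2.0, 4.0])
k_true = 0.7
y_obs = np.exp(-k_true * t_obs) + rng.normal(scale=0.02, size=t_obs.size)

def simulate(k):
    return np.exp(-k * t_obs)

def distance(y_sim, y_ref):
    return np.sqrt(np.mean((y_sim - y_ref) ** 2))

# ABC rejection: sample from the prior, keep parameters whose simulated data
# fall within a tolerance of the observations.
eps, n_draws = 0.05, 20000
prior_draws = rng.uniform(0.0, 2.0, size=n_draws)      # uniform prior on k
accepted = [k for k in prior_draws if distance(simulate(k), y_obs) < eps]

print(f"accepted {len(accepted)} samples, posterior mean k = {np.mean(accepted):.3f}")
```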

  6. Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

    Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285

  7. Are accurate computations of the 13C' shielding feasible at the DFT level of theory?

    PubMed

    Vila, Jorge A; Arnautova, Yelena A; Martin, Osvaldo A; Scheraga, Harold A

    2014-02-05

    The goal of this study is twofold. First, to investigate the relative influence of the main structural factors affecting the computation of the (13)C' shielding, namely, the conformation of the residue itself and the next nearest-neighbor effects. Second, to determine whether calculation of the (13)C' shielding at the density functional level of theory (DFT), with an accuracy similar to that of the (13)C(α) shielding, is feasible with the existing computational resources. The DFT calculations, carried out for a large number of possible conformations of the tripeptide Ac-GXY-NMe, with different combinations of X and Y residues, enable us to conclude that the accurate computation of the (13)C' shielding for a given residue X depends on: (i) (ϕ,ψ) backbone torsional angles of X; (ii) side-chain conformation of X; (iii) (ϕ,ψ) torsional angles of Y; and (iv) identity of residue Y. Consequently, DFT-based quantum mechanical calculations of the (13)C' shielding, with all these factors taken into account, are two orders of magnitude more CPU demanding than the computation, with similar accuracy, of the (13)C(α) shielding. Despite not considering the effect of the possible hydrogen bond interaction of the carbonyl oxygen, this work contributes to our general understanding of the main structural factors affecting the accurate computation of the (13)C' shielding in proteins and may spur significant progress in efforts to develop new validation methods for protein structures.

  8. Improved patient size estimates for accurate dose calculations in abdomen computed tomography

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Lae

    2017-07-01

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on a cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantom. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based and geometry-based methods were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
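
    A hedged sketch of one way an effective (water-equivalent) diameter can be derived from the attenuation of a projection radiograph: the summed line integrals give the area integral of the attenuation, which is converted to a water-equivalent area and then to the diameter of an equal-area circle. The value of mu for water and the phantom geometry below are assumptions, and the paper's exact attenuation-based formula may differ.

```python
import numpy as np

MU_WATER = 0.19  # attenuation coefficient of water at the beam quality, cm^-1 (assumed value)

def effective_diameter_from_projection(p, dx_cm):
    """Estimate a water-equivalent (effective) diameter from one projection.

    p      : array of line integrals of attenuation (dimensionless), one per detector
    dx_cm  : detector spacing at the isocenter, in cm

    The sum of line integrals times the detector spacing equals the area
    integral of mu; dividing by mu_water gives a water-equivalent area, and
    the effective diameter is that of a circle with the same area.
    """
    water_equivalent_area = np.sum(p) * dx_cm / MU_WATER      # cm^2
    return 2.0 * np.sqrt(water_equivalent_area / np.pi)       # cm

# Example: a 30 cm water cylinder; the parallel-beam line integral through a
# circle of radius R at lateral position x is mu * 2*sqrt(R^2 - x^2).
R = 15.0
x = np.linspace(-20, 20, 801)
dx = x[1] - x[0]
p = MU_WATER * 2.0 * np.sqrt(np.clip(R**2 - x**2, 0.0, None))
print(f"estimated effective diameter = {effective_diameter_from_projection(p, dx):.1f} cm")
```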

  9. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
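
    The abstract concerns fast construction of the system matrix itself; for context, the sketch below shows how such a (normally sparse) matrix is consumed by row-action ART updates. The matrix, relaxation factor and test image are invented illustrations, not the authors' implementation.

```python
import numpy as np

def art_reconstruct(A, b, n_iter=500, relax=1.0):
    """Row-action ART: for each measurement i, project the current image onto
    the hyperplane A[i] @ x = b[i]. A is the (dense here, usually sparse)
    system matrix whose entries are e.g. beam-pixel intersection areas."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.einsum("ij,ij->i", A, A)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

# Tiny illustration: a made-up 4-pixel image and 6 random "rays";
# the iterates approach x_true = [1, 0, 0.5, 2] for this consistent system.
rng = np.random.default_rng(3)
x_true = np.array([1.0, 0.0, 0.5, 2.0])
A = rng.uniform(0.0, 1.0, size=(6, 4))      # stand-in for intersection areas
b = A @ x_true
print(np.round(art_reconstruct(A, b), 3))
```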

  10. An accurate and efficient computation method of the hydration free energy of a large, complex molecule.

    PubMed

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-07

    The hydration free energy (HFE) is a crucially important physical quantity to discuss various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of 〈U_UV〉/2 (where 〈U_UV〉 is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since 〈U_UV〉 can readily be computed through an MD of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA) which expresses the term as a linear combination of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA enables us to provide an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with substantial reduction of the computational load.
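
    A schematic of the decomposition described above, with the water-reorganization term written as the morphometric linear combination of the four geometric measures; the coefficient values and geometric measures below are placeholders for illustration, not results from the paper.

```python
import numpy as np

def hydration_free_energy(mean_Uuv, V, S, C, X, coeffs):
    """HFE estimate following the decomposition described in the abstract:
    HFE ~ <U_UV>/2 + water-reorganization term, with the reorganization term
    given by the morphometric approach as a linear combination of the four
    geometric measures of the solute: excluded volume V, surface area S,
    integrated mean curvature C and integrated Gaussian curvature X.

    coeffs = (p, sigma, kappa, kappa_bar) are the thermodynamic coefficients;
    in the paper they come from the energy-representation method, here they
    are placeholders to be supplied by the user."""
    p, sigma, kappa, kappa_bar = coeffs
    reorganization = p * V + sigma * S + kappa * C + kappa_bar * X
    return 0.5 * mean_Uuv + reorganization

# Purely illustrative numbers (not from the paper):
print(hydration_free_energy(mean_Uuv=-250.0,          # from an MD ensemble average
                            V=12000.0, S=5500.0, C=700.0, X=12.6,
                            coeffs=(0.012, 0.005, -0.02, 0.1)))
```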

  11. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    PubMed

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-05

    Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality.
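
    A minimal non-paraxial angular-spectrum propagation of a single layer's field via FFT, which is the building block a layer-oriented method assembles into sub-holograms; the wavelength, pixel pitch and propagation distance are arbitrary example values, and the paper's full CGH synthesis is more involved.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, pitch, z):
    """Propagate a sampled complex field u0 by distance z using the exact
    (non-paraxial) angular-spectrum transfer function. Evanescent components
    (fx^2 + fy^2 > 1/lambda^2) are suppressed."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.clip(arg, 0.0, None))
    H = np.where(arg >= 0.0, np.exp(1j * kz * z), 0.0)   # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Illustrative parameters (not from the paper): 532 nm light, 8 um pixels,
# a single point-like layer propagated 50 mm to the hologram plane.
u0 = np.zeros((512, 512), dtype=complex)
u0[256, 256] = 1.0
layer_field = angular_spectrum_propagate(u0, wavelength=532e-9, pitch=8e-6, z=0.05)
print(layer_field.shape, np.abs(layer_field).max())
```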

  12. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
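
    A compact sketch of one MacCormack 2-4 (Gottlieb-Turkel type) step for periodic 1D linear advection, run at CFL = 0.25 as in the study; the Jameson-style artificial viscosity discussed above is omitted, and this is an illustration rather than the authors' CM Fortran implementation.

```python
import numpy as np

def maccormack24_step(u, a, dt, dx):
    """One MacCormack 2-4 step (second order in time, fourth order in space)
    for the periodic linear advection equation u_t + a*u_x = 0, using the
    one-sided 7/8/1 differences; their average is the standard fourth-order
    central difference, which is what gives the scheme its spatial accuracy."""
    nu = a * dt / dx
    # predictor: forward-biased difference
    dfp = (-np.roll(u, -2) + 8.0 * np.roll(u, -1) - 7.0 * u) / 6.0
    up = u - nu * dfp
    # corrector: backward-biased difference on the predicted field
    dfm = (7.0 * up - 8.0 * np.roll(up, 1) + np.roll(up, 2)) / 6.0
    return 0.5 * (u + up - nu * dfm)

# Advect a smooth pulse once around a periodic domain at CFL = 0.25.
n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.25 * dx / a
u = np.exp(-200.0 * (x - 0.5) ** 2)
u0 = u.copy()
for _ in range(int(round(1.0 / (a * dt)))):
    u = maccormack24_step(u, a, dt, dx)
print("max error after one period:", np.abs(u - u0).max())
```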

  13. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.

  14. Novel electromagnetic surface integral equations for highly accurate computations of dielectric bodies with arbitrarily low contrasts

    SciTech Connect

    Erguel, Ozguer; Guerel, Levent

    2008-12-01

    We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.

  15. Accurate scatter correction for transmission computed tomography using an uncollimated line array source.

    PubMed

    Kojima, Akihiro; Matsumoto, Masanori; Tomiguchi, Seiji; Katsuda, Noboru; Yamashita, Yasuyuki; Motomura, Nobutoku

    2004-02-01

    We investigated scatter correction in transmission computed tomography (TCT) imaging by the combination of an uncollimated transmission source and a parallel-hole collimator. We employed the triple energy window (TEW) method for scatter correction and found that the conventional TEW method, which is accurate in emission computed tomography (ECT) imaging, needs some modification in TCT imaging based on our phantom studies. In this study a Tc-99m uncollimated line array source (area: 55 cm x 40 cm) was attached to one camera head of a dual-head gamma camera as a transmission source, and TCT data were acquired with a low-energy, general purpose (LEGP), parallel-hole collimator equipped on the other camera head. The energy spectra for 140-keV photons transmitted through various attenuating material thicknesses were measured and analyzed for scatter fraction. The results of the energy spectra showed that the photons transmitted had an energy distribution that constructs a scatter peak within the 140-keV photopeak energy window. In TCT imaging with a cylindrical water phantom, the conventional TEW method with triangle estimates (subtraction factor, K = 0.5) was not sufficient for accurate scatter correction (mu = 0.131 cm(-1) for water), whereas the modified TEW method with K = 1.0 gave an accurate attenuation coefficient of 0.153 cm(-1) for water. For TCT imaging with the combination of the uncollimated Tc-99m line array source and parallel hole collimator, the modified TEW method with K = 1.0 gives accurate TCT data for quantitative SPECT imaging, in contrast to the conventional TEW method with K = 0.5.
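
    One common reading of the triangular TEW estimate with a subtraction factor K is sketched below; the counts and window widths are invented, and the paper's exact window placement and definition of K may differ.

```python
def tew_primary_counts(c_main, c_lower, w_main, w_lower, K=1.0):
    """Triangular triple-energy-window (TEW) scatter estimate, as one common
    reading of the abstract's 'triangle estimate with subtraction factor K'.
    The scatter under the photopeak window is approximated by a triangle whose
    height is the count density in the lower sub-window; K scales the
    subtracted estimate (K = 0.5 conventional, K = 1.0 in the modified form).
    Exact window placements/widths in the paper are not given in the abstract."""
    scatter_estimate = (c_lower / w_lower) * w_main / 2.0
    return c_main - K * scatter_estimate

# Illustrative numbers only (counts and window widths are assumptions):
print(tew_primary_counts(c_main=10000, c_lower=900, w_main=28.0, w_lower=3.0, K=1.0))
```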

  16. Multislice Computed Tomography Accurately Detects Stenosis in Coronary Artery Bypass Conduits

    PubMed Central

    Duran, Cihan; Sagbas, Ertan; Caynak, Baris; Sanisoglu, Ilhan; Akpinar, Belhhan; Gulbaran, Murat

    2007-01-01

    The aim of this study was to evaluate the accuracy of multislice computed tomography in detecting graft stenosis or occlusion after coronary artery bypass grafting, using coronary angiography as the standard. From January 2005 through May 2006, 25 patients (19 men and 6 women; mean age, 54 ± 11.3 years) underwent diagnostic investigation of their bypass grafts by multislice computed tomography within 1 month of coronary angiography. The mean time elapsed after coronary artery bypass grafting was 6.2 years. In these 25 patients, we examined 65 bypass conduits (24 arterial and 41 venous) and 171 graft segments (the shaft, proximal anastomosis, and distal anastomosis). Compared with coronary angiography, the segment-based sensitivity, specificity, and positive and negative predictive values of multislice computed tomography in the evaluation of stenosis were 89%, 100%, 100%, and 99%, respectively. The patency rate for multislice computed tomography was 85% (55/65: 3 arterial and 7 venous grafts were occluded), with 100% sensitivity and specificity. From these data, we conclude that multislice computed tomography can accurately evaluate the patency and stenosis of bypass grafts during outpatient follow-up. PMID:17948078

  17. An accurate Fortran code for computing hydrogenic continuum wave functions at a wide range of parameters

    NASA Astrophysics Data System (ADS)

    Peng, Liang-You; Gong, Qihuang

    2010-12-01

    Accurate computation of hydrogenic continuum wave functions is very important in many branches of physics such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields. Although various algorithms and codes already exist, most of them are reliable only in certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances and/or large angular momentum numbers. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and corresponding Coulomb phase shifts over a wide range of parameters. Without any essential restriction on the angular momentum number, the present code is able to give reliable results over the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code to be very efficient, so it should find numerous applications in many fields such as strong field physics. Program summary - Program title: HContinuumGautchi; Catalogue identifier: AEHD_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 1233; No. of bytes in distributed program, including test data, etc.: 7405; Distribution format: tar.gz; Programming language: Fortran90 in fixed format; Computer: AMD Processors; Operating system: Linux; RAM: 20 MBytes; Classification: 2.7, 4.5; Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Although various algorithms and codes already exist, most of them are applicable and reliable only in a certain range of parameters. We present here an accurate FORTRAN program for

  18. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation

    NASA Astrophysics Data System (ADS)

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.

    2015-02-01

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.

  19. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation.

    PubMed

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W

    2015-02-21

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient's 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.

  20. IDSite: An accurate approach to predict P450-mediated drug metabolism

    PubMed Central

    Li, Jianing; Schneebeli, Severin T.; Bylund, Joseph; Farid, Ramy; Friesner, Richard A.

    2011-01-01

    Accurate prediction of drug metabolism is crucial for drug design. Since a large majority of drug metabolism involves P450 enzymes, we herein describe a computational approach, IDSite, to predict P450-mediated drug metabolism. To model induced-fit effects, IDSite samples the conformational space with flexible docking in Glide followed by two refinement stages using the Protein Local Optimization Program (PLOP). Sites of metabolism (SOMs) are predicted according to a physical-based score that evaluates the potential of atoms to react with the catalytic iron center. As a preliminary test, we present in this paper the prediction of hydroxylation and O-dealkylation sites mediated by CYP2D6 using two different models: a physical-based simulation model, and a modification of this model in which a small number of parameters are fit to a training set. Without fitting any parameters to experimental data, the Physical IDSite scoring recovers 83% of the experimental observations for 56 compounds with a very low false positive rate. With only 4 fitted parameters, the Fitted IDSite was trained with the subset of 36 compounds and successfully applied to the other 20 compounds, recovering 94% of the experimental observations with high sensitivity and specificity for both sets. PMID:22247702

  1. Structural stability augmentation system design using BODEDIRECT: A quick and accurate approach

    NASA Technical Reports Server (NTRS)

    Goslin, T. J.; Ho, J. K.

    1989-01-01

    A methodology is presented for a modal suppression control law design using flight test data instead of mathematical models to obtain the required gain and phase information about the flexible airplane. This approach is referred to as BODEDIRECT. The purpose of the BODEDIRECT program is to provide a method of analyzing the modal phase relationships measured directly from the airplane. These measurements can be achieved with a frequency sweep at the control surface input while measuring the outputs of interest. The measured Bode-models can be used directly for analysis in the frequency domain, and for control law design. Besides providing a more accurate representation for the system inputs and outputs of interest, this method is quick and relatively inexpensive. To date, the BODEDIRECT program has been tested and verified for computational integrity. Its capabilities include calculation of series, parallel and loop closure connections between Bode-model representations. System PSD, together with gain and phase margins of stability may be calculated for successive loop closures of multi-input/multi-output systems. Current plans include extensive flight testing to obtain a Bode-model representation of a commercial aircraft for design of a structural stability augmentation system.
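
    To illustrate the kind of frequency-domain analysis described (working directly from measured gain and phase rather than a mathematical model), the sketch below interpolates gain and phase margins from Bode data; the plant and frequency grid are synthetic stand-ins for flight-test sweeps, not BODEDIRECT itself.

```python
import numpy as np

def margins_from_bode(freq_hz, gain_db, phase_deg):
    """Gain and phase margins from a measured open-loop frequency response.

    Phase margin: 180 deg + phase at the gain-crossover frequency (|G| = 0 dB).
    Gain margin : -gain (dB) at the phase-crossover frequency (phase = -180 deg).
    Values are linearly interpolated between measured points; gain and phase
    are assumed to decrease monotonically with frequency."""
    fc = np.interp(0.0, gain_db[::-1], freq_hz[::-1])       # gain crossover
    pm = 180.0 + np.interp(fc, freq_hz, phase_deg)
    fp = np.interp(-180.0, phase_deg[::-1], freq_hz[::-1])  # phase crossover
    gm = -np.interp(fp, freq_hz, gain_db)
    return pm, gm

# Synthetic "swept" data standing in for flight-test measurements:
# a second-order plant with a small delay, sampled over 0.1-10 Hz.
f = np.logspace(-1, 1, 200)
w = 2 * np.pi * f
G = 10.0 / ((1j * w) * (1j * w * 0.2 + 1)) * np.exp(-1j * w * 0.05)
pm, gm = margins_from_bode(f, 20 * np.log10(np.abs(G)),
                           np.degrees(np.unwrap(np.angle(G))))
print(f"phase margin = {pm:.1f} deg, gain margin = {gm:.1f} dB")
```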

  2. Computer Algebra, Instrumentation and the Anthropological Approach

    ERIC Educational Resources Information Center

    Monaghan, John

    2007-01-01

    This article considers research and scholarship on the use of computer algebra in mathematics education following the instrumentation and the anthropological approaches. It outlines what these approaches are, positions them with regard to other approaches, examines tensions between the two approaches and makes suggestions for how work in this…

  3. Time-accurate Navier-Stokes computations of classical two-dimensional edge tone flow fields

    NASA Astrophysics Data System (ADS)

    Liu, B. L.; O'Farrell, J. M.; Jones, Jess H.

    1990-01-01

    Time-accurate Navier-Stokes computations were performed to study a Class II (acoustic) whistle, the edge tone, and gain knowledge of the vortex-acoustic coupling mechanisms driving production of these tones. Results were obtained by solving the full Navier-Stokes equations for laminar compressible air flow of a two-dimensional jet issuing from a slit interacting with a wedge. Cases considered were determined by varying the distance from the slit to the edge. Flow speed was kept constant at 1750 cm/sec as was the slit thickness of 0.1 cm, corresponding to conditions in the experiments of Brown. Excellent agreement was obtained in all four edge tone stage cases between the present computational results and the experimentally obtained results of Brown. Specific edge tone generated phenomena and further confirmation of certain theories concerning these phenomena were brought to light in this analytical simulation of edge tones.

  4. Computer-Based Training: An Institutional Approach.

    ERIC Educational Resources Information Center

    Barker, Philip; Manji, Karim

    1992-01-01

    Discussion of issues related to computer-assisted learning (CAL) and computer-based training (CBT) describes approaches to electronic learning; principles underlying courseware development to support these approaches; and a plan for creation of a CAL/CBT development center, including its functional role, campus services, staffing, and equipment…

  5. Suite of finite element algorithms for accurate computation of soft tissue deformation for surgical simulation

    PubMed Central

    Joldes, Grand Roman; Wittek, Adam; Miller, Karol

    2008-01-01

    Real time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast, accurate and can handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used and addressing common problems such as hourglass control and locking. The computation examples presented prove that by using these algorithms, real time computations become possible without sacrificing the accuracy of the results. For a brain model having more than 7000 degrees of freedom, we computed the reaction forces due to indentation at a frequency of around 1000 Hz using a standard dual core PC. Similarly, we conducted simulation of brain shift using a model with more than 50 000 degrees of freedom in less than a minute. The speed benefits of our models result from combining the Total Lagrangian formulation with explicit time integration and low order finite elements. PMID:19152791
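
    The speed argument rests on explicit time integration with a lumped (diagonal) mass matrix; a toy sketch of such an integrator follows, with an invented linear internal-force model standing in for the Total Lagrangian element routines. The kinematic update is a simple explicit variant, not the authors' exact scheme.

```python
import numpy as np

def explicit_dynamic_relaxation(internal_force, f_ext, mass, dt, n_steps, damping=0.0):
    """Explicit time stepping with a lumped (diagonal) mass matrix, the kind of
    integration underlying Total-Lagrangian explicit FE codes:

        a^n     = M^-1 (f_ext - f_int(u^n) - c * v^n)
        u^{n+1} = u^n + dt * v^n + 0.5 * dt^2 * a^n
        v^{n+1} = v^n + dt * a^n   (simple kinematic update; production codes
                                    typically use the central-difference/leapfrog form)

    internal_force(u) stands in for the element-level internal force; here it
    is supplied by the caller (a toy linear spring model below)."""
    u = np.zeros_like(f_ext)
    v = np.zeros_like(f_ext)
    for _ in range(n_steps):
        a = (f_ext - internal_force(u) - damping * v) / mass
        u = u + dt * v + 0.5 * dt * dt * a
        v = v + dt * a
    return u

# Toy example: 3 nodes on linear springs to ground (k = 10), constant loads;
# with damping the solution relaxes to the static answer f_ext / k.
k = 10.0
f_ext = np.array([1.0, 2.0, 0.5])
mass = np.full(3, 0.01)
u = explicit_dynamic_relaxation(lambda u: k * u, f_ext, mass, dt=1e-3,
                                n_steps=20000, damping=0.5)
print(np.round(u, 3))   # approx [0.1, 0.2, 0.05]
```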

  6. Integrative approaches to computational biomedicine

    PubMed Central

    Coveney, Peter V.; Diaz-Zuccarini, Vanessa; Graf, Norbert; Hunter, Peter; Kohl, Peter; Tegner, Jesper; Viceconti, Marco

    2013-01-01

    The new discipline of computational biomedicine is concerned with the application of computer-based techniques and particularly modelling and simulation to human health. Since 2007, this discipline has been synonymous, in Europe, with the name given to the European Union's ambitious investment in integrating these techniques with the eventual aim of modelling the human body as a whole: the virtual physiological human. This programme and its successors are expected, over the next decades, to transform the study and practice of healthcare, moving it towards the priorities known as ‘4P's’: predictive, preventative, personalized and participatory medicine.

  7. Computational modelling approaches to vaccinology.

    PubMed

    Pappalardo, Francesco; Flower, Darren; Russo, Giulia; Pennisi, Marzio; Motta, Santo

    2015-02-01

    Excepting the Peripheral and Central Nervous Systems, the Immune System is the most complex of somatic systems in higher animals. This complexity manifests itself at many levels from the molecular to that of the whole organism. Much insight into this confounding complexity can be gained through computational simulation. Such simulations range in application from epitope prediction through to the modelling of vaccination strategies. In this review, we selectively evaluate various key applications relevant to computational vaccinology; these include techniques that operate at different scales, from the molecular level to whole organisms and even to the population level.

  8. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics.

    PubMed

    Pineda, M; Stamatakis, M

    2017-07-14

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.

  9. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    NASA Astrophysics Data System (ADS)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.

  10. Computational approaches to motor control.

    PubMed

    Flash, T; Sejnowski, T J

    2001-12-01

    New concepts and computational models that integrate behavioral and neurophysiological observations have addressed several of the most fundamental long-standing problems in motor control. These problems include the selection of particular trajectories among the large number of possibilities, the solution of inverse kinematics and dynamics problems, motor adaptation and the learning of sequential behaviors.

  11. Computational approaches to motor control

    PubMed Central

    Flash, Tamar; Sejnowski, Terrence J

    2010-01-01

    New concepts and computational models that integrate behavioral and neurophysiological observations have addressed several of the most fundamental long-standing problems in motor control. These problems include the selection of particular trajectories among the large number of possibilities, the solution of inverse kinematics and dynamics problems, motor adaptation and the learning of sequential behaviors. PMID:11741014

  12. Accurate computation of reduction potentials of 4Fe-4S clusters indicates a carboxylate shift in Pyrococcus furiosus ferredoxin.

    PubMed

    Jensen, Kasper P; Ooi, Bee-Lean; Christensen, Hans E M

    2007-10-15

    This work describes the computation and accurate reproduction of subtle shifts in reduction potentials for two mutants of the iron-sulfur protein Pyrococcus furiosus ferredoxin. The computational models involved only first-sphere ligands and differed with respect to one ligand, either acetate (aspartate), thiolate (cysteine), or methoxide (serine). Standard procedures using vacuum optimization gave qualitatively wrong results and errors up to 0.07 V. Using electrostatically screened geometries and large basis sets for expanding the wave functions gave quantitatively correct results, with errors of only 0.03 V. Correspondingly, only this approach predicted a change in the coordination mode of aspartate (i.e., a carboxylate shift) accompanying the reduction of the wild-type cluster, confirming results from synthetic models and explaining why electrostatic screening is necessary. Hence, the carboxylate shift appears to occur in the proteins from which data were collected. The results represent the most accurate predictions of shifts in reduction potentials for modified proteins, the success in part being due to the similar nature of the three amino acid ligands involved. The predicted carboxylate shift is expected to tune aspartate's degree of electron donation to the cluster's two oxidation states, thus making the reversible redox reaction feasible.

  13. Simple but accurate GCM-free approach for quantifying anthropogenic climate change

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.

    2014-12-01

    We are so used to analysing the climate with the help of giant computer models (GCMs) that it is easy to get the impression that they are indispensable. Yet anthropogenic warming is so large (roughly 0.9 °C) that it turns out that it is straightforward to quantify it with more empirically based methodologies that can be readily understood by the layperson. The key is to use the CO2 forcing as a linear surrogate for all the anthropogenic effects from 1880 to the present (implicitly including all effects due to Greenhouse Gases, aerosols and land use changes). To a good approximation, double the economic activity, double the effects. The relationship between the forcing and global mean temperature is extremely linear, as can be seen graphically and understood without fancy statistics [Lovejoy, 2014a] (see the attached figure and http://www.physics.mcgill.ca/~gang/Lovejoy.htm). To an excellent approximation, the deviations from the linear forcing-temperature relation can be interpreted as the natural variability. For example, this direct yet accurate approach makes it graphically obvious that the "pause" or "hiatus" in the warming since 1998 is simply a natural cooling event that has roughly offset the anthropogenic warming [Lovejoy, 2014b]. Rather than trying to prove that the warming is anthropogenic, with a little extra work (and some nonlinear geophysics theory and pre-industrial multiproxies) we can disprove the competing theory that it is natural. This approach leads to the estimate that the probability of the industrial-scale warming being a giant natural fluctuation is ≈0.1%: it can be dismissed. This destroys the last climate skeptic argument - that the models are wrong and the warming is natural. It finally allows for closure of the debate. In this talk we argue that this new, direct, simple, intuitive approach provides an indispensable tool for communicating - and convincing - the public of both the reality and the amplitude of anthropogenic warming
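
    A minimal sketch of the regression step described above: fit the global temperature anomaly linearly against a CO2-based forcing surrogate and read the residual as natural variability. The series below are synthetic placeholders, not the observational records or Lovejoy's analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-ins for the observational series (NOT the actual data):
years = np.arange(1880, 2014)
co2 = 285.0 * np.exp(0.0025 * (years - 1880))         # fake CO2 concentration, ppm
forcing = np.log2(co2 / 277.0)                        # CO2 doublings as the anthropogenic surrogate
temperature = 2.0 * forcing + 0.1 * rng.standard_normal(years.size)  # fake GMT anomaly, K

# Linear fit T = slope * forcing + const; the slope is the effective sensitivity
# to the surrogate, and the residual is read as natural variability.
slope, intercept = np.polyfit(forcing, temperature, 1)
anthropogenic = slope * forcing + intercept
natural = temperature - anthropogenic

print(f"warming attributed to the surrogate since 1880 = {anthropogenic[-1] - anthropogenic[0]:.2f} K")
print(f"std of residual ('natural') variability = {natural.std():.2f} K")
```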

  14. Can emergency physicians accurately rule out clinically important cervical spine injuries by using computed tomography?

    PubMed

    Van Zyl, Hendrik P; Bilbey, James; Vukusic, Alan; Ring, Todd; Oakes, Jennifer; Williamson, Lykke D; Mitchell, Ian V

    2014-03-01

    Emergency physicians are expected to rule out clinically important cervical spine injuries using clinical skills and imaging. Our objective was to determine whether emergency physicians could accurately rule out clinically important cervical spine injuries using computed tomographic (CT) imaging of the cervical spine. Fifteen emergency physicians were enrolled to interpret a sample of 50 cervical spine CT scans in a nonclinical setting. The sample contained a 30% incidence of cervical spine injury. After a 2-hour review session, the participants interpreted the CT scans and categorized them into either a suspected cervical spine injury or no cervical spine injury. Participants were asked to specify the location and type of injury. The gold standard interpretation was the combined opinion of two staff radiologists. Emergency physicians correctly identified 182 of the 210 abnormal cases with cervical spine injury. The sensitivity of emergency physicians was 87% (95% confidence interval [CI] 82-91), and the specificity was 76% (95% CI 74-77). The negative likelihood ratio was 0.18 (95% CI 0.12-0.25). Experienced emergency physicians successfully identified a large proportion of cervical spine injuries on CT; however, they were not sufficiently sensitive to accurately exclude clinically important injuries. Emergency physicians should rely on a radiologist review of cervical spine CT scans prior to discontinuing cervical spine precautions.
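
    The headline statistics follow directly from the pooled counts quoted in the abstract; a short check (specificity is taken as reported, since its raw counts are not given):

```python
# Recomputing the summary statistics quoted in the abstract from its counts.
true_positives, abnormal_cases = 182, 210
sensitivity = true_positives / abnormal_cases            # 182/210 = 0.87
specificity = 0.76                                       # as reported (raw counts not given)
negative_likelihood_ratio = (1.0 - sensitivity) / specificity

print(f"sensitivity = {sensitivity:.2f}")                # 0.87
print(f"negative LR = {negative_likelihood_ratio:.2f}")  # 0.18, matching the abstract
```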

  15. Accurate and efficient computation of nonlocal potentials based on Gaussian-sum approximation

    NASA Astrophysics Data System (ADS)

    Exl, Lukas; Mauser, Norbert J.; Zhang, Yong

    2016-12-01

    We introduce an accurate and efficient method for the numerical evaluation of nonlocal potentials, including the 3D/2D Coulomb, 2D Poisson and 3D dipole-dipole potentials. Our method is based on a Gaussian-sum approximation of the singular convolution kernel combined with a Taylor expansion of the density. Starting from the convolution formulation of the nonlocal potential, for smooth and fast decaying densities, we make full use of the Fourier pseudospectral (plane wave) approximation of the density and a separable Gaussian-sum approximation of the kernel in an interval where the singularity (the origin) is excluded. The potential is separated into a regular integral and a near-field singular correction integral. The former is computed with the Fourier pseudospectral method, while the latter is well resolved utilizing a low-order Taylor expansion of the density. Both parts are accelerated by fast Fourier transforms (FFT). The method is accurate (14-16 digits), efficient (O(N log N) complexity), low in storage, easily adaptable to other kernels, applicable for anisotropic densities and highly parallelizable.
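
    To illustrate the core idea of a Gaussian-sum approximation of a singular kernel, the sketch below discretizes the identity 1/r = (2/sqrt(pi)) * integral_0^inf exp(-r^2 t^2) dt on a logarithmic grid; the grid limits and term count are arbitrary choices, and the paper's kernel treatment and error control are considerably more sophisticated.

```python
import numpy as np

def gaussian_sum_coulomb(n_terms=100, s_min=-10.0, s_max=4.0):
    """Weights and exponents of a Gaussian-sum approximation of 1/r, obtained by
    discretizing 1/r = (2/sqrt(pi)) * int_0^inf exp(-r^2 t^2) dt with the
    substitution t = exp(s) and the trapezoid rule on a uniform s-grid.
    The s-range and number of terms control which r-interval is resolved."""
    s = np.linspace(s_min, s_max, n_terms)
    h = s[1] - s[0]
    weights = 2.0 * h / np.sqrt(np.pi) * np.exp(s)
    exponents = np.exp(2.0 * s)
    return weights, exponents

w, tau = gaussian_sum_coulomb()
r = np.array([0.5, 1.0, 2.0, 5.0])
approx = (w[None, :] * np.exp(-np.outer(r**2, tau))).sum(axis=1)
print(np.max(np.abs(approx - 1.0 / r)))   # small on the resolved r-range
```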

  16. Research on the rapid and accurate positioning and orientation approach for land missile-launching vehicle.

    PubMed

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-10-20

    Getting a land vehicle's accurate position, azimuth and attitude rapidly is significant for vehicle based weapons' combat effectiveness. In this paper, a new approach to acquire a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. Then, it calculates the vehicle's accurate position, azimuth and attitudes from the rough position and orientation provided by vehicle based navigation systems and no fewer than three couples of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS), thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value; consequently, it does not impose high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle based navigation systems, even in high-precision applications. This paper describes the system's working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracy of 0.2 m and 20″, respectively, in less than 3 min.

  17. Research on the Rapid and Accurate Positioning and Orientation Approach for Land Missile-Launching Vehicle

    PubMed Central

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

    Getting a land vehicle’s accurate position, azimuth and attitude rapidly is significant for vehicle based weapons’ combat effectiveness. In this paper, a new approach to acquire a vehicle’s accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. Then, it calculates the vehicle’s accurate position, azimuth and attitudes from the rough position and orientation provided by vehicle based navigation systems and no fewer than three couples of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS), thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm’s iterative initial value; consequently, it does not impose high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle based navigation systems, even in high-precision applications. This paper describes the system’s working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracy of 0.2 m and 20″, respectively, in less than 3 min. PMID:26492249

  18. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-02-29

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect

  19. Massively parallel computation of accurate densities for N-body dark matter simulations using the phase-space-element method

    NASA Astrophysics Data System (ADS)

    Kaehler, R.

    2017-07-01

    This paper presents an accurate density computation approach for large dark matter simulations, based on a recently introduced phase-space tessellation technique and designed for massively parallel, heterogeneous cluster architectures. We discuss a memory efficient construction of an oct-tree structure to sample the mass densities with locally adaptive resolution, according to the features of the underlying tetrahedral tessellation. We propose an efficient GPU implementation for the computationally intensive operation of intersecting the tetrahedra with the cubical cells of the deposit grid, that achieves a speedup of almost an order of magnitude compared to an optimized CPU version. We discuss two dynamic load balancing schemes - the first exchanges particle data between cluster nodes and deposits the tetrahedra for each block of the grid structure on single nodes, whereas the second approach uses global reduction operations to obtain the total masses. We demonstrate the scalability of our algorithms with up to 256 GPUs and TB-sized simulation snapshots, resulting in tessellations with more than 400 billion tetrahedra.

  20. Time-Accurate Computation of Viscous Flow Around Deforming Bodies Using Overset Grids

    SciTech Connect

    Fast, P; Henshaw, W D

    2001-04-02

    Dynamically evolving boundaries and deforming bodies interacting with a flow are commonly encountered in fluid dynamics. However, the numerical simulation of flows with dynamic boundaries is difficult with current methods. We propose a new method for studying such problems. The key idea is to use the overset grid method with a thin, body-fitted grid near the deforming boundary, while using fixed Cartesian grids to cover most of the computational domain. Our approach combines the strengths of earlier moving overset grid methods for rigid body motion, and unstructured grid methods for flow-structure interactions. Large scale deformation of the flow boundaries can be handled without a global regridding, and in a computationally efficient way. In terms of computational cost, even a full overset grid regridding is significantly cheaper than a full regridding of an unstructured grid for the same domain, especially in three dimensions. Numerical studies are used to verify accuracy and convergence of our flow solver. As a computational example, we consider two-dimensional incompressible flow past a flexible filament with prescribed dynamics.

  1. Angpow: a software for the fast computation of accurate tomographic power spectra

    NASA Astrophysics Data System (ADS)

    Campagne, J.-E.; Neveu, J.; Plaszczynski, S.

    2017-06-01

    Aims: The statistical distribution of galaxies is a powerful probe to constrain cosmological models and gravity. In particular, the matter power spectrum P(k) provides information about the cosmological distance evolution and the galaxy clustering. However, the building of P(k) from galaxy catalogs requires a cosmological model to convert angles on the sky and redshifts into distances, which leads to difficulties when comparing data with predicted P(k) from other cosmological models, and for photometric surveys like the Large Synoptic Survey Telescope (LSST). The angular power spectrum Cℓ(z1,z2) between two bins located at redshifts z1 and z2 contains the same information as the matter power spectrum, and is free from any cosmological assumption, but the prediction of Cℓ(z1,z2) from P(k) is a costly computation when performed precisely. Methods: The Angpow software aims at quickly and accurately computing the auto (z1 = z2) and cross (z1 ≠ z2) angular power spectra between redshift bins. We describe the developed algorithm, which is based on expansions in the Chebyshev polynomial basis and on the Clenshaw-Curtis quadrature method. We validate the results with other codes, and benchmark the performance. Results: Angpow is flexible and can handle any user-defined power spectra, transfer functions, and redshift selection windows. The code is fast enough to be embedded inside programs exploring large cosmological parameter spaces through the Cℓ(z1,z2) comparison with data. We emphasize that Limber's approximation, often used to speed up the computation, gives incorrect Cℓ values for cross-correlations. The C++ code is available from https://gitlab.in2p3.fr/campagne/AngPow
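
    For reference, a minimal Clenshaw-Curtis quadrature on [-1, 1], the rule named above; this O(n^2) version is for illustration only and is unrelated to Angpow's actual implementation.

```python
import numpy as np

def clenshaw_curtis(f, n):
    """Clenshaw-Curtis quadrature of f on [-1, 1] with n+1 Chebyshev points.

    The integrand is interpolated at x_j = cos(j*pi/n); the cosine-series
    coefficients of the interpolant are computed by (slow, O(n^2)) cosine
    sums, and each even mode is integrated exactly:
        int_0^pi cos(k t) sin(t) dt = 2/(1 - k^2) for even k, 0 for odd k."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    fx = f(x)
    trap = np.ones(n + 1)
    trap[0] = trap[-1] = 0.5                      # endpoint halving in the analysis sum
    a = np.array([(2.0 / n) * np.sum(trap * fx * np.cos(k * theta))
                  for k in range(n + 1)])
    a[0] *= 0.5
    a[-1] *= 0.5                                  # halve first/last synthesis terms
    k_even = np.arange(0, n + 1, 2)
    return np.sum(a[k_even] * 2.0 / (1.0 - k_even ** 2))

# Smooth integrands converge geometrically, e.g. int_{-1}^{1} exp(x) dx = e - 1/e.
print(clenshaw_curtis(np.exp, 16), np.e - 1.0 / np.e)
```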

  2. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
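
    The subcycling idea can be illustrated on a much simpler setting than peridynamics. The hedged Python sketch below advances a 1D chain of particles in which a "fine" region takes m sub-steps per coarse step while the remaining nodes take one large step, with the coarse displacements linearly interpolated in time at the interface; the integrator, coupling, and model are simplified stand-ins, not the authors' formulation.

        import numpy as np

        def forces(u, k=1.0):
            """Internal forces of a 1D chain with nearest-neighbour bonds (fixed ends)."""
            f = np.zeros_like(u)
            f[1:-1] = k * (u[2:] - 2.0 * u[1:-1] + u[:-2])
            return f

        def multi_timestep(u, v, dt, n_steps, m, split, mass=1.0):
            """Nodes < split use the fine step dt/m (subcycling); nodes >= split use dt.
            Coarse displacements are linearly interpolated in time during fine sub-steps."""
            fine = np.arange(len(u)) < split
            for _ in range(n_steps):
                u_coarse_old = u.copy()
                # One coarse semi-implicit Euler step (evaluated for all nodes, used for coarse ones).
                a = forces(u) / mass
                v_new_coarse = v + dt * a
                u_new_coarse = u + dt * v_new_coarse
                # Subcycle the fine region against the interpolated coarse motion.
                u_fine, v_fine = u.copy(), v.copy()
                for s in range(1, m + 1):
                    frac = s / m
                    u_fine[~fine] = (1 - frac) * u_coarse_old[~fine] + frac * u_new_coarse[~fine]
                    a_f = forces(u_fine) / mass
                    v_fine[fine] += (dt / m) * a_f[fine]
                    u_fine[fine] += (dt / m) * v_fine[fine]
                u[fine], v[fine] = u_fine[fine], v_fine[fine]
                u[~fine], v[~fine] = u_new_coarse[~fine], v_new_coarse[~fine]
            return u, v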

  3. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    SciTech Connect

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.

  4. [An accurate approach to hyperspectral mineral identification based on naive bayesian classification model].

    PubMed

    He, Jin-Xin; Chen, Sheng-Bo; Wang, Yang; Wu, Yan-Fan

    2014-02-01

    The spectral absorption features of some minerals are very similar, especially for hydrothermal alteration minerals related to mineralization, and they are also influenced by other factors such as spectral mixture. As a result, many of the spectral identification approaches for minerals with similar spectral absorption features are prone to confusion and misjudgment. Therefore, to address the problem of "the same mineral having different spectra, and the same spectrum belonging to different minerals", this paper proposes an accurate approach to hyperspectral mineral identification based on a naive Bayesian classification model. By testing and analyzing muscovite and kaolinite, the two typical alteration minerals, and comparing this approach with spectral angle matching, binary encoding and spectral feature fitting, the three popular spectral identification approaches, the results show that this approach discriminates more clearly among different minerals with similar spectra and has higher classification accuracy, since it is based on the position of the absorption feature, the absorption depth and the slope of the continuum.
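
    A minimal sketch of the classification step, assuming the three diagnostic features named above (absorption feature position, absorption depth, continuum slope) have already been extracted from each spectrum; the training values and scikit-learn usage below are illustrative, not the paper's data or code.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Illustrative training data: [absorption position (nm), absorption depth, continuum slope].
        # Values are placeholders, not measured spectra.
        X_train = np.array([
            [2200.0, 0.35, -0.002],   # muscovite-like
            [2205.0, 0.30, -0.001],
            [2160.0, 0.45,  0.000],   # kaolinite-like
            [2165.0, 0.40,  0.001],
        ])
        y_train = np.array(["muscovite", "muscovite", "kaolinite", "kaolinite"])

        clf = GaussianNB().fit(X_train, y_train)

        # Posterior class probabilities for the features extracted from an unknown spectrum.
        X_new = np.array([[2198.0, 0.33, -0.002]])
        print(clf.predict(X_new), clf.predict_proba(X_new))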

  5. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    PubMed Central

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-01-01

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. PMID:18502191

  6. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    SciTech Connect

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-08-15

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput quantitative LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. In addition, we also report on the optimization of a reversed-phase LC method for the separation of lipids in these sample types.
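
    Conceptually, the AMT tag approach matches each detected LC-MS feature to database entries by accurate mass (within a ppm tolerance) and normalized elution time. A minimal Python sketch of such a lookup is given below; the tolerances, database entries, and observed values are hypothetical.

        def match_amt_tags(features, database, ppm_tol=5.0, time_tol=0.02):
            """Match measured (mass, normalized elution time) features to AMT tag entries.

            features : list of (mass, net) tuples from an LC-MS run
            database : list of (name, mass, net) AMT tag entries
            Tolerances are illustrative; real pipelines calibrate them per data set.
            """
            matches = []
            for mass, net in features:
                for name, db_mass, db_net in database:
                    ppm_err = abs(mass - db_mass) / db_mass * 1e6
                    if ppm_err <= ppm_tol and abs(net - db_net) <= time_tol:
                        matches.append((mass, net, name, ppm_err))
            return matches

        # Example with hypothetical lipid entries.
        db = [("PC(34:1)", 760.5851, 0.62), ("PE(36:2)", 744.5538, 0.58)]
        obs = [(760.5874, 0.625), (701.5001, 0.40)]
        print(match_amt_tags(obs, db))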

  7. Finding accurate frontiers: A knowledge-intensive approach to relational learning

    NASA Technical Reports Server (NTRS)

    Pazzani, Michael; Brunk, Clifford

    1994-01-01

    An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

  8. When do perturbative approaches accurately capture the dynamics of complex quantum systems?

    PubMed Central

    Fruchtman, Amir; Lambert, Neill; Gauger, Erik M.

    2016-01-01

    Understanding the dynamics of higher-dimensional quantum systems embedded in a complex environment remains a significant theoretical challenge. While several approaches yielding numerically converged solutions exist, these are computationally expensive and often provide only limited physical insight. Here we address the question: when do more intuitive and simpler-to-compute second-order perturbative approaches provide adequate accuracy? We develop a simple analytical criterion and verify its validity for the case of the much-studied FMO dynamics as well as the canonical spin-boson model. PMID:27335176

  9. Efficient computational methods for accurately predicting reduction potentials of organic molecules.

    PubMed

    Speelman, Amy L; Gillmore, Jason G

    2008-06-26

    A simple computational approach for predicting ground-state reduction potentials based upon gas phase geometry optimizations at a moderate level of density functional theory followed by single-point energy calculations at higher levels of theory in the gas phase or with polarizable continuum solvent models is described. Energies of the gas phase optimized geometries of the S0 and one-electron-reduced D0 states of 35 planar aromatic organic molecules spanning three distinct families of organic photooxidants are computed in the gas phase as well as in implicit solvent with IPCM and CPCM solvent models. Correlation of the D0 - S0 energy difference (essentially an electron affinity) with experimental reduction potentials from the literature (in acetonitrile vs SCE) within a single family, or across families when solvent models are used, yields correlations with r(2) values in excess of 0.97 and residuals of about 100 mV or less, without resorting to computationally expensive vibrational calculations or thermodynamic cycles.
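
    The final step is an ordinary least-squares correlation between the computed D0 - S0 energy difference and experimental reduction potentials. The small Python sketch below uses placeholder numbers to illustrate the fit and how it would be used to predict a new compound's potential.

        import numpy as np

        # Placeholder data: computed D0 - S0 energy differences (eV) and experimental
        # reduction potentials (V vs SCE) for a hypothetical family of photooxidants.
        ea_computed = np.array([2.10, 2.35, 2.60, 2.85, 3.05])
        e_red_exp   = np.array([-1.10, -0.85, -0.62, -0.40, -0.21])

        # Least-squares line E_red = a * EA + b and its r^2.
        a, b = np.polyfit(ea_computed, e_red_exp, 1)
        pred = a * ea_computed + b
        ss_res = np.sum((e_red_exp - pred) ** 2)
        ss_tot = np.sum((e_red_exp - e_red_exp.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        print(f"slope={a:.3f} V/eV, intercept={b:.3f} V, r^2={r2:.4f}")

        # Predict the reduction potential of a new candidate from its computed EA.
        print("predicted E_red:", a * 2.50 + b)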

  10. Toward exascale computing through neuromorphic approaches.

    SciTech Connect

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  11. Hierarchical Liouville-space approach for accurate and universal characterization of quantum impurity systems.

    PubMed

    Li, ZhenHua; Tong, NingHua; Zheng, Xiao; Hou, Dong; Wei, JianHua; Hu, Jie; Yan, YiJing

    2012-12-28

    A hierarchical equations of motion based numerical approach is developed for accurate and efficient evaluation of dynamical observables of strongly correlated quantum impurity systems. This approach is capable of describing quantitatively Kondo resonance and Fermi-liquid characteristics, achieving the accuracy of the latest high-level numerical renormalization group approach, as demonstrated on single-impurity Anderson model systems. Its application to a two-impurity Anderson model results in differential conductance versus external bias, which correctly reproduces the continuous transition from Kondo states of individual impurity to singlet spin states formed between two impurities. The outstanding performance on characterizing both equilibrium and nonequilibrium properties of quantum impurity systems makes the hierarchical equations of motion approach potentially useful for addressing strongly correlated lattice systems in the framework of dynamical mean-field theory.

  12. Accurate optimization of amino acid form factors for computing small-angle X-ray scattering intensity of atomistic protein structures

    SciTech Connect

    Tong, Dudu; Yang, Sichun; Lu, Lanyuan

    2016-06-20

    Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.

  13. Accurate computation of surface stresses and forces with immersed boundary methods

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Liska, Sebastian; Morley, Benjamin; Colonius, Tim

    2016-09-01

    Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is applied as a post-processing procedure, so that the convergence of the velocity field is unaffected. We demonstrate the efficacy of the method by computing stresses and forces that converge to the physical stresses and forces for several test problems.

  14. Gasdynamic approach to small plumes computation

    NASA Astrophysics Data System (ADS)

    Genkin, L.; Baer, M.; Falcovitz, J.

    1993-07-01

    The semi-inverse marching characteristics scheme SIMA was extended to treat rotational flows; it is applied to computation of free plumes, starting out from non-uniform nozzle exit flow that reflects substantial viscous effects. For lack of measurements of exit flow in small nozzles, the exit plane flow is approximated by introducing a Power Law Interpolation (PLI) between the exit plane center and lip values. The exit plane flow variables thus approximated are Mach number, pressure, flow angle and stagnation temperature. This choice is guided by gasdynamic considerations of exhaust flow from small nozzles into vacuum. The PLI is adjusted so as to obtain a match between computations and measurements at intermediate range from the nozzle. Computed plumes were found to be in good agreement with five different sets of small plume experiments. Comparative computations were performed using two alternate methods: the Boynton-Simons point-source approximation, and SIMA computation that started out from a uniform exit flow. It is demonstrated that for small nozzles having an exit flow dominated by viscous effects, the combined SIMA/PLI computational method is reasonably accurate and is clearly superior to either of the two alternate methods.

  15. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    SciTech Connect

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang E-mail: jing.xiong@siat.ac.cn; Hu, Ying; Xiong, Jing E-mail: jing.xiong@siat.ac.cn; Zhang, Jianwei

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
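
    The volume overlap metrics used above are straightforward to compute from binary masks. The Python sketch below evaluates the volume difference and Dice similarity coefficient for two small synthetic masks; the surface distance metrics (ASSD, RMSSSD, MSSD) require surface extraction and are omitted here.

        import numpy as np

        def overlap_metrics(seg, ref, voxel_volume_mm3=1.0):
            """Volume difference (mm^3) and Dice similarity coefficient (%) between
            a binary segmentation and a binary reference mask."""
            seg = seg.astype(bool)
            ref = ref.astype(bool)
            vd = abs(seg.sum() - ref.sum()) * voxel_volume_mm3
            dsc = 200.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
            return vd, dsc

        # Example with two synthetic masks and an illustrative voxel volume.
        a = np.zeros((10, 10, 10), dtype=bool); a[2:7, 2:7, 2:7] = True
        b = np.zeros((10, 10, 10), dtype=bool); b[3:8, 2:7, 2:7] = True
        print(overlap_metrics(a, b, voxel_volume_mm3=0.09))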

  16. Accurate vibrational spectra via molecular tailoring approach: a case study of water clusters at MP2 level.

    PubMed

    Sahu, Nityananda; Gadre, Shridhar R

    2015-01-07

    In spite of the recent advances in parallel algorithms and computer hardware, high-level calculation of vibrational spectra of large molecules is still an uphill task. To overcome this, significant effort has been devoted to the development of new algorithms based on fragmentation methods. The present work provides the details of an efficient and accurate procedure for computing the vibrational spectra of large clusters employing the molecular tailoring approach (MTA). The errors in the Hessian matrix elements and dipole derivatives arising due to the approximate nature of MTA are reduced by grafting the corrections from a smaller basis set. The algorithm has been tested for obtaining vibrational spectra of neutral and charged water clusters at the Møller-Plesset second-order level of theory, and benchmarking them against the respective full calculation (FC) and/or experimental results. For (H2O)16 clusters, the estimated vibrational frequencies are found to differ by a maximum of 2 cm(-1) with reference to the corresponding FC values. Unlike the FC, the MTA-based calculations including the grafting procedure can be performed on limited hardware, yet take a fraction of the FC time. The present methodology, thus, opens up the possibility of accurate estimation of the vibrational spectra of large molecular systems, which is otherwise impossible or formidable.

  17. Accurate vibrational spectra via molecular tailoring approach: A case study of water clusters at MP2 level

    NASA Astrophysics Data System (ADS)

    Sahu, Nityananda; Gadre, Shridhar R.

    2015-01-01

    In spite of the recent advances in parallel algorithms and computer hardware, high-level calculation of vibrational spectra of large molecules is still an uphill task. To overcome this, significant effort has been devoted to the development of new algorithms based on fragmentation methods. The present work provides the details of an efficient and accurate procedure for computing the vibrational spectra of large clusters employing the molecular tailoring approach (MTA). The errors in the Hessian matrix elements and dipole derivatives arising due to the approximate nature of MTA are reduced by grafting the corrections from a smaller basis set. The algorithm has been tested for obtaining vibrational spectra of neutral and charged water clusters at the Møller-Plesset second-order level of theory, and benchmarking them against the respective full calculation (FC) and/or experimental results. For (H2O)16 clusters, the estimated vibrational frequencies are found to differ by a maximum of 2 cm-1 with reference to the corresponding FC values. Unlike the FC, the MTA-based calculations including the grafting procedure can be performed on limited hardware, yet take a fraction of the FC time. The present methodology, thus, opens up the possibility of accurate estimation of the vibrational spectra of large molecular systems, which is otherwise impossible or formidable.

  18. The extended Koopmans' theorem for orbital-optimized methods: accurate computation of ionization potentials.

    PubMed

    Bozkaya, Uğur

    2013-10-21

    The extended Koopmans' theorem (EKT) provides a straightforward way to compute ionization potentials (IPs) from any level of theory, in principle. However, for non-variational methods, such as Møller-Plesset perturbation and coupled-cluster theories, the EKT computations can only be performed as by-products of analytic gradients as the relaxed generalized Fock matrix (GFM) and one- and two-particle density matrices (OPDM and TPDM, respectively) are required [J. Cioslowski, P. Piskorz, and G. Liu, J. Chem. Phys. 107, 6804 (1997)]. However, for the orbital-optimized methods both the GFM and OPDM are readily available and symmetric, as opposed to the standard post Hartree-Fock (HF) methods. Further, the orbital optimized methods solve the N-representability problem, which may arise when the relaxed particle density matrices are employed for the standard methods, by disregarding the orbital Z-vector contributions for the OPDM. Moreover, for challenging chemical systems, where spin or spatial symmetry-breaking problems are observed, the abnormal orbital response contributions arising from the numerical instabilities in the HF molecular orbital Hessian can be avoided by the orbital-optimization. Hence, it appears that the orbital-optimized methods are the most natural choice for the study of the EKT. In this research, the EKT for the orbital-optimized methods, such as orbital-optimized second- and third-order Møller-Plesset perturbation [U. Bozkaya, J. Chem. Phys. 135, 224103 (2011)] and coupled-electron pair theories [OCEPA(0)] [U. Bozkaya and C. D. Sherrill, J. Chem. Phys. 139, 054104 (2013)], are presented. The presented methods are applied to IPs of the second- and third-row atoms, and closed- and open-shell molecules. Performances of the orbital-optimized methods are compared with those of the counterpart standard methods. Especially, results of the OCEPA(0) method (with the aug-cc-pVTZ basis set) for the lowest IPs of the considered atoms and closed

  19. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become negligible on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of `family of secular functions' that we herein call `adaptive mode observers' is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee, at the same time, no loss of any physically existent modes and high precision, without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, as necessitated by the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of `turning point', our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.

  20. A Creative Arts Approach to Computer Programming.

    ERIC Educational Resources Information Center

    Greenberg, Gary

    1991-01-01

    Discusses "Object LOGO," a symbolic computer programing language for use in the creative arts. Describes the use of the program in approaching arts projects from textual, graphic, and musical perspectives. Suggests that use of the program can promote development of creative skills and humanities learning in general. (SG)

  1. Accurate electronic-structure description of Mn complexes: a GGA+U approach

    NASA Astrophysics Data System (ADS)

    Li, Elise Y.; Kulik, Heather; Marzari, Nicola

    2008-03-01

    Conventional density-functional approaches often fail to offer an accurate description of the spin-resolved energetics of transition-metal complexes. We will focus here on Mn complexes, where many aspects of the molecular structure and the reaction mechanisms are still unresolved - most notably in the oxygen-evolving complex (OEC) of photosystem II and the manganese catalase (MC). We apply a self-consistent GGA + U approach [1], originally designed within the DFT framework for the treatment of strongly correlated materials, to describe the geometry, the electronic and the magnetic properties of various manganese oxide complexes, finding very good agreement with higher-order ab-initio calculations. In particular, the different oxidation states of dinuclear systems containing the [Mn2O2]^n+ (n= 2, 3, 4) core are investigated, in order to mimic the basic face unit of the OEC complex. [1]. H. J. Kulik, M. Cococcioni, D. A. Scherlis, N. Marzari, Phys. Rev. Lett., 2006, 97, 103001

  2. Computational Chemical Imaging for Cardiovascular Pathology: Chemical Microscopic Imaging Accurately Determines Cardiac Transplant Rejection

    PubMed Central

    Tiwari, Saumya; Reddy, Vijaya B.; Bhargava, Rohit; Raman, Jaishankar

    2015-01-01

    Rejection is a common problem after cardiac transplants leading to significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients’ biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures. PMID:25932912

  3. Accurate potential energy surfaces with a DFT+U(R) approach.

    PubMed

    Kulik, Heather J; Marzari, Nicola

    2011-11-21

    We introduce an improvement to the Hubbard U augmented density functional approach known as DFT+U that incorporates variations in the value of self-consistently calculated, linear-response U with changes in geometry. This approach overcomes the one major shortcoming of previous DFT+U studies, i.e., the use of an averaged Hubbard U when comparing energies for different points along a potential energy surface is no longer required. While DFT+U is quite successful at providing accurate descriptions of localized electrons (e.g., d or f) by correcting self-interaction errors of standard exchange correlation functionals, we show several diatomic molecule examples where this position-dependent DFT+U(R) provides a significant two- to four-fold improvement over DFT+U predictions, when compared to accurate correlated quantum chemistry and experimental references. DFT+U(R) reduces errors in binding energies, frequencies, and equilibrium bond lengths by applying the linear-response, position-dependent U(R) at each configuration considered. This extension is most relevant where variations in U are large across the points being compared, as is the case with covalent diatomic molecules such as transition-metal oxides. We thus provide a tool for deciding whether a standard DFT+U approach is sufficient by determining the strength of the dependence of U on changes in coordinates. We also apply this approach to larger systems with greater degrees of freedom and demonstrate how DFT+U(R) may be applied automatically in relaxations, transition-state finding methods, and dynamics.

  4. Procedure for computer-controlled milling of accurate surfaces of revolution for millimeter and far-infrared mirrors

    NASA Technical Reports Server (NTRS)

    Emmons, Louisa; De Zafra, Robert

    1991-01-01

    A simple method for milling accurate off-axis parabolic mirrors with a computer-controlled milling machine is discussed. For machines with a built-in circle-cutting routine, an exact paraboloid can be milled with few computer commands and without the use of the spherical or linear approximations. The proposed method can be adapted easily to cut off-axis sections of elliptical or spherical mirrors.
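
    The geometry behind the method is simply that the paraboloid z = r^2/(4f) is a surface of revolution, so each circular tool path about the parent axis sits at a single depth. The Python sketch below generates illustrative (radius, depth) pairs for a sequence of circle cuts; the focal length, annulus limits, and pass count are made-up example values, not the paper's prescription.

        def paraboloid_circle_cuts(focal_length, r_min, r_max, n_cuts):
            """Radii (about the parent paraboloid axis) and cutting depths for milling
            z = r^2 / (4 f), suitable for a machine with a built-in circle-cutting routine.
            For an off-axis mirror the circle centre is offset from the blank centre by
            the off-axis distance; the depths themselves are unchanged."""
            cuts = []
            for i in range(n_cuts + 1):
                r = r_min + (r_max - r_min) * i / n_cuts
                z = r * r / (4.0 * focal_length)
                cuts.append((r, z))
            return cuts

        # Example: f = 200 mm parent paraboloid, annulus from 50 mm to 120 mm, 8 circle passes.
        for r, z in paraboloid_circle_cuts(200.0, 50.0, 120.0, 8):
            print(f"radius {r:7.2f} mm  depth {z:7.3f} mm")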

  5. Accurate ranking of differentially expressed genes by a distribution-free shrinkage approach.

    PubMed

    Opgen-Rhein, Rainer; Strimmer, Korbinian

    2007-01-01

    High-dimensional case-control analysis is encountered in many different settings in genomics. In order to rank genes accordingly, many different scores have been proposed, ranging from ad hoc modifications of the ordinary t statistic to complicated hierarchical Bayesian models. Here, we introduce the "shrinkage t" statistic that is based on a novel and model-free shrinkage estimate of the variance vector across genes. This is derived in a quasi-empirical Bayes setting. The new rank score is fully automatic and requires no specification of parameters or distributions. It is computationally inexpensive and can be written analytically in closed form. Using a series of synthetic and three real expression data we studied the quality of gene rankings produced by the "shrinkage t" statistic. The new score consistently leads to highly accurate rankings for the complete range of investigated data sets and all considered scenarios for across-gene variance structures.
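
    A hedged Python sketch of the idea: gene-wise variances are shrunk toward their median before forming a t-like score. The shrinkage intensity is a fixed illustrative value here, whereas the published "shrinkage t" estimates it from the data in a quasi-empirical Bayes fashion.

        import numpy as np

        def shrinkage_t(x_case, x_ctrl, lam=0.5):
            """Per-gene t-like score with gene-wise variances shrunk toward their median.

            x_case, x_ctrl : arrays of shape (n_samples, n_genes)
            lam            : shrinkage intensity in [0, 1]; fixed here for illustration.
            """
            n1, n2 = x_case.shape[0], x_ctrl.shape[0]
            v1 = x_case.var(axis=0, ddof=1)
            v2 = x_ctrl.var(axis=0, ddof=1)
            v1_star = lam * np.median(v1) + (1 - lam) * v1      # shrunken variances
            v2_star = lam * np.median(v2) + (1 - lam) * v2
            se = np.sqrt(v1_star / n1 + v2_star / n2)
            return (x_case.mean(axis=0) - x_ctrl.mean(axis=0)) / se

        # Example with simulated expression data for 1000 genes (first 50 truly shifted).
        rng = np.random.default_rng(0)
        case = rng.normal(size=(8, 1000)); case[:, :50] += 1.0
        ctrl = rng.normal(size=(8, 1000))
        scores = shrinkage_t(case, ctrl)
        print("top-ranked genes:", np.argsort(-np.abs(scores))[:10])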

  6. Accurate modeling of size and strain broadening in the Rietveld refinement: The "double-Voigt" approach

    SciTech Connect

    Balzar, D.; Ledbetter, H.

    1995-12-31

    In the "double-Voigt" approach, an exact Voigt function describes both size- and strain-broadened profiles. The lattice strain is defined in terms of physically credible mean-square strain averaged over a distance in the diffracting domains. Analysis of Fourier coefficients in a harmonic approximation for strain coefficients leads to the Warren-Averbach method for the separation of size and strain contributions to diffraction line broadening. The model is introduced in the Rietveld refinement program in the following way: Line widths are modeled with only four parameters in the isotropic case. Varied parameters are both surface- and volume-weighted domain sizes and root-mean-square strains averaged over two distances. Refined parameters determine the physically broadened Voigt line profile. Instrumental Voigt line profile parameters are added to obtain the observed (Voigt) line profile. To speed computation, the corresponding pseudo-Voigt function is calculated and used as a fitting function in refinement. This approach allows for both fast computer code and accurate modeling in terms of physically identifiable parameters.
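
    To illustrate the fitting-function substitution described above, the Python sketch below compares an exact Voigt profile (via scipy.special.voigt_profile) with a pseudo-Voigt of matched width built from the widely used Thompson-Cox-Hastings parametrization; the profile parameters are arbitrary examples, not refined values from the paper.

        import numpy as np
        from scipy.special import voigt_profile

        def pseudo_voigt(x, fwhm, eta):
            """Pseudo-Voigt: a weighted sum of a Gaussian and a Lorentzian of equal FWHM,
            commonly used as a fast stand-in for the exact Voigt in Rietveld codes."""
            sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
            lorentz = (fwhm / (2 * np.pi)) / (x**2 + (fwhm / 2) ** 2)
            return eta * lorentz + (1 - eta) * gauss

        # Exact Voigt (Gaussian sigma, Lorentzian half-width gamma) vs. a matched pseudo-Voigt.
        x = np.linspace(-2, 2, 401)
        sigma, gamma = 0.10, 0.08
        fwhm_g = 2 * np.sqrt(2 * np.log(2)) * sigma
        fwhm_l = 2 * gamma
        fwhm = (fwhm_g**5 + 2.69269*fwhm_g**4*fwhm_l + 2.42843*fwhm_g**3*fwhm_l**2
                + 4.47163*fwhm_g**2*fwhm_l**3 + 0.07842*fwhm_g*fwhm_l**4 + fwhm_l**5) ** 0.2
        q = fwhm_l / fwhm
        eta = 1.36603*q - 0.47719*q**2 + 0.11116*q**3
        exact = voigt_profile(x, sigma, gamma)
        approx = pseudo_voigt(x, fwhm, eta)
        print("max abs difference:", np.abs(exact - approx).max())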

  7. DEM sourcing guidelines for computing 1 Eö accurate terrain corrections for airborne gravity gradiometry

    NASA Astrophysics Data System (ADS)

    Annecchione, Maria; Hatch, David; Hefford, Shane W.

    2017-01-01

    In this paper we investigate digital elevation model (DEM) sourcing requirements to compute gravity gradiometry terrain corrections accurate to 1 Eötvös (Eö) at observation heights of 80 m or more above ground. Such survey heights are typical in fixed-wing airborne surveying for resource exploration where the maximum signal-to-noise ratio is sought. We consider the accuracy of terrain corrections relevant for recent commercial airborne gravity gradiometry systems operating at the 10 Eö noise level and for future systems with a target noise level of 1 Eö. We focus on the requirements for the vertical gradient of the vertical component of gravity (Gdd) because this element of the gradient tensor is most commonly interpreted qualitatively and quantitatively. Terrain correction accuracy depends on the bare-earth DEM accuracy and spatial resolution. The bare-earth DEM accuracy and spatial resolution depends on its source. Two possible sources are considered: airborne LiDAR and Shuttle Radar Topography Mission (SRTM). The accuracy of an SRTM DEM is affected by vegetation height. The SRTM footprint is also larger and the DEM resolution is thus lower. However, resolution requirements relax as relief decreases. Publicly available LiDAR data and 1 arc-second and 3 arc-second SRTM data were selected over four study areas representing end member cases of vegetation cover and relief. The four study areas are presented as reference material for processing airborne gravity gradiometry data at the 1 Eö noise level with 50 m spatial resolution. From this investigation we find that to achieve 1 Eö accuracy in the terrain correction at 80 m height airborne LiDAR data are required even when terrain relief is a few tens of meters and the vegetation is sparse. However, as satellite ranging technologies progress bare-earth DEMs of sufficient accuracy and resolution may be sourced at lesser cost. We found that a bare-earth DEM of 10 m resolution and 2 m accuracy are sufficient for

  8. Absolute Hounsfield unit measurement on noncontrast computed tomography cannot accurately predict struvite stone composition.

    PubMed

    Marchini, Giovanni Scala; Gebreselassie, Surafel; Liu, Xiaobo; Pynadath, Cindy; Snyder, Grace; Monga, Manoj

    2013-02-01

    The purpose of our study was to determine, in vivo, whether single-energy noncontrast computed tomography (NCCT) can accurately predict the presence/percentage of struvite stone composition. We retrospectively searched for all patients with struvite components on stone composition analysis between January 2008 and March 2012. Inclusion criteria were NCCT prior to stone analysis and stone size ≥4 mm. A single urologist, blinded to stone composition, reviewed all NCCT to acquire stone location, dimensions, and Hounsfield unit (HU). HU density (HUD) was calculated by dividing mean HU by the stone's largest transverse diameter. Stone analysis was performed via Fourier transform infrared spectrometry. Independent sample Student's t-test and analysis of variance (ANOVA) were used to compare HU/HUD among groups. Spearman's correlation test was used to determine the correlation between HU and stone size, and also between HU/HUD and the percentage of each component within the stone. Significance was considered if p<0.05. Forty-four patients met the inclusion criteria. Struvite was the most prevalent component with mean percentage of 50.1%±17.7%. Mean HU and HUD were 820.2±357.9 and 67.5±54.9, respectively. Struvite component analysis revealed a nonsignificant positive correlation with HU (R=0.017; p=0.912) and a nonsignificant negative correlation with HUD (R=-0.20; p=0.898). Overall, 3 (6.8%) had <20% of struvite component; 11 (25%), 25 (56.8%), and 5 (11.4%) had 21% to 40%, 41% to 60%, and 61% to 80% of struvite, respectively. ANOVA revealed no difference among groups regarding HU (p=0.68) and HUD (p=0.37), with important overlaps. When comparing pure struvite stones (n=5) with other miscellaneous stones (n=39), no difference was found for HU (p=0.09) but HUD was significantly lower for pure stones (27.9±23.6 v 72.5±55.9, respectively; p=0.006). Again, significant overlaps were seen. Pure struvite stones have significantly lower HUD than mixed struvite stones, but overlap exists. A low HUD may increase the
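
    The HU density used in the study is simply the mean HU divided by the stone's largest transverse diameter, and group comparisons reduce to standard t-tests. The Python sketch below illustrates this with placeholder measurements, not the study's data.

        import numpy as np
        from scipy import stats

        def hu_density(mean_hu, largest_diameter_mm):
            """HU density as defined above: mean HU / largest transverse diameter."""
            return mean_hu / largest_diameter_mm

        # Placeholder measurements (mean HU, diameter in mm) for pure-struvite and mixed stones.
        pure_hud  = [hu_density(h, d) for h, d in [(350, 11.0), (420, 14.5), (390, 12.0)]]
        mixed_hud = [hu_density(h, d) for h, d in [(820, 9.0), (910, 12.0), (760, 8.5), (880, 10.0)]]

        t, p = stats.ttest_ind(pure_hud, mixed_hud, equal_var=False)
        print(f"mean HUD pure={np.mean(pure_hud):.1f}, mixed={np.mean(mixed_hud):.1f}, p={p:.3f}")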

  9. Computational Approach for Developing Blood Pump

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides an overview of the computational approach to developing a ventricular assist device (VAD) which utilizes NASA aerospace technology. The VAD is used as a temporary support to sick ventricles for those who suffer from late stage congestive heart failure (CHF). The need for donor hearts is much greater than their availability, and the VAD is seen as a bridge-to-transplant. The computational issues confronting the design of a more advanced, reliable VAD include the modelling of viscous incompressible flow. A computational approach provides the possibility of quantifying the flow characteristics, which is especially valuable for analyzing compact design with highly sensitive operating conditions. Computational fluid dynamics (CFD) and rocket engine technology has been applied to modify the design of a VAD which enabled human transplantation. The computing requirement for this project is still large, however, and the unsteady analysis of the entire system from natural heart to aorta involves several hundred revolutions of the impeller. Further study is needed to assess the impact of mechanical VADs on the human body

  10. Enabling Computational Technologies for the Accurate Prediction/Description of Molecular Interactions in Condensed Phases

    DTIC Science & Technology

    2014-10-08

    Continuum Solvation Models for Computational Electrochemistry, 223rd Electrochemical Society Meeting, New Orleans, LA, April 9, 2013. ... Donald G. Truhlar, Computational electrochemistry: prediction of liquid-phase reduction potentials, Physical Chemistry Chemical Physics (08/2014)

  11. Computational approach for probing the flow through artificial heart devices.

    PubMed

    Kiris, C; Kwak, D; Rogers, S; Chang, I D

    1997-11-01

    Computational fluid dynamics (CFD) has become an indispensable part of aerospace research and design. The solution procedure for incompressible Navier-Stokes equations can be used for biofluid mechanics research. The computational approach provides detailed knowledge of the flowfield complementary to that obtained by experimental measurements. This paper illustrates the extension of CFD techniques to artificial heart flow simulation. Unsteady incompressible Navier-Stokes equations written in three-dimensional generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudocompressibility approach. It uses an implicit upwind-differencing scheme together with the Gauss-Seidel line-relaxation method. The efficiency and robustness of the time-accurate formulation of the numerical algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated by experimental measurements and other numerical solutions. In order to handle the geometric complexity and the moving boundary problems, a zonal method and an overlapped grid embedding scheme are employed, respectively. Steady-state solutions for the flow through a tilting-disk heart valve are compared with experimental measurements. Good agreement is obtained. Aided by experimental data, the flow through an entire Penn State artificial heart model is computed.

  12. A hierarchical approach to accurate predictions of macroscopic thermodynamic behavior from quantum mechanics and molecular simulations

    NASA Astrophysics Data System (ADS)

    Garrison, Stephen L.

    2005-07-01

    The combination of molecular simulations and potentials obtained from quantum chemistry is shown to be able to provide reasonably accurate thermodynamic property predictions. Gibbs ensemble Monte Carlo simulations are used to understand the effects of small perturbations to various regions of the model Lennard-Jones 12-6 potential. However, when the phase behavior and second virial coefficient are scaled by the critical properties calculated for each potential, the results obey a corresponding states relation suggesting a non-uniqueness problem for interaction potentials fit to experimental phase behavior. Several variations of a procedure collectively referred to as quantum mechanical Hybrid Methods for Interaction Energies (HM-IE) are developed and used to accurately estimate interaction energies from CCSD(T) calculations with a large basis set in a computationally efficient manner for the neon-neon, acetylene-acetylene, and nitrogen-benzene systems. Using these results and methods, an ab initio, pairwise-additive, site-site potential for acetylene is determined and then improved using results from molecular simulations using this initial potential. The initial simulation results also indicate that only a limited range of energies is important for accurate phase behavior predictions. Second virial coefficients calculated from the improved potential indicate that one set of experimental data in the literature is likely erroneous. This prescription is then applied to methanethiol. Difficulties in modeling the effects of the lone pair electrons suggest that charges on the lone pair sites negatively impact the ability of the intermolecular potential to describe certain orientations, but that the lone pair sites may be necessary to reasonably duplicate the interaction energies for several orientations. Two possible methods for incorporating the effects of three-body interactions into simulations within the pairwise-additivity formulation are also developed. A low density

  13. Effective and accurate approach for modeling of commensurate–incommensurate transition in krypton monolayer on graphite

    SciTech Connect

    Ustinov, E. A.

    2014-10-07

    The commensurate–incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method of predicting adsorption isotherms is considered, accounting for a compression of the graphite due to the krypton–carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas–solid and solid–solid system.

  14. Effective and accurate approach for modeling of commensurate-incommensurate transition in krypton monolayer on graphite.

    PubMed

    Ustinov, E A

    2014-10-07

    The commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method of predicting adsorption isotherms is considered, accounting for a compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid system.

  15. Accurate radiocarbon age estimation using "early" measurements: a new approach to reconstructing the Paleolithic absolute chronology

    NASA Astrophysics Data System (ADS)

    Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru

    2014-05-01

    This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. In order to discuss the spatio-temporal distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive dataset covering a wide area is needed. Today, some radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, any such database contains unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages had been published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. In order to evaluate the reliability of "early" data, we investigated the differences and variabilities of radiocarbon ages obtained with different pretreatments, and attempted to develop correction functions for the assessment of the reliability. The corrected ages are expected to be more reliable and can be applied to chronological research together with recently measured ages. Here, we introduce the methodological frameworks and archaeological applications.

  16. A Maximum-Entropy approach for accurate document annotation in the biomedical domain

    PubMed Central

    2012-01-01

    The increasing amount of scientific literature on the Web and the absence of efficient tools used for classifying and searching the documents are the two most important factors that influence the speed of the search and the quality of the results. Previous studies have shown that the usage of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for the relevant information and takes one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) for the full range of the explored terms (4,078 MeSH terms), and that the algorithm’s performance is resilient to terms’ ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) in the explored MeSH terms which were found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with a Naive Bayes and a Decision Trees classification approach, and we show that the Maximum Entropy based approach performed with higher F-Measure in both ambiguous and monosemous MeSH terms. PMID:22541593

  17. A Maximum-Entropy approach for accurate document annotation in the biomedical domain.

    PubMed

    Tsatsaronis, George; Macari, Natalia; Torge, Sunna; Dietze, Heiko; Schroeder, Michael

    2012-04-24

    The increasing amount of scientific literature on the Web and the absence of efficient tools used for classifying and searching the documents are the two most important factors that influence the speed of the search and the quality of the results. Previous studies have shown that the usage of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for the relevant information and takes one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) for the full range of the explored terms (4,078 MeSH terms), and that the algorithm's performance is resilient to terms' ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) in the explored MeSH terms which were found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with a Naive Bayes and a Decision Trees classification approach, and we show that the Maximum Entropy based approach performed with higher F-Measure in both ambiguous and monosemous MeSH terms.
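
    Treating each MeSH term as a binary label, a maximum-entropy classifier is equivalent to logistic regression over document features. The Python sketch below shows the shape of such a per-term annotator on a toy corpus; the documents, label, and feature choices are illustrative only, not the paper's pipeline.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy corpus and one MeSH-like binary label ("Neoplasms" vs not); real training
        # sets contain thousands of abstracts per term.
        docs = [
            "tumor growth was inhibited by the novel kinase inhibitor",
            "metastatic carcinoma cells show altered glucose metabolism",
            "blood pressure response to exercise in healthy adults",
            "randomized trial of a new antihypertensive drug",
        ]
        labels = [1, 1, 0, 0]

        # Multinomial logistic regression is the standard maximum-entropy classifier.
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit(docs, labels)

        print(model.predict_proba(["carcinoma of the lung with distant metastasis"]))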

  18. A defect corrected finite element approach for the accurate evaluation of magnetic fields on unstructured grids

    NASA Astrophysics Data System (ADS)

    Römer, Ulrich; Schöps, Sebastian; De Gersem, Herbert

    2017-04-01

    In electromagnetic simulations of magnets and machines, one is often interested in a highly accurate and local evaluation of the magnetic field uniformity. Based on local post-processing of the solution, a defect correction scheme is proposed as an easy to realize alternative to higher order finite element or hybrid approaches. Radial basis functions (RBFs) are key for the generality of the method, which in particular can handle unstructured grids. Also, contrary to conventional finite element basis functions, higher derivatives of the solution can be evaluated, as required, e.g., for deflection magnets. Defect correction is applied to obtain a solution with improved accuracy and adjoint techniques are used to estimate the remaining error for a specific quantity of interest. Significantly improved (local) convergence orders are obtained. The scheme is also applied to the simulation of a Stern-Gerlach magnet currently in operation.

  19. Computational approach to compact Riemann surfaces

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Klein, Christian

    2017-01-01

    A purely numerical approach to compact Riemann surfaces starting from plane algebraic curves is presented. The critical points of the algebraic curve are computed via a two-dimensional Newton iteration. The starting values for this iteration are obtained from the resultants with respect to both coordinates of the algebraic curve and a suitable pairing of their zeros. A set of generators of the fundamental group for the complement of these critical points in the complex plane is constructed from circles around these points and connecting lines obtained from a minimal spanning tree. The monodromies are computed by solving the defining equation of the algebraic curve on collocation points along these contours and by analytically continuing the roots. The collocation points are chosen to correspond to Chebychev collocation points for an ensuing Clenshaw-Curtis integration of the holomorphic differentials which gives the periods of the Riemann surface with spectral accuracy. At the singularities of the algebraic curve, Puiseux expansions computed by contour integration on the circles around the singularities are used to identify the holomorphic differentials. The Abel map is also computed with the Clenshaw-Curtis algorithm and contour integrals. As an application of the code, solutions to the Kadomtsev-Petviashvili equation are computed on non-hyperelliptic Riemann surfaces.

  20. Computational Approaches to Nucleic Acid Origami.

    PubMed

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. Despite all these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms.

  1. Application of a polynomial spline in higher-order accurate viscous-flow computations

    NASA Technical Reports Server (NTRS)

    Turner, M. G.; Keith, J. S.; Ghia, K. N.; Ghia, U.

    1982-01-01

    A quartic spline, S(4,2), is proposed which overcomes some of the difficulties associated with the use of splines S(5,3) and S(3,1) and provides fourth-order accurate results with relatively few grid points. The accuracy of spline S(4,2) is comparable to or better than that of the fourth-order box scheme and the compact differencing scheme. The use of spline S(4,2) is suggested as a possible way of obtaining fourth-order accurate solutions to Navier-Stokes equations.

  2. Accurate computation of the Hotelling template for SKE/BKE detection tasks

    NASA Astrophysics Data System (ADS)

    Sidky, Emil Y.; LaRoque, Samuel J.; Pan, Xiaochuan

    2008-03-01

    An accurate method for evaluating the Hotelling observer for large linear systems is generalized. The method involves solving an m-channel channelized Hotelling observer where the channels are refined in an iterative manner. Challenging numerical examples are shown in order to illustrate the method and give a sense of the convergence rates as a function of m.

  3. Accurate calculation of conformational free energy differences in explicit water: the confinement-solvation free energy approach.

    PubMed

    Esque, Jeremy; Cecchini, Marco

    2015-04-23

    The calculation of the free energy of conformation is key to understanding the function of biomolecules and has attracted significant interest in recent years. Here, we present an improvement of the confinement method that was designed for use in the context of explicit solvent MD simulations. The development involves an additional step in which the solvation free energy of the harmonically restrained conformers is accurately determined by multistage free energy perturbation simulations. As a test-case application, the newly introduced confinement/solvation free energy (CSF) approach was used to compute differences in free energy between conformers of the alanine dipeptide in explicit water. The results are in excellent agreement with reference calculations based on both converged molecular dynamics and umbrella sampling. To illustrate the general applicability of the method, conformational equilibria of met-enkephalin (5 aa) and deca-alanine (10 aa) in solution were also analyzed. In both cases, smoothly converged free-energy results were obtained in agreement with equilibrium sampling or literature calculations. These results demonstrate that the CSF method may provide conformational free-energy differences of biomolecules with small statistical errors (below 0.5 kcal/mol) and at a moderate computational cost even with a full representation of the solvent.

  4. A Novel PCR-Based Approach for Accurate Identification of Vibrio parahaemolyticus

    PubMed Central

    Li, Ruichao; Chiou, Jiachi; Chan, Edward Wai-Chi; Chen, Sheng

    2016-01-01

    A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus through targeting the blaCARB-17 like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homologous analysis showed that blaCARB-17 like genes were more conserved than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this blaCARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of blaCARB-17 like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by blaCARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification. PMID:26858713

  5. An Optimized Fluorescence-Based Bidimensional Immunoproteomic Approach for Accurate Screening of Autoantibodies

    PubMed Central

    Launay, David; Sobanski, Vincent; Dussart, Patricia; Chafey, Philippe; Broussard, Cédric; Duban-Deweer, Sophie; Vermersch, Patrick; Prin, Lionel; Lefranc, Didier

    2015-01-01

    Serological proteome analysis (SERPA) combines classical proteomic technology with effective separation of cellular protein extracts on two-dimensional gel electrophoresis, western blotting, and identification of the antigenic spot of interest by mass spectrometry. A critical point is related to the antigenic target characterization by mass spectrometry, which depends on the accuracy of the matching of antigenic reactivities on the protein spots during the 2D immunoproteomic procedures. The superimposition, based essentially on visual criteria of antigenic and protein spots, remains the major limitation of SERPA. The introduction of fluorescent dyes in proteomic strategies, commonly known as 2D-DIGE (differential in-gel electrophoresis), has boosted the qualitative capabilities of 2D electrophoresis. Based on this 2D-DIGE strategy, we have improved the conventional SERPA by developing a new and entirely fluorescence-based bi-dimensional immunoproteomic (FBIP) analysis, performed with three fluorescent dyes. To optimize the alignment of the different antigenic maps, we introduced a landmark map composed of a combination of specific antibodies. This methodological development allows simultaneous revelation of the antigenic, landmark and proteomic maps on each immunoblot. A computer-assisted process using commercially available software automatically leads to the superimposition of the different maps, ensuring accurate localization of antigenic spots of interest. PMID:26132557

  6. Digital test signal generation: An accurate SNR calibration approach for the DSN

    NASA Technical Reports Server (NTRS)

    Gutierrez-Luaces, Benito O.

    1993-01-01

    In support of the on-going automation of the Deep Space Network (DSN), a new method of generating analog test signals with accurate signal-to-noise ratio (SNR) is described. High accuracy is obtained by simultaneous generation of digital noise and signal spectra at the desired bandwidth (base-band or bandpass). The digital synthesis provides a test signal embedded in noise with the statistical properties of a stationary random process. Accuracy is dependent on test integration time and limited only by the system quantization noise (0.02 dB). The monitor and control as well as signal-processing programs reside in a personal computer (PC). Commands are transmitted to properly configure the specially designed high-speed digital hardware. The prototype can generate either two data channels modulated or not on a subcarrier, or one QPSK channel, or a residual carrier with one biphase data channel. The analog spectrum generated is in the DC to 10 MHz frequency range. These spectra may be up-converted to any desired frequency without loss of the SNR characteristics provided. Test results are presented.
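
    A minimal numerical sketch of the core idea: generate a unit-power signal embedded in white Gaussian noise scaled to a requested SNR, then verify the SNR from sample estimates. The sample rate, tone frequency and record length below are arbitrary placeholders, not the DSN hardware parameters.

      import numpy as np

      def signal_plus_noise(snr_db, n=2**16, fs=1.0e6, f0=1.0e5, seed=None):
          """Baseband test vector: unit-power sinusoid plus noise at a target SNR (dB)."""
          rng = np.random.default_rng(seed)
          t = np.arange(n) / fs
          s = np.sqrt(2.0) * np.sin(2 * np.pi * f0 * t)     # unit-power signal
          noise_power = 10.0 ** (-snr_db / 10.0)            # signal power is 1
          w = rng.normal(scale=np.sqrt(noise_power), size=n)
          return s + w, s, w

      x, s, w = signal_plus_noise(snr_db=3.0, seed=0)
      est = 10 * np.log10(np.mean(s**2) / np.mean(w**2))
      print(f"estimated SNR = {est:.3f} dB")                # approaches 3 dB as n grows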

  7. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  8. A fourth order accurate finite difference scheme for the computation of elastic waves

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.

    1986-01-01

    A finite difference scheme for elastic waves is introduced. The model is based on the first order system of equations for the velocities and stresses. The differencing is fourth order accurate on the spatial derivatives and second order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth order scheme requires two-thirds to one-half the resolution of a typical second order scheme to give comparable accuracy.
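
    For readers unfamiliar with the stencil, the sketch below applies the standard fourth-order centered first-derivative formula on interior points and checks the convergence order on a smooth function; the boundary treatment here is deliberately crude and is not the scheme of the paper.

      import numpy as np

      def d1_fourth_order(f, h):
          """Fourth-order accurate centered first derivative on interior points."""
          df = np.empty_like(f)
          df[2:-2] = (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12 * h)
          df[:2] = df[2]          # placeholder fill-in; real codes use one-sided stencils
          df[-2:] = df[-3]
          return df

      for h in (0.02, 0.01):
          x = np.arange(0.0, 1.0 + h, h)
          err = np.abs(d1_fourth_order(np.sin(2 * np.pi * x), h)[2:-2]
                       - 2 * np.pi * np.cos(2 * np.pi * x[2:-2])).max()
          print(h, err)           # halving h reduces the interior error by ~16x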

  9. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  11. Computational Approaches for Predicting Biomedical Research Collaborations

    PubMed Central

    Zhang, Qing; Yu, Hong

    2014-01-01

    Biomedical research is increasingly collaborative, and successful collaborations often produce high impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous work on collaboration prediction mainly explored the topological structures of research collaboration networks, leaving out rich semantic information from the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both the semantic features extracted from author research interest profiles and the author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations and similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best performing model is logistic regression, with an ROC ranging from 0.766 to 0.980 on different datasets. To our knowledge, we are the first to study in depth how research interests and productivity can be used for collaboration prediction. Our approach is computationally efficient, scalable and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets. PMID:25375164
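
    A toy version of the supervised setup described above, with synthetic stand-ins for the semantic and topological features; the feature names, weights and data are invented for illustration, whereas the study builds its features from MEDLINE author profiles.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      # Columns: abstract similarity, out-citing citation similarity, common co-authors.
      rng = np.random.default_rng(0)
      X = rng.random((1000, 3))
      y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]
           + 0.1 * rng.normal(size=1000)) > 0.55          # 1 = pair later collaborates

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = LogisticRegression().fit(X_tr, y_tr)
      print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))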

  12. Requirements for accurate estimation of anisotropic material parameters by magnetic resonance elastography: A computational study.

    PubMed

    Tweten, D J; Okamoto, R J; Bayly, P V

    2017-01-17

    To establish the essential requirements for characterization of a transversely isotropic material by magnetic resonance elastography (MRE). Three methods for characterizing nearly incompressible, transversely isotropic (ITI) materials were used to analyze data from closed-form expressions for traveling waves, finite-element (FE) simulations of waves in homogeneous ITI material, and FE simulations of waves in heterogeneous material. Key properties are the complex shear modulus μ2, shear anisotropy ϕ = μ1/μ2 - 1, and tensile anisotropy ζ = E1/E2 - 1. Each method provided good estimates of ITI parameters when both slow and fast shear waves with multiple propagation directions were present. No method gave accurate estimates when the displacement field contained only slow shear waves, only fast shear waves, or waves with only a single propagation direction. Methods based on directional filtering are robust to noise and include explicit checks of propagation and polarization. Curl-based methods led to more accurate estimates in low noise conditions. Parameter estimation in heterogeneous materials is challenging for all methods. Multiple shear waves, both slow and fast, with different propagation directions, must be present in the displacement field for accurate parameter estimates in ITI materials. Experimental design and data analysis can ensure that these requirements are met. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Computational Approach to Hyperelliptic Riemann Surfaces

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Klein, Christian

    2015-03-01

    We present a computational approach to general hyperelliptic Riemann surfaces in Weierstrass normal form. The surface is given by a list of the branch points, the coefficients of the defining polynomial or a system of cuts for the curve. A canonical basis of the homology is introduced algorithmically for this curve. The periods of the holomorphic differentials and the Abel map are computed with the Clenshaw-Curtis method to achieve spectral accuracy. The code can handle almost degenerate Riemann surfaces. This work generalizes previous work on real hyperelliptic surfaces with prescribed cuts to arbitrary hyperelliptic surfaces. As an example, solutions to the sine-Gordon equation in terms of multi-dimensional theta functions are studied, also in the solitonic limit of these solutions.

  14. Computational approaches to fMRI analysis.

    PubMed

    Cohen, Jonathan D; Daw, Nathaniel; Engelhardt, Barbara; Hasson, Uri; Li, Kai; Niv, Yael; Norman, Kenneth A; Pillow, Jonathan; Ramadge, Peter J; Turk-Browne, Nicholas B; Willke, Theodore L

    2017-02-23

    Analysis methods in cognitive neuroscience have not always matched the richness of fMRI data. Early methods focused on estimating neural activity within individual voxels or regions, averaged over trials or blocks and modeled separately in each participant. This approach mostly neglected the distributed nature of neural representations over voxels, the continuous dynamics of neural activity during tasks, the statistical benefits of performing joint inference over multiple participants and the value of using predictive models to constrain analysis. Several recent exploratory and theory-driven methods have begun to pursue these opportunities. These methods highlight the importance of computational techniques in fMRI analysis, especially machine learning, algorithmic optimization and parallel computing. Adoption of these techniques is enabling a new generation of experiments and analyses that could transform our understanding of some of the most complex-and distinctly human-signals in the brain: acts of cognition such as thoughts, intentions and memories.

  15. A computational language approach to modeling prose recall in schizophrenia.

    PubMed

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.

  16. A computational language approach to modeling prose recall in schizophrenia

    PubMed Central

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W.; Elvevåg, Brita

    2014-01-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall. PMID:24709122

  17. Accurate and Efficient Calculation of van der Waals Interactions Within Density Functional Theory by Local Atomic Potential Approach

    SciTech Connect

    Sun, Y. Y.; Kim, Y. H.; Lee, K.; Zhang, S. B.

    2008-01-01

    Density functional theory (DFT) in the commonly used local density or generalized gradient approximation fails to describe van der Waals (vdW) interactions that are vital to organic, biological, and other molecular systems. Here, we propose a simple, efficient, yet accurate local atomic potential (LAP) approach, named DFT+LAP, for including vdW interactions in the framework of DFT. The LAPs for H, C, N, and O are generated by fitting the DFT+LAP potential energy curves of small molecule dimers to those obtained from coupled cluster calculations with single, double, and perturbatively treated triple excitations, CCSD(T). Excellent transferability of the LAPs is demonstrated by remarkable agreement with the JSCH-2005 benchmark database [P. Jurecka et al. Phys. Chem. Chem. Phys. 8, 1985 (2006)], which provides the interaction energies of CCSD(T) quality for 165 vdW and hydrogen-bonded complexes. For over 100 vdW dominant complexes in this database, our DFT+LAP calculations give a mean absolute deviation from the benchmark results less than 0.5 kcal/mol. The DFT+LAP approach involves no extra computational cost other than standard DFT calculations and no modification of existing DFT codes, which enables straightforward quantum simulations, such as ab initio molecular dynamics, on biomolecular systems, as well as on other organic systems.

  18. Accurate and efficient calculation of van der Waals interactions within density functional theory by local atomic potential approach.

    PubMed

    Sun, Y Y; Kim, Yong-Hyun; Lee, Kyuho; Zhang, S B

    2008-10-21

    Density functional theory (DFT) in the commonly used local density or generalized gradient approximation fails to describe van der Waals (vdW) interactions that are vital to organic, biological, and other molecular systems. Here, we propose a simple, efficient, yet accurate local atomic potential (LAP) approach, named DFT+LAP, for including vdW interactions in the framework of DFT. The LAPs for H, C, N, and O are generated by fitting the DFT+LAP potential energy curves of small molecule dimers to those obtained from coupled cluster calculations with single, double, and perturbatively treated triple excitations, CCSD(T). Excellent transferability of the LAPs is demonstrated by remarkable agreement with the JSCH-2005 benchmark database [P. Jurecka et al. Phys. Chem. Chem. Phys. 8, 1985 (2006)], which provides the interaction energies of CCSD(T) quality for 165 vdW and hydrogen-bonded complexes. For over 100 vdW dominant complexes in this database, our DFT+LAP calculations give a mean absolute deviation from the benchmark results less than 0.5 kcal/mol. The DFT+LAP approach involves no extra computational cost other than standard DFT calculations and no modification of existing DFT codes, which enables straightforward quantum simulations, such as ab initio molecular dynamics, on biomolecular systems, as well as on other organic systems.

  19. FAMBE-pH: a fast and accurate method to compute the total solvation free energies of proteins.

    PubMed

    Vorobjev, Yury N; Vila, Jorge A; Scheraga, Harold A

    2008-09-04

    A fast and accurate method to compute the total solvation free energies of proteins as a function of pH is presented. The method makes use of a combination of approaches, some of which have already appeared in the literature: (i) the Poisson equation is solved with an optimized fast adaptive multigrid boundary element (FAMBE) method; (ii) the electrostatic free energies of the ionizable sites are calculated for their neutral and charged states by using a detailed model of atomic charges; (iii) a set of optimal atomic radii is used to define a precise dielectric surface interface; (iv) a multilevel adaptive tessellation of this dielectric surface interface is achieved by using multisized boundary elements; and (v) 1:1 salt effects are included. The equilibrium proton binding/release is calculated with the Tanford-Schellman integral if the proteins contain more than approximately 20-25 ionizable groups; for a smaller number of ionizable groups, the ionization partition function is calculated directly. The FAMBE method is tested as a function of pH (FAMBE-pH) with three proteins, namely, bovine pancreatic trypsin inhibitor (BPTI), hen egg white lysozyme (HEWL), and bovine pancreatic ribonuclease A (RNaseA). The results are (a) the FAMBE-pH method reproduces the observed pKa's of the ionizable groups of these proteins within an average absolute value of 0.4 pK units and a maximum error of 1.2 pK units and (b) comparison of the calculated total pH-dependent solvation free energy for BPTI, between the exact calculation of the ionization partition function and the Tanford-Schellman integral method, shows agreement within 1.2 kcal/mol. These results indicate that calculation of total solvation free energies with the FAMBE-pH method can provide an accurate prediction of protein conformational stability at a given fixed pH and, if coupled with molecular mechanics or molecular dynamics methods, can also be used for more realistic studies of protein folding, unfolding, and

  20. A computationally efficient method for accurately solving the EEG forward problem in a finely discretized head model.

    PubMed

    Neilson, Lora A; Kovalyov, Mikhail; Koles, Zoltan J

    2005-10-01

    Solution of the forward problem using realistic head models is necessary for accurate EEG source analysis. Realistic models are usually derived from volumetric magnetic resonance images that provide a voxel resolution of about 1 mm^3. Electrical models could therefore contain, for a normal adult head, over 4 million elements. Solution of the forward problem using models of this magnitude has so far been impractical due to issues of computation time and memory. A preconditioner is proposed for the conjugate-gradient method that enables the forward problem to be solved using head models of this magnitude. It is applied to the system matrix constructed from the head anatomy using finite differences. The preconditioner is not computed explicitly and so is very efficient in terms of memory utilization. Using a spherical head model discretized into over 4 million volumes, we have been able to obtain accurate forward solutions in about 60 min on a 1 GHz Pentium III. L2 accuracy of the solutions was better than 2%. Accurate solution of the forward problem in EEG in a finely discretized head model is practical in terms of computation time and memory. The results represent an important step in head modeling for EEG source analysis.
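
    The sketch below shows the general pattern of a matrix-free preconditioned conjugate-gradient solve in SciPy, using a simple Jacobi (diagonal) preconditioner on a sparse stand-in system; the paper's preconditioner is more elaborate and is likewise applied implicitly rather than formed as a matrix. All sizes and values here are illustrative.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import cg, LinearOperator

      n = 100_000                                     # stand-in for a large head model
      A = sp.diags([-np.ones(n - 1), 4.0 * np.ones(n), -np.ones(n - 1)],
                   [-1, 0, 1], format="csr")          # sparse SPD system matrix
      b = np.ones(n)

      d_inv = 1.0 / A.diagonal()                      # Jacobi preconditioner, kept as a vector
      M = LinearOperator((n, n), matvec=lambda x: d_inv * x)

      x, info = cg(A, b, M=M)
      print(info, np.linalg.norm(A @ x - b))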

  1. Sculpting the band gap: a computational approach

    NASA Astrophysics Data System (ADS)

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D. A.

    2015-10-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations.

  2. Sculpting the band gap: a computational approach

    PubMed Central

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D. A.

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  3. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Schaeffler, N. W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2007-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are summarized. Results in both cases are compared to experiment.

  4. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, Christoper L.; Schaeffler, Norman W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2005-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are outlined. Results in both cases are compared to experiment.

  5. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    SciTech Connect

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-10-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
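
    A bare-bones numerical illustration of the CHO statistic itself (independent of the MAP covariance approximation the paper derives): estimate the channel-space mean difference and covariance from samples, form the Hotelling template, and report the observer SNR. The array shapes and toy data are assumptions for illustration only.

      import numpy as np

      def cho_snr(g_signal, g_noise):
          """Channelized Hotelling observer detectability from channel outputs.

          g_signal, g_noise: (n_samples, n_channels) channelized data under the
          signal-present and signal-absent hypotheses."""
          dmu = g_signal.mean(axis=0) - g_noise.mean(axis=0)
          S = 0.5 * (np.cov(g_signal, rowvar=False) + np.cov(g_noise, rowvar=False))
          w = np.linalg.solve(S, dmu)        # Hotelling template in channel space
          return np.sqrt(dmu @ w)            # observer SNR

      rng = np.random.default_rng(1)
      g0 = rng.normal(size=(2000, 10))       # 10 channels, signal absent
      g1 = rng.normal(size=(2000, 10)) + 0.3 # weak signal in every channel
      print(cho_snr(g1, g0))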

  6. Toroidal figures of equilibrium from a second-order accurate, accelerated SCF method with subgrid approach

    NASA Astrophysics Data System (ADS)

    Huré, J.-M.; Hersant, F.

    2017-02-01

    We compute the structure of a self-gravitating torus with polytropic equation of state (EOS) rotating in an imposed centrifugal potential. The Poisson solver is based on isotropic multigrid with optimal covering factor (fluid section-to-grid area ratio). We work at second order in the grid resolution for both finite difference and quadrature schemes. For soft EOS (i.e. polytropic index n ≥ 1), the underlying second order is naturally recovered for boundary values and any other integrated quantity sensitive to the mass density (mass, angular momentum, volume, virial parameter, etc.), i.e. errors vary with the number N of nodes per direction as ~1/N^2. This is, however, not observed for purely geometrical quantities (surface area, meridional section area, volume), unless a subgrid approach is considered (i.e. boundary detection). Equilibrium sequences are also much better described, especially close to critical rotation. Yet another technical effort is required for hard EOS (n < 1), due to infinite mass density gradients at the fluid surface. We fix the problem by using kernel splitting. Finally, we propose an accelerated version of the self-consistent field (SCF) algorithm based on a node-by-node pre-conditioning of the mass density at each step. The computing time is reduced by a factor of 2 typically, regardless of the polytropic index. There is a priori no obstacle to applying these results and techniques to ellipsoidal configurations and even to 3D configurations.

  7. Computational methods toward accurate RNA structure prediction using coarse-grained and all-atom models.

    PubMed

    Krokhotin, Andrey; Dokholyan, Nikolay V

    2015-01-01

    Computational methods can provide significant insights into RNA structure and dynamics, bridging the gap in our understanding of the relationship between structure and biological function. Simulations enrich and enhance our understanding of data derived on the bench, as well as provide feasible alternatives to costly or technically challenging experiments. Coarse-grained computational models of RNA are especially important in this regard, as they allow analysis of events occurring in timescales relevant to RNA biological function, which are inaccessible through experimental methods alone. We have developed a three-bead coarse-grained model of RNA for discrete molecular dynamics simulations. This model is efficient in de novo prediction of short RNA tertiary structure, starting from RNA primary sequences of less than 50 nucleotides. To complement this model, we have incorporated additional base-pairing constraints and have developed a bias potential reliant on data obtained from hydroxyl probing experiments that guide RNA folding to its correct state. By introducing experimentally derived constraints to our computer simulations, we are able to make reliable predictions of RNA tertiary structures up to a few hundred nucleotides. Our refined model exemplifies a valuable benefit achieved through integration of computation and experimental methods.

  8. MULTICORR: A Computer Program for Fast, Accurate, Small-Sample Testing of Correlational Pattern Hypotheses.

    ERIC Educational Resources Information Center

    Steiger, James H.

    1979-01-01

    The program presented computes a chi-square statistic for testing pattern hypotheses on correlation matrices. The statistic is based on a multivariate generalization of the Fisher r-to-z transformation. This statistic has small sample performance which is superior to an analogous likelihood ratio statistic obtained via the analysis of covariance…
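
    As a reminder of the underlying transformation, the sketch below applies the Fisher r-to-z transform to test the simplest pattern hypothesis, equality of two independent correlations, against a chi-square reference distribution; MULTICORR's multivariate generalization handles whole correlation matrices, which this toy example does not attempt.

      import numpy as np
      from scipy.stats import chi2

      def fisher_z(r):
          return 0.5 * np.log((1 + r) / (1 - r))

      def equal_corr_chi2(r1, n1, r2, n2):
          """Chi-square test (1 df) that two independent correlations are equal."""
          z1, z2 = fisher_z(r1), fisher_z(r2)
          var = 1.0 / (n1 - 3) + 1.0 / (n2 - 3)
          stat = (z1 - z2) ** 2 / var
          return stat, chi2.sf(stat, df=1)

      print(equal_corr_chi2(0.55, 80, 0.30, 90))   # (statistic, p-value)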

  9. Computer subroutine ISUDS accurately solves large system of simultaneous linear algebraic equations

    NASA Technical Reports Server (NTRS)

    Collier, G.

    1967-01-01

    The computer program, an Iterative Scheme Using a Direct Solution (ISUDS), obtains double-precision accuracy using a single-precision coefficient matrix. ISUDS solves a system of equations written in matrix form as AX = B, where A is a square non-singular coefficient matrix, X is a vector, and B is a vector.
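
    The idea of recovering double-precision accuracy from a single-precision factorization is classical iterative refinement; a minimal sketch (not the ISUDS code) is shown below, with the factorization held in float32 and residuals accumulated in float64.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      def iterative_refinement(A, b, iters=5):
          """Mixed-precision refinement: single-precision LU, double-precision residuals."""
          lu = lu_factor(A.astype(np.float32))            # low-precision factorization
          x = lu_solve(lu, b.astype(np.float32)).astype(np.float64)
          for _ in range(iters):
              r = b - A @ x                               # residual in double precision
              x += lu_solve(lu, r.astype(np.float32)).astype(np.float64)
          return x

      rng = np.random.default_rng(0)
      A = rng.random((500, 500)) + 500 * np.eye(500)      # well-conditioned test matrix
      b = rng.random(500)
      x = iterative_refinement(A, b)
      print(np.linalg.norm(A @ x - b))                    # near double-precision residual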

  10. Accurate predictions of the energetics of silicon compounds using the multireference correlation consistent composite approach

    NASA Astrophysics Data System (ADS)

    Oyedepo, Gbenga A.; Peterson, Charles; Wilson, Angela K.

    2011-09-01

    Theoretical studies, using the multireference correlation consistent composite approach (MR-ccCA), have been carried out on the ground and lowest lying spin-forbidden excited states of a series of silicon-containing systems. The MR-ccCA method is the multireference equivalent of the successful single reference ccCA method that has been shown to produce chemically accurate (within ±1.0 kcal mol^-1 of reliable, well-established experiment) results. The percentage contributions of the SCF configurations to complete active space self-consistent field wave functions, together with the Frobenius norm of the t1 vectors and related D1 diagnostics of the coupled-cluster singles and doubles wave function with the cc-pVTZ basis set, have been utilized to illustrate the multi-configurational characteristics of the compounds considered. MR-ccCA incorporates additive terms to account for relativistic effects, atomic spin-orbit coupling, scalar relativistic effects, and core-valence correlation. MR-ccCA has been utilized to predict the atomization energies, enthalpies of formation, and the lowest energy spin-forbidden transitions for SinXm (2 ≤ n + m ≤ 3, where n ≠ 0 and X = B, C, N, Al, P), silicon hydrides, and analogous compounds of carbon. The energetics of small silicon aluminides and phosphides are predicted for the first time.

  11. Is the Separable Propagator Perturbation Approach Accurate in Calculating Angle Resolved Photoelectron Diffraction Spectra?

    NASA Astrophysics Data System (ADS)

    Ng, C. N.; Chu, T. P.; Wu, Huasheng; Tong, S. Y.; Huang, Hong

    1997-03-01

    We compare multiple scattering results of angle-resolved photoelectron diffraction spectra between the exact slab method and the separable propagator perturbation method. In the slab method [C.H. Li, A.R. Lubinsky and S.Y. Tong, Phys. Rev. B17, 3128 (1978)], the source wave and multiple scattering within the strong scattering atomic layers are expanded in spherical waves, while interlayer scattering is expressed in plane waves. The transformation between spherical waves and plane waves is done exactly. The plane waves are then matched across the solid-vacuum interface to a single outgoing plane wave in the detector's direction. The separable propagator perturbation approach uses two approximations: (i) a separable representation of the Green's function propagator and (ii) a perturbation expansion of multiple scattering terms. Results for c(2x2) S-Ni(001) show that this approximate method fails to converge due to the very slow convergence of the separable representation for scattering angles less than 90 degrees. However, this method is accurate in the backscattering regime and may be applied to XAFS calculations [J.J. Rehr and R.C. Albers, Phys. Rev. B41, 8139 (1990)]. The use of this method for angle-resolved photoelectron diffraction spectra is substantially less reliable.

  12. ACCURATE SOLUTION AND GRADIENT COMPUTATION FOR ELLIPTIC INTERFACE PROBLEMS WITH VARIABLE COEFFICIENTS

    PubMed Central

    LI, ZHILIN; JI, HAIFENG; CHEN, XIAOHONG

    2016-01-01

    A new augmented method is proposed for elliptic interface problems with a piecewise variable coefficient that has a finite jump across a smooth interface. The main motivation is not only to get a second order accurate solution but also a second order accurate gradient from each side of the interface. The key of the new method is to introduce the jump in the normal derivative of the solution as an augmented variable and re-write the interface problem as a new PDE that consists of a leading Laplacian operator plus lower order derivative terms near the interface. In this way, the leading second order derivatives jump relations are independent of the jump in the coefficient that appears only in the lower order terms after the scaling. An upwind type discretization is used for the finite difference discretization at the irregular grid points near or on the interface so that the resulting coefficient matrix is an M-matrix. A multi-grid solver is used to solve the linear system of equations and the GMRES iterative method is used to solve the augmented variable. Second order convergence for the solution and the gradient from each side of the interface has also been proved in this paper. Numerical examples for general elliptic interface problems have confirmed the theoretical analysis and efficiency of the new method. PMID:28983130

  13. ACCURATE SOLUTION AND GRADIENT COMPUTATION FOR ELLIPTIC INTERFACE PROBLEMS WITH VARIABLE COEFFICIENTS.

    PubMed

    Li, Zhilin; Ji, Haifeng; Chen, Xiaohong

    2017-01-01

    A new augmented method is proposed for elliptic interface problems with a piecewise variable coefficient that has a finite jump across a smooth interface. The main motivation is not only to get a second order accurate solution but also a second order accurate gradient from each side of the interface. The key of the new method is to introduce the jump in the normal derivative of the solution as an augmented variable and re-write the interface problem as a new PDE that consists of a leading Laplacian operator plus lower order derivative terms near the interface. In this way, the leading second order derivatives jump relations are independent of the jump in the coefficient that appears only in the lower order terms after the scaling. An upwind type discretization is used for the finite difference discretization at the irregular grid points near or on the interface so that the resulting coefficient matrix is an M-matrix. A multi-grid solver is used to solve the linear system of equations and the GMRES iterative method is used to solve the augmented variable. Second order convergence for the solution and the gradient from each side of the interface has also been proved in this paper. Numerical examples for general elliptic interface problems have confirmed the theoretical analysis and efficiency of the new method.

  14. Computational approaches to motor learning by imitation.

    PubMed Central

    Schaal, Stefan; Ijspeert, Auke; Billard, Aude

    2003-01-01

    Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking-indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions. PMID:12689379

  15. Solubility of nonelectrolytes: a first-principles computational approach.

    PubMed

    Jackson, Nicholas E; Chen, Lin X; Ratner, Mark A

    2014-05-15

    Using a combination of classical molecular dynamics and symmetry adapted intermolecular perturbation theory, we develop a high-accuracy computational method for examining the solubility energetics of nonelectrolytes. This approach is used to accurately compute the cohesive energy density and Hildebrand solubility parameters of 26 molecular liquids. The energy decomposition of symmetry adapted perturbation theory is then utilized to develop multicomponent Hansen-like solubility parameters. These parameters are shown to reproduce the solvent categorizations (nonpolar, polar aprotic, or polar protic) of all molecular liquids studied while lending quantitative rigor to these qualitative categorizations via the introduction of simple, easily computable parameters. Notably, we find that by monitoring the first-order exchange energy contribution to the total interaction energy, one can rigorously determine the hydrogen bonding character of a molecular liquid. Finally, this method is applied to compute explicitly the Flory interaction parameter and the free energy of mixing for two different small molecule mixtures, reproducing the known miscibilities. This methodology represents an important step toward the prediction of molecular solubility from first principles.
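
    For orientation, the Hildebrand parameter the paper computes is defined as the square root of the cohesive energy density, δ = sqrt((ΔHvap − RT)/Vm); the sketch below evaluates it from tabulated-style inputs and forms a crude regular-solution Flory χ. The numerical values are illustrative, and the paper obtains the cohesive energy density from simulation rather than from vaporization enthalpies.

      import numpy as np

      R = 8.314  # J/(mol K)

      def hildebrand_delta(dH_vap, V_m, T=298.15):
          """Hildebrand parameter (Pa^0.5): sqrt((dH_vap - R*T) / V_m)."""
          return np.sqrt((dH_vap - R * T) / V_m)

      def flory_chi(delta1, delta2, V_ref, T=298.15):
          """Regular-solution estimate of the Flory interaction parameter."""
          return V_ref * (delta1 - delta2) ** 2 / (R * T)

      d_tol = hildebrand_delta(38.0e3, 106.3e-6)   # toluene-like inputs (J/mol, m^3/mol)
      d_wat = hildebrand_delta(40.7e3, 18.0e-6)    # water-like inputs
      print(d_tol * 1e-3, d_wat * 1e-3)            # ~18 and ~46 MPa^0.5
      print(flory_chi(d_tol, d_wat, 106.3e-6))     # large chi, consistent with immiscibility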

  16. Computational approaches for RNA energy parameter estimation

    PubMed Central

    Andronescu, Mirela; Condon, Anne; Hoos, Holger H.; Mathews, David H.; Murphy, Kevin P.

    2010-01-01

    Methods for efficient and accurate prediction of RNA structure are increasingly valuable, given the current rapid advances in understanding the diverse functions of RNA molecules in the cell. To enhance the accuracy of secondary structure predictions, we developed and refined optimization techniques for the estimation of energy parameters. We build on two previous approaches to RNA free-energy parameter estimation: (1) the Constraint Generation (CG) method, which iteratively generates constraints that enforce known structures to have energies lower than other structures for the same molecule; and (2) the Boltzmann Likelihood (BL) method, which infers a set of RNA free-energy parameters that maximize the conditional likelihood of a set of reference RNA structures. Here, we extend these approaches in two main ways: We propose (1) a max-margin extension of CG, and (2) a novel linear Gaussian Bayesian network that models feature relationships, which effectively makes use of sparse data by sharing statistical strength between parameters. We obtain significant improvements in the accuracy of RNA minimum free-energy pseudoknot-free secondary structure prediction when measured on a comprehensive set of 2518 RNA molecules with reference structures. Our parameters can be used in conjunction with software that predicts RNA secondary structures, RNA hybridization, or ensembles of structures. Our data, software, results, and parameter sets in various formats are freely available at http://www.cs.ubc.ca/labs/beta/Projects/RNA-Params. PMID:20940338

  17. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    NASA Technical Reports Server (NTRS)

    Gary, C. K.

    1992-01-01

    Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers, as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
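
    The arithmetic behind digital partitioning can be illustrated numerically: each matrix entry is split into low-precision digit planes, the digit-plane products stand in for the analog optical multiplications, and the results are recombined digitally with powers of the base. The base, digit count and data below are illustrative, and nothing in the sketch models the optics themselves.

      import numpy as np

      def partitioned_matvec(A_int, x, base=16, n_digits=3):
          """Matrix-vector product computed digit-plane by digit-plane (conceptual sketch)."""
          y = np.zeros(A_int.shape[0], dtype=np.int64)
          M = A_int.copy()
          for d in range(n_digits):
              A_d = M % base                     # one digit plane (small dynamic range)
              y += (A_d @ x) * base**d           # digital recombination
              M //= base
          return y

      rng = np.random.default_rng(0)
      A = rng.integers(0, 16**3, size=(4, 5))
      x = rng.integers(0, 10, size=5)
      print(partitioned_matvec(A, x))
      print(A @ x)                               # identical result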

  18. An Accurate Method to Compute the Parasitic Electromagnetic Radiations of Real Solar Panels

    NASA Astrophysics Data System (ADS)

    Andreiu, G.; Panh, J.; Reineix, A.; Pelissou, P.; Girard, C.; Delannoy, P.; Romeuf, X.; Schmitt, D.

    2012-05-01

    The methodology [1] for computing the parasitic electromagnetic (EM) radiation of a solar panel is substantially improved in this paper to model real solar panels. Thus, honeycomb composite panels, triple-junction solar cells and series or shunt regulation systems can now be taken into account. After a brief summary of the methodology, the improvements are detailed. Finally, some encouraging frequency- and time-domain results for the magnetic field emitted by a real solar panel are presented.

  19. Pixelated phase computer holograms for the accurate encoding of scalar complex fields.

    PubMed

    Arrizón, Victor; Ruiz, Ulises; Carrada, Rosibel; González, Luis A

    2007-11-01

    We discuss a class of phase computer-generated holograms for the encoding of arbitrary scalar complex fields. We describe two holograms of this class that allow high quality reconstruction of the encoded field, even if they are implemented with a low-resolution pixelated phase modulator. In addition, we show that one of these holograms can be appropriately implemented with a phase modulator limited by a reduced phase depth.

  20. Accurate computation and continuation of homoclinic and heteroclinic orbits for singular perturbation problems

    NASA Technical Reports Server (NTRS)

    Vaughan, William W.; Friedman, Mark J.; Monteiro, Anand C.

    1993-01-01

    In earlier papers, Doedel and the authors have developed a numerical method and derived error estimates for the computation of branches of heteroclinic orbits for a system of autonomous ordinary differential equations in R(exp n). The idea of the method is to reduce a boundary value problem on the real line to a boundary value problem on a finite interval by using a local (linear or higher order) approximation of the stable and unstable manifolds. A practical limitation for the computation of homoclinic and heteroclinic orbits has been the difficulty in obtaining starting orbits. Typically these were obtained from a closed form solution or via a homotopy from a known solution. Here we consider extensions of our algorithm which allow us to obtain starting orbits on the continuation branch in a more systematic way as well as make the continuation algorithm more flexible. In applications, we use the continuation software package AUTO in combination with some initial value software. The examples considered include computation of homoclinic orbits in a singular perturbation problem and in a turbulent fluid boundary layer in the wall region problem.

  1. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model, which might include perturbing forces such as the gravitational effect from multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method that incorporates an accurate power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
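
    A reduced sketch of numerically propagating an STM alongside the state: the variational equations are integrated with SciPy's eighth-order Dormand-Prince pair (DOP853). Two-body gravity stands in here for the paper's full ephemeris and hardware models, and the initial orbit is an arbitrary illustrative choice.

      import numpy as np
      from scipy.integrate import solve_ivp

      MU = 398600.4418  # km^3/s^2, Earth

      def eom_with_stm(t, y):
          """Two-body state plus 6x6 state transition matrix (variational equations)."""
          r, v = y[:3], y[3:6]
          Phi = y[6:].reshape(6, 6)
          rn = np.linalg.norm(r)
          a = -MU * r / rn**3
          G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)  # da/dr
          A = np.zeros((6, 6))
          A[:3, 3:] = np.eye(3)
          A[3:, :3] = G
          return np.concatenate([v, a, (A @ Phi).ravel()])

      y0 = np.concatenate([[7000.0, 0.0, 0.0], [0.0, 7.546, 0.0], np.eye(6).ravel()])
      sol = solve_ivp(eom_with_stm, (0.0, 3600.0), y0, method="DOP853",
                      rtol=1e-10, atol=1e-12)
      Phi = sol.y[6:, -1].reshape(6, 6)
      print(Phi[0, :])   # sensitivity of the final x-position to the initial state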

  2. Accurate computation of weights in classical Gauss-Christoffel quadrature rules

    SciTech Connect

    Yakimiw, E.

    1996-12-01

    For many classical Gauss-Christoffel quadrature rules there does not exist a method which guarantees a uniform level of accuracy for the Gaussian quadrature weights at all quadrature nodes unless the nodes are known exactly. More disturbing, some algebraic expressions for these weights exhibit an excessive sensitivity to even the smallest perturbations in the node location. This sensitivity rapidly increases with high order quadrature rules. With the advent of more powerful computers, very high order quadratures are now in common use, and a loss of accuracy in the weights has become a problem that must be addressed. A simple but efficient and general method is presented for improving the accuracy of the computation of the quadrature weights even though the nodes may carry a significant error. In addition, a highly efficient root-finding iterative technique with superlinear convergence rates for computing the nodes is developed. It uses solely the quadrature polynomials and their first derivatives. A comparison of this method with the eigenvalue method of Golub and Welsh implemented in most standard software libraries is made. The proposed method outperforms the latter from the point of view of both accuracy and efficiency. The Legendre, Lobatto, Radau, Hermite, and Laguerre quadrature rules are examined. 22 refs., 7 figs., 5 tabs.
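
    The node-and-weight computation discussed above can be sketched for the Legendre case: Newton iteration on P_n using only the polynomial and its first derivative (via the three-term recurrence), with the weights then obtained from the derivative at each node. This is a textbook construction, not the paper's algorithm, and the starting guesses and tolerance are conventional choices.

      import numpy as np

      def gauss_legendre(n, tol=1e-15):
          """Gauss-Legendre nodes via Newton on P_n; weights from 2 / ((1-x^2) P_n'(x)^2)."""
          x = np.cos(np.pi * (np.arange(1, n + 1) - 0.25) / (n + 0.5))  # starting guesses
          for _ in range(100):
              p0, p1 = np.ones_like(x), x.copy()
              for k in range(2, n + 1):                 # three-term recurrence up to P_n
                  p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
              dp = n * (x * p1 - p0) / (x**2 - 1)       # P_n'(x)
              dx = p1 / dp
              x -= dx                                   # Newton step
              if np.max(np.abs(dx)) < tol:
                  break
          w = 2.0 / ((1.0 - x**2) * dp**2)
          return x, w

      x, w = gauss_legendre(20)
      print(w @ np.exp(x), np.exp(1) - np.exp(-1))      # exact to machine precision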

  3. Iofetamine I 123 single photon emission computed tomography is accurate in the diagnosis of Alzheimer's disease

    SciTech Connect

    Johnson, K.A.; Holman, B.L.; Rosen, T.J.; Nagel, J.S.; English, R.J.; Growdon, J.H. )

    1990-04-01

    To determine the diagnostic accuracy of iofetamine hydrochloride I 123 (IMP) with single photon emission computed tomography in Alzheimer's disease, we studied 58 patients with AD and 15 age-matched healthy control subjects. We used a qualitative method to assess regional IMP uptake in the entire brain and to rate image data sets as normal or abnormal without knowledge of subjects' clinical classification. The sensitivity and specificity of IMP with single photon emission computed tomography in AD were 88% and 87%, respectively. In 15 patients with mild cognitive deficits (Blessed Dementia Scale score, less than or equal to 10), sensitivity was 80%. With the use of a semiquantitative measure of regional cortical IMP uptake, the parietal lobes were the most functionally impaired in AD and the most strongly associated with the patients' Blessed Dementia Scale scores. These results indicated that IMP with single photon emission computed tomography may be a useful adjunct in the clinical diagnosis of AD in early, mild disease.

  4. A displacement gradient BEM for accurate stress computation near boundaries in 2-D anisotropic problems

    NASA Technical Reports Server (NTRS)

    Sistla, R.; Raju, I. S.; Krishnamurthy, T.

    1993-01-01

    A displacement gradient method for 2D anisotropic elasticity problems is presented, which effectively minimizes the boundary layer effect through a two-step procedure. First, the boundary integral equations are solved for the unknown boundary displacements and tractions. Second, a direct integral equation for displacement gradients is developed in terms of boundary tractions. Three methods based on different evaluation procedures and locations for determining the displacement gradients are proposed. In the first method the displacement gradients are averaged at nodes common to adjacent elements. The second method stores the gradients element-wise. In the third method, the gradients are evaluated at the nodes of discontinuous elements. The three methods are applied to near-isotropic plates with circular and elliptic cutouts. It is concluded that all three methods can yield accurate stress distributions.

  5. Necessary conditions for accurate computations of three-body partial decay widths

    NASA Astrophysics Data System (ADS)

    Garrido, E.; Jensen, A. S.; Fedorov, D. V.

    2008-09-01

    The partial width for decay of a resonance into three fragments is largely determined at distances where the energy is smaller than the effective potential producing the corresponding wave function. At short distances the many-body properties are accounted for by preformation or spectroscopic factors. We use the adiabatic expansion method combined with the WKB approximation to obtain the indispensable cluster model wave functions at intermediate and larger distances. We test the concept by deriving conditions for the minimal basis expressed in terms of partial waves and radial nodes. We compare results for different effective interactions and methods. Agreement is found with experimental values for a sufficiently large basis. We illustrate the ideas with realistic examples from α emission of C12 and two-proton emission of Ne17. Basis requirements for accurate momentum distributions are briefly discussed.

  6. Accurate and Scalable O(N) Algorithm for First-Principles Molecular-Dynamics Computations on Large Parallel Computers

    SciTech Connect

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10^-4 Ha/Bohr.

  7. Accurate and scalable O(N) algorithm for first-principles molecular-dynamics computations on large parallel computers.

    PubMed

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-31

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101,952 atoms on 23,328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10^-4 Ha/Bohr.

  8. Accurate quantification of width and density of bone structures by computed tomography.

    PubMed

    Hangartner, Thomas N; Short, David F

    2007-10-01

    In computed tomography (CT), the representation of edges between objects of different densities is influenced by the limited spatial resolution of the scanner. This results in the misrepresentation of density of narrow objects, leading to errors of up to 70% and more. Our interest is in the imaging and measurement of narrow bone structures, and the issues are the same for imaging with clinical CT scanners, peripheral quantitative CT scanners or micro CT scanners. Mathematical models, phantoms and tests with patient data led to the following procedures: (i) extract density profiles at one-degree increments from the CT images at right angles to the bone boundary; (ii) consider the outer and inner edge of each profile separately due to different adjacent soft tissues; (iii) measure the width of each profile based on a threshold at fixed percentage of the difference between the soft-tissue value and a first approximated bone value; (iv) correct the underlying material density of bone for each profile based on the measured width with the help of the density-versus-width curve obtained from computer simulations and phantom measurements. This latter curve is specific to a certain scanner and is not dependent on the densities of the tissues within the range seen in patients. This procedure allows the calculation of the material density of bone. Based on phantom measurements, we estimate the density error to be below 2% relative to the density of normal bone and the bone-width error about one tenth of a pixel size.
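
    To make steps (iii) and (iv) concrete, the sketch below measures a profile's width at a fixed-percentage threshold and corrects the apparent density from a density-versus-width calibration curve. The calibration table and profile here are made-up placeholders; the real curve is scanner-specific and comes from phantom measurements and simulations.

```python
# Sketch of steps (iii)-(iv): width at a fixed-percentage threshold, then
# density correction from a (hypothetical) density-versus-width curve.
import numpy as np

def profile_width(profile, spacing_mm, soft_tissue, threshold_fraction=0.5):
    """Width (mm) at a fixed fraction between the soft-tissue value and the
    first-approximation (apparent) bone value."""
    apparent_bone = profile.max()
    level = soft_tissue + threshold_fraction * (apparent_bone - soft_tissue)
    return (profile >= level).sum() * spacing_mm, apparent_bone

# Hypothetical calibration: recovered fraction of true density vs. object width.
cal_width_mm = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
cal_recovery = np.array([0.35, 0.60, 0.85, 0.97, 1.00])

def corrected_density(profile, spacing_mm, soft_tissue):
    width, apparent = profile_width(profile, spacing_mm, soft_tissue)
    recovery = np.interp(width, cal_width_mm, cal_recovery)
    return apparent / recovery, width

# Example: a narrow, blurred bone profile sampled at 0.4 mm spacing.
x = np.arange(-20, 21) * 0.4
profile = 50 + 900 * np.exp(-(x / 1.2) ** 2)   # soft tissue ~50, blurred bone peak
print(corrected_density(profile, 0.4, soft_tissue=50.0))
```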

  9. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers are dependent on an entire distribution, possibly depending on multiple compilers and special instructions specific to the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.

  10. Design and highly accurate 3D displacement characterization of monolithic SMA microgripper using computer vision

    NASA Astrophysics Data System (ADS)

    Bellouard, Yves; Sulzmann, Armin; Jacot, Jacques; Clavel, Reymond

    1998-01-01

    In the robotics field, several grippers have been developed using SMA technologies, but, so far, SMA is only used as the actuating part of the mechanical device. However, mechanical devices require assembly and in some cases this means friction. In the case of micro-grippers, this becomes a major problem due to the small size of the components. In this paper, a new monolithic concept of micro-gripper is presented. This concept is applied to the grasping of sub-millimeter optical elements such as Selfoc lenses and the fastening of optical fibers. Measurements are performed using a newly developed high-precision 3D computer vision tracking system to characterize the spatial positions of the micro-gripper in action. To characterize relative motion of the micro-gripper, its natural texture is used to compute 3D displacement. The microscope image CCD receives high-frequency changes in light intensity from the surface of the gripper. Using high-resolution camera calibration, passive autofocus algorithms and 2D object recognition, the position of the micro-gripper can be characterized in the 3D workspace and can be guided in future micro-assembly tasks.

  11. Accurate quantification of width and density of bone structures by computed tomography

    SciTech Connect

    Hangartner, Thomas N.; Short, David F.

    2007-10-15

    In computed tomography (CT), the representation of edges between objects of different densities is influenced by the limited spatial resolution of the scanner. This results in the misrepresentation of density of narrow objects, leading to errors of up to 70% and more. Our interest is in the imaging and measurement of narrow bone structures, and the issues are the same for imaging with clinical CT scanners, peripheral quantitative CT scanners or micro CT scanners. Mathematical models, phantoms and tests with patient data led to the following procedures: (i) extract density profiles at one-degree increments from the CT images at right angles to the bone boundary; (ii) consider the outer and inner edge of each profile separately due to different adjacent soft tissues; (iii) measure the width of each profile based on a threshold at fixed percentage of the difference between the soft-tissue value and a first approximated bone value; (iv) correct the underlying material density of bone for each profile based on the measured width with the help of the density-versus-width curve obtained from computer simulations and phantom measurements. This latter curve is specific to a certain scanner and is not dependent on the densities of the tissues within the range seen in patients. This procedure allows the calculation of the material density of bone. Based on phantom measurements, we estimate the density error to be below 2% relative to the density of normal bone and the bone-width error about one tenth of a pixel size.

  12. A mechanistic approach for accurate simulation of village scale malaria transmission

    PubMed Central

    Bomblies, Arne; Duchemin, Jean-Bernard; Eltahir, Elfatih AB

    2009-01-01

    Background Malaria transmission models commonly incorporate spatial environmental and climate variability for making regional predictions of disease risk. However, a mismatch of these models' typical spatial resolutions and the characteristic scale of malaria vector population dynamics may confound disease risk predictions in areas of high spatial hydrological variability such as the Sahel region of Africa. Methods Field observations spanning two years from two Niger villages are compared. The two villages are separated by only 30 km but exhibit a ten-fold difference in anopheles mosquito density. These two villages would be covered by a single grid cell in many malaria models, yet their entomological activity differs greatly. Environmental conditions and associated entomological activity are simulated at high spatial- and temporal resolution using a mechanistic approach that couples a distributed hydrology scheme and an entomological model. Model results are compared to regular field observations of Anopheles gambiae sensu lato mosquito populations and local hydrology. The model resolves the formation and persistence of individual pools that facilitate mosquito breeding and predicts spatio-temporal mosquito population variability at high resolution using an agent-based modeling approach. Results Observations of soil moisture, pool size, and pool persistence are reproduced by the model. The resulting breeding of mosquitoes in the simulated pools yields time-integrated seasonal mosquito population dynamics that closely follow observations from captured mosquito abundance. Interannual difference in mosquito abundance is simulated, and the inter-village difference in mosquito population is reproduced for two years of observations. These modeling results emulate the known focal nature of malaria in Niger Sahel villages. Conclusion Hydrological variability must be represented at high spatial and temporal resolution to achieve accurate predictive ability of malaria risk

  13. A novel approach for accurate radiative transfer in cosmological hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Petkova, Margarita; Springel, Volker

    2011-08-01

    We present a numerical implementation of radiative transfer based on an explicitly photon-conserving advection scheme, where radiative fluxes over the cell interfaces of a structured or unstructured mesh are calculated with a second-order reconstruction of the intensity field. The approach employs a direct discretization of the radiative transfer equation in Boltzmann form with adjustable angular resolution that, in principle, works equally well in the optically-thin and optically-thick regimes. In our most general formulation of the scheme, the local radiation field is decomposed into a linear sum of directional bins of equal solid angle, tessellating the unit sphere. Each of these 'cone fields' is transported independently, with constant intensity as a function of the direction within the cone. Photons propagate at the speed of light (or optionally using a reduced speed of light approximation to allow larger time-steps), yielding a fully time-dependent solution of the radiative transfer equation that can naturally cope with an arbitrary number of sources, as well as with scattering. The method casts sharp shadows, subject to the limitations induced by the adopted angular resolution. If the number of point sources is small and scattering is unimportant, our implementation can alternatively treat each source exactly in angular space, producing shadows whose sharpness is only limited by the grid resolution. A third hybrid alternative is to treat only a small number of the locally most luminous point sources explicitly, with the rest of the radiation intensity followed in a radiative diffusion approximation. We have implemented the method in the moving-mesh code AREPO, where it is coupled to the hydrodynamics in an operator-splitting approach that subcycles the radiative transfer alternately with the hydrodynamical evolution steps. We also discuss our treatment of basic photon sink processes relevant to cosmological reionization, with a chemical network that can

  14. The computational model to predict accurately inhibitory activity for inhibitors towards CYP3A4.

    PubMed

    Xie, Zhiyuan; Zhang, Tao; Wang, Jing-Fang; Chou, Kuo-Chen; Wei, Dong-Qing

    2010-01-01

    The cytochrome P450 (CYP) is a superfamily of enzymes with oxidative function responsible for the metabolism of xenobiotics, especially drug metabolism. CYP3A4, an extensively studied CYP isoform, plays a crucial role in the metabolism of structurally diverse drugs. Furthermore, drug-drug interactions resulting from the inhibition of CYP3A4 activity are of extreme importance for the treatment of disease and the development of new drugs. In this study, using the support vector machine (SVM) method and three descriptors selected from 153 descriptors, we construct models that accurately predict the inhibitory effect of a compound on the activity of CYP3A4. By optimizing the parameters related to the SVM, the cross-validation correlation coefficient of the model reaches 0.71, which is higher than those of other models obtained using Artificial Neural Network (ANN) and Partial Least Squares (PLS) methods to our knowledge; thus our model can find important application in the assessment of the potential toxicity of a drug as well as in the prediction of drug-drug interactions. Copyright © 2010 Elsevier Ltd. All rights reserved.
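
    The general workflow (an SVM trained on a few selected descriptors, scored by a cross-validated correlation coefficient) can be sketched as below. The descriptors and data are random placeholders, not the paper's dataset or descriptor set, and the grid of hyperparameters is illustrative.

```python
# Sketch: SVM regression on three selected descriptors with cross-validation.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                              # three descriptors per compound
y = X @ [0.8, -0.5, 0.3] + 0.2 * rng.normal(size=200)      # surrogate inhibitory activity

model = GridSearchCV(
    make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    param_grid={"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 1.0]},
    cv=5,
)
y_pred = cross_val_predict(model, X, y, cv=5)
print("cross-validated correlation coefficient:", np.corrcoef(y, y_pred)[0, 1])
```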

  15. NEW APPROACHES: Using a computer to graphically illustrate equipotential lines

    NASA Astrophysics Data System (ADS)

    Phongdara, Boonlua

    1998-09-01

    A simple mathematical model and computer program allow students to plot equipotential lines, for example for two terminals in a tank of water, in a way that is easier and faster but just as accurate as the traditional method.
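
    The article's own program is not reproduced here; the sketch below shows one common way to realize the idea, assuming the two terminals are held at fixed potentials inside a grounded tank: relax Laplace's equation on a grid and draw the contour lines of the resulting potential.

```python
# Sketch: equipotential lines for two terminals via Jacobi relaxation of
# Laplace's equation on a grid (a generic illustration, not the article's code).
import numpy as np
import matplotlib.pyplot as plt

n = 101
v = np.zeros((n, n))
terminals = {(50, 30): 1.0, (50, 70): -1.0}      # (row, col): fixed potential

for _ in range(5000):                             # Jacobi relaxation sweeps
    v[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1] + v[1:-1, :-2] + v[1:-1, 2:])
    for (i, j), pot in terminals.items():         # re-impose the terminal potentials
        v[i, j] = pot

plt.contour(v, levels=21)                         # equipotential lines
plt.gca().set_aspect("equal")
plt.title("Equipotential lines for two terminals")
plt.show()
```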

  16. A computational approach to negative priming

    NASA Astrophysics Data System (ADS)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well-known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming, the opposite effect, is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995). The dependence on subtle parameter changes (such as the response-stimulus interval) usually varies. The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing effects. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universitat, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).

  17. A Taylor series expansion for time savings in accurate computation of focused ultrasound pressure fields.

    PubMed

    Hall, T J; Madsen, E L; Zagzebski, J A

    1987-07-01

    A model for the continuous wave (cw) pressure beam distribution of a focused axially symmetric ultrasonic radiator with constant radius of curvature involves integration of exp(ikr)/r over the surface of the radiator (k is the complex wave number and r is the distance from a radiating area element to the field point). A single integral form exists, and it is this form that is expanded in a Taylor series in frequency. Thus, when representing a pulse as a superposition of cw beams, the need to do numerical integrations for each one of a large number of frequencies is eliminated. Accuracy of the truncated Taylor series depends on the coordinates of the field point as well as on the difference between the frequency of interest and the reference frequency. Accuracy criteria for a particular application are also presented. The computer time savings for our applications correspond to a factor of about 60 with accuracy being maintained.

  18. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    NASA Technical Reports Server (NTRS)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  19. Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.

    1995-01-01

    The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical time subiterations are subject to time-step limitations in practice that are removed by pseudo time sub-iterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.

  20. A model for the accurate computation of the lateral scattering of protons in water.

    PubMed

    Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-02-21

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted to MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.

  1. A novel class of highly efficient and accurate time-integrators in nonlinear computational mechanics

    NASA Astrophysics Data System (ADS)

    Wang, Xuechuan; Atluri, Satya N.

    2017-01-01

    A new class of time-integrators is presented for strongly nonlinear dynamical systems. These algorithms are far superior to the currently common time integrators in computational efficiency and accuracy. These three algorithms are based on a local variational iteration method applied over a finite interval of time. By using Chebyshev polynomials as trial functions and Dirac-Delta functions as the test functions over the finite time interval, the three algorithms are developed into three different discrete time-integrators through the collocation method. These time integrators are labeled as Chebyshev local iterative collocation methods. Through examples of the forced Duffing oscillator, the Lorenz system, and the multiple coupled Duffing equations (which arise as semi-discrete equations for beams, plates and shells undergoing large deformations), it is shown that the new algorithms are far superior to the 4th order Runge-Kutta and ODE45 of MATLAB, in predicting the chaotic responses of strongly nonlinear dynamical systems.

  2. A novel class of highly efficient and accurate time-integrators in nonlinear computational mechanics

    NASA Astrophysics Data System (ADS)

    Wang, Xuechuan; Atluri, Satya N.

    2017-05-01

    A new class of time-integrators is presented for strongly nonlinear dynamical systems. These algorithms are far superior to the currently common time integrators in computational efficiency and accuracy. These three algorithms are based on a local variational iteration method applied over a finite interval of time. By using Chebyshev polynomials as trial functions and Dirac-Delta functions as the test functions over the finite time interval, the three algorithms are developed into three different discrete time-integrators through the collocation method. These time integrators are labeled as Chebyshev local iterative collocation methods. Through examples of the forced Duffing oscillator, the Lorenz system, and the multiple coupled Duffing equations (which arise as semi-discrete equations for beams, plates and shells undergoing large deformations), it is shown that the new algorithms are far superior to the 4th order Runge-Kutta and ODE45 of MATLAB, in predicting the chaotic responses of strongly nonlinear dynamical systems.

  3. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing-data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
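
    A compact sketch of the hybrid idea follows: a genetic algorithm searches binary masks over candidate wave parameters, and a fast extreme learning machine (ELM) scores each mask by its reconstruction error on a hold-out split. It is not the authors' implementation; the data, operator choices, and names are illustrative.

```python
# Sketch: GA feature selection with an ELM regressor as the (fast) evaluator.
import numpy as np

rng = np.random.default_rng(1)

def elm_fit_predict(Xtr, ytr, Xte, hidden=50):
    """Minimal ELM: random hidden layer + least-squares output weights."""
    W = rng.normal(size=(Xtr.shape[1], hidden))
    b = rng.normal(size=hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(Xtr @ W + b), ytr, rcond=None)
    return np.tanh(Xte @ W + b) @ beta

def fitness(mask, X, y):
    if mask.sum() == 0:
        return np.inf
    ntr = int(0.7 * len(y))
    pred = elm_fit_predict(X[:ntr, mask], y[:ntr], X[ntr:, mask])
    return np.sqrt(np.mean((pred - y[ntr:]) ** 2))        # hold-out RMSE

def ga_select(X, y, pop=30, gens=40, pmut=0.05):
    nfeat = X.shape[1]
    population = rng.integers(0, 2, size=(pop, nfeat)).astype(bool)
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[: pop // 2]]   # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, nfeat)
            child = np.concatenate([a[:cut], b[cut:]])          # one-point crossover
            child ^= rng.random(nfeat) < pmut                   # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[np.argmin(scores)]

# Synthetic stand-in for wave parameters at nearby buoys and Hs at the target buoy.
X = rng.normal(size=(500, 20))
y = X[:, 2] - 0.5 * X[:, 7] + 0.3 * X[:, 11] + 0.1 * rng.normal(size=500)
print("selected parameters:", np.flatnonzero(ga_select(X, y)))
```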

  4. High-order accurate solution of the incompressible Navier-Stokes equations on massively parallel computers

    NASA Astrophysics Data System (ADS)

    Henniger, R.; Obrist, D.; Kleiser, L.

    2010-05-01

    The emergence of "petascale" supercomputers requires us to replace today's simulation codes for (incompressible) flows with codes that use numerical schemes and methods better able to exploit the offered computational power. In that spirit, we present a massively parallel high-order Navier-Stokes solver for large incompressible flow problems in three dimensions. The governing equations are discretized with finite differences in space and a semi-implicit time integration scheme. This discretization leads to a large linear system of equations which is solved with a cascade of iterative solvers. The iterative solver for the pressure uses a highly efficient commutation-based preconditioner which is robust with respect to grid stretching. The efficiency of the implementation is further enhanced by carefully setting the (adaptive) termination criteria for the different iterative solvers. The computational work is distributed to different processing units by a geometric data decomposition in all three dimensions. This decomposition scheme ensures a low communication overhead and excellent scaling capabilities. The discretization is thoroughly validated. First, we verify the convergence orders of the spatial and temporal discretizations for a forced channel flow. Second, we analyze the iterative solution technique by investigating the absolute accuracy of the implementation with respect to the different termination criteria. Third, Orr-Sommerfeld and Squire eigenmodes for plane Poiseuille flow are simulated and compared to analytical results. Fourth, the practical applicability of the implementation is tested for transitional and turbulent channel flow. The results are compared to solutions from a pseudospectral solver. Subsequently, the performance of the commutation-based preconditioner for the pressure iteration is demonstrated. Finally, the excellent parallel scalability of the proposed method is demonstrated with a weak and a strong scaling test on up to

  5. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    PubMed Central

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  6. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    PubMed

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  7. Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Williams, Brandon R.

    2011-01-01

    Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which consists of SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck resulting in upward flow which is a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.

  8. Time-Accurate Incompressible Navier-Stokes Computations with Overlapped Moving Grids

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan; Rogers, Stuart; Lee, Yu-Tai; Kutler, Paul (Technical Monitor)

    1994-01-01

    The MIT flapping foil experiment was used as a validation case to evaluate the current incompressible Navier-Stokes approach with overlapped grid schemes. Steady-state calculations were carried out for overlapped and patched grids. The grid dependency, turbulence model effects, and the effect of order of differencing were investigated. Numerical results were compared against experimental data. The resulting procedure was applied to unsteady flapping foil calculations. Two upstream NACA 0025 foils perform high-frequency synchronized motion and generate unsteady flow conditions for the larger stationary foil downstream. Comparisons between unsteady experimental data and numerical results from two different moving-boundary procedures will be presented.

  9. Degree of rate control approach to computational catalyst screening

    SciTech Connect

    Wolcott, Christopher A.; Medford, Andrew J.; Studt, Felix; Campbell, Charles T.

    2015-10-01

    A new method for computational catalyst screening that is based on the concept of the degree of rate control (DRC) is introduced. It starts by developing a full mechanism and microkinetic model at the conditions of interest for a reference catalyst (ideally, the best known material) and then determines the degrees of rate control of the species in the mechanism (i.e., all adsorbed intermediates and transition states). It then uses the energies of the few species with the highest DRCs for this reference catalyst as descriptors to estimate the rates on related materials and predict which are most active. The predictions of this method regarding the relative rates of twelve late transition metals for methane steam reforming, using the Rh(2 1 1) surface as the reference catalyst, are compared to the most commonly used approach for computational catalyst screening, the Nørskov-Bligaard (NB) method, which uses linear scaling relationships to estimate the energies of all adsorbed intermediates and transition states. It is slightly more accurate than the NB approach when the metals are similar to the reference metal (<0.5 eV different on a plot where the axes are the bond energies to C and O adatoms), but worse when they are too different from the reference. It is computationally faster than the NB method when screening a moderate number of materials (<100), thus adding a valuable complement to the NB approach. It can be implemented without a microkinetic model if the degrees of rate control are already known approximately, e.g., from experiments.
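
    A small numeric sketch of the screening step is given below: to first order, the log of the rate ratio between a candidate and the reference catalyst can be approximated by the DRC-weighted sum of the (negative) energy shifts of the few high-DRC species. The species, DRC values, and energies below are hypothetical and purely illustrative, not taken from the paper.

```python
# Sketch: first-order rate estimate from degrees of rate control (DRC),
# ln(r_new / r_ref) ~ sum_i X_i * -(E_i,new - E_i,ref) / (kB * T).
import numpy as np

kB_eV = 8.617e-5          # Boltzmann constant, eV/K
T = 773.0                 # illustrative reforming-like temperature, K

# Hypothetical high-DRC species on the reference catalyst: (DRC, E_ref, E_new) in eV.
species = {
    "CH4 dissociation TS": (0.7, 1.10, 1.25),
    "adsorbed CO":         (-0.4, -1.60, -1.45),
    "C-O coupling TS":     (0.3, 1.40, 1.55),
}

log_ratio = sum(drc * -(e_new - e_ref) / (kB_eV * T)
                for drc, e_ref, e_new in species.values())
print("estimated r_new / r_ref =", np.exp(log_ratio))
```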

  10. Computational approaches to predict bacteriophage-host relationships.

    PubMed

    Edwards, Robert A; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E

    2016-03-01

    Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus-host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage-host signals. Sequence homology approaches are the most effective at identifying known phage-host pairs. Compositional and abundance-based methods contain significant signal for phage-host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage-host relationships, with potential relevance for medical and industrial applications.

  11. Accurate Computed Enthalpies of Spin Crossover in Iron and Cobalt Complexes

    NASA Astrophysics Data System (ADS)

    Jensen, Kasper P.; Cirera, Jordi

    2009-08-01

    Despite their importance in many chemical processes, the relative energies of spin states of transition metal complexes have so far been haunted by large computational errors. By the use of six functionals, B3LYP, BP86, TPSS, TPSSh, M06, and M06L, this work studies nine complexes (seven with iron and two with cobalt) for which experimental enthalpies of spin crossover are available. It is shown that such enthalpies can be used as quantitative benchmarks of a functional's ability to balance electron correlation in both the involved states. TPSSh achieves an unprecedented mean absolute error of ~11 kJ/mol in spin transition energies, with the local functional M06L a distant second (25 kJ/mol). Other tested functionals give mean absolute errors of 40 kJ/mol or more. This work confirms earlier suggestions that 10% exact exchange is near-optimal for describing the electron correlation effects of first-row transition metal systems. Furthermore, it is shown that given an experimental structure of an iron complex, TPSSh can predict the electronic state corresponding to that experimental structure. We recommend this functional as current state-of-the-art for studying spin crossover and relative energies of close-lying electronic configurations in first-row transition metal systems.

  12. Highly Accurate Frequency Calculations of Crab Cavities Using the VORPAL Computational Framework

    SciTech Connect

    Austin, T.M.; Cary, J.R.; Bellantoni, L.; /Argonne

    2009-05-01

    We have applied the Werner-Cary method [J. Comp. Phys. 227, 5200-5214 (2008)] for extracting modes and mode frequencies from time-domain simulations of crab cavities, as are needed for the ILC and the beam delivery system of the LHC. This method for frequency extraction relies on a small number of simulations, and post-processing using the SVD algorithm with Tikhonov regularization. The time-domain simulations were carried out using the VORPAL computational framework, which is based on the eminently scalable finite-difference time-domain algorithm. A validation study was performed on an aluminum model of the 3.9 GHz RF separators built originally at Fermi National Accelerator Laboratory in the US. Comparisons with measurements of the A15 cavity show that this method can provide accuracy to within 0.01% of experimental results after accounting for manufacturing imperfections. To capture the near degeneracies, two simulations, requiring in total a few hours on 600 processors, were employed. This method has applications across many areas including obtaining MHD spectra from time-domain simulations.

  13. Is computed tomography an accurate and reliable method for measuring total knee arthroplasty component rotation?

    PubMed

    Figueroa, José; Guarachi, Juan Pablo; Matas, José; Arnander, Magnus; Orrego, Mario

    2016-04-01

    Computed tomography (CT) is widely used to assess component rotation in patients with poor results after total knee arthroplasty (TKA). The purpose of this study was to simultaneously determine the accuracy and reliability of CT in measuring TKA component rotation. TKA components were implanted in dry-bone models and assigned to two groups. The first group (n = 7) had variable femoral component rotations, and the second group (n = 6) had variable tibial tray rotations. CT images were then used to assess component rotation. Accuracy of CT rotational assessment was determined by mean difference, in degrees, between implanted component rotation and CT-measured rotation. Intraclass correlation coefficient (ICC) was applied to determine intra-observer and inter-observer reliability. Femoral component accuracy showed a mean difference of 2.5° and the tibial tray a mean difference of 3.2°. There was good intra- and inter-observer reliability for both components, with a femoral ICC of 0.8 and 0.76, and tibial ICC of 0.68 and 0.65, respectively. CT rotational assessment accuracy can differ from true component rotation by approximately 3° for each component. It does, however, have good inter- and intra-observer reliability.

  14. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    PubMed Central

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870

  15. Improved modified energy ratio method using a multi-window approach for accurate arrival picking

    NASA Astrophysics Data System (ADS)

    Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun

    2017-04-01

    To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method adds the calculation of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving-average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within an error of five samples for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
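
    The sketch below illustrates the energy-ratio idea that MER builds on, with a moving-average smoothing step in the spirit of the IMER modification. The third window, thresholding rules, and window lengths of the actual IMER method are omitted; all values here are illustrative.

```python
# Sketch: energy-ratio first-break picking.  The MER weights the post/pre
# energy ratio by the instantaneous amplitude; a moving average over the MER
# curve suppresses high-frequency fluctuations before picking its maximum.
import numpy as np

def mer_pick(trace, window=30, smooth=9):
    x = np.asarray(trace, dtype=float)
    e = x ** 2
    n = len(x)
    mer = np.zeros(n)
    for i in range(window, n - window):
        pre = e[i - window:i].sum() + 1e-12
        post = e[i:i + window].sum()
        mer[i] = (post / pre * abs(x[i])) ** 3
    kernel = np.ones(smooth) / smooth
    mer_smooth = np.convolve(mer, kernel, mode="same")   # moving-average filter
    return int(np.argmax(mer_smooth))

# Synthetic noisy trace with an onset near sample 400.
rng = np.random.default_rng(2)
trace = 0.2 * rng.normal(size=1000)
t = np.arange(600)
trace[400:] += np.sin(2 * np.pi * 0.05 * t) * np.exp(-t / 200.0)
print("picked onset sample:", mer_pick(trace))
```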

  16. Towards an accurate and computationally-efficient modelling of Fe(II)-based spin crossover materials.

    PubMed

    Vela, Sergi; Fumanal, Maria; Ribas-Arino, Jordi; Robert, Vincent

    2015-07-07

    The DFT + U methodology is regarded as one of the most promising strategies to treat the solid state of molecular materials, as it may provide good energetic accuracy at a moderate computational cost. However, a careful parametrization of the U-term is mandatory since the results may be dramatically affected by the selected value. Herein, we benchmarked the Hubbard-like U-term for seven Fe(II)N6-based pseudo-octahedral spin crossover (SCO) compounds, using as a reference an estimation of the electronic enthalpy difference (ΔHelec) extracted from experimental data (T1/2, ΔS and ΔH). The parametrized U-value obtained for each of those seven compounds ranges from 2.37 eV to 2.97 eV, with an average value of U = 2.65 eV. Interestingly, we have found that this average value can be taken as a good starting point since it leads to an unprecedented mean absolute error (MAE) of only 4.3 kJ mol^-1 in the evaluation of ΔHelec for the studied compounds. Moreover, by comparing our results on the solid state and the gas phase of the materials, we quantify the influence of the intermolecular interactions on the relative stability of the HS and LS states, with an average effect of ca. 5 kJ mol^-1, whose sign cannot be generalized. Overall, the findings reported in this manuscript pave the way for future studies devoted to understanding the crystalline phase of SCO compounds, or the adsorption of individual molecules on organic or metallic surfaces, in which the rational incorporation of the U-term within DFT + U yields the required energetic accuracy that is dramatically missing when using bare-DFT functionals.

  17. Preoperative misdiagnosis analysis and accurate distinguish intrathymic cyst from small thymoma on computed tomography

    PubMed Central

    Li, Xin; Han, Xingpeng; Sun, Wei; Wang, Meng; Jing, Guohui

    2016-01-01

    Background To evaluate the role of computed tomography (CT) in the preoperative diagnosis of intrathymic cyst and small thymoma, and to determine the best CT threshold for distinguishing intrathymic cyst from small thymoma. Methods We retrospectively reviewed the medical records of 30 patients (17 intrathymic cyst and 13 small thymoma) who had undergone resection of mediastinal masses (with diameter less than 3 cm) under thoracoscopy between January 2014 and July 2015 at our hospital. Clinical and CT features were compared and receiver-operating characteristic curve (ROC) analysis was performed. Results The CT value of small thymoma [39.5 HU (IQR, 33.7–42.2 HU)] was significantly higher than that of intrathymic cyst [25.8 HU (IQR, 22.3–29.3 HU), P=0.004]. A CT value of 31.2 HU could act as a threshold for distinguishing small thymoma from intrathymic cyst (sensitivity 92.3%, specificity 82.4%). The ΔCT value (the difference between the enhanced and non-enhanced CT values) was significantly different between small thymoma [18.7 HU (IQR, 10.9–19.0 HU)] and intrathymic cyst [4.3 HU (IQR, 3.0–11.7 HU), P=0.04]. The density was more homogeneous in intrathymic cysts than in small thymomas, and the contour of intrathymic cysts was smoother. Conclusions Preoperative CT scans can help clinicians to distinguish intrathymic cyst from small thymoma, and we recommend 31.2 HU as the best threshold. Contrast-enhanced CT scans are useful for further identification of the two diseases. PMID:27621863

  18. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    NASA Astrophysics Data System (ADS)

    Hrubý, Jan

    2012-04-01

    Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both concerning the physical concepts and the required computational power. The available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require much computation time. For this reason, modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of mass, energy, and momentum conservation for both phases.
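
    The basic idea of evaluating properties directly from the conserved variables can be sketched as below: pressure and temperature are pre-tabulated on a (density, internal energy) grid and looked up by interpolation inside the flow solver. The toy table here uses ideal-gas relations purely as a placeholder; it is not the paper's IAPWS-95-based piecewise representation, and all values are illustrative.

```python
# Sketch: fast property evaluation with density and internal energy as the
# independent variables, via interpolation on a pre-computed (rho, u) table.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

R, cv = 461.5, 1410.0                      # J/(kg K), rough values for steam vapour
rho = np.linspace(0.01, 1.0, 200)          # kg/m^3
u = np.linspace(2.4e6, 3.0e6, 200)         # J/kg
RHO, U = np.meshgrid(rho, u, indexing="ij")
T_tab = U / cv                             # placeholder "table" values (ideal gas)
p_tab = RHO * R * T_tab

T_of = RegularGridInterpolator((rho, u), T_tab)
p_of = RegularGridInterpolator((rho, u), p_tab)

# In a flow solver, each cell update then only needs cheap table look-ups:
state = np.array([[0.5, 2.7e6]])           # (rho, u) from the conserved variables
print("T =", T_of(state)[0], "K,  p =", p_of(state)[0], "Pa")
```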

  19. Accurate treatments of electrostatics for computer simulations of biological systems: A brief survey of developments and existing problems

    NASA Astrophysics Data System (ADS)

    Yi, Sha-Sha; Pan, Cong; Hu, Zhong-Han

    2015-12-01

    Modern computer simulations of biological systems often involve an explicit treatment of the complex interactions among a large number of molecules. While it is straightforward to compute the short-ranged Van der Waals interaction in classical molecular dynamics simulations, it has been a long-standing challenge to develop accurate methods for the long-ranged Coulomb interaction. In this short review, we discuss three types of methodologies for the accurate treatment of electrostatics in simulations of explicit molecules: truncation-type methods, Ewald-type methods, and mean-field-type methods. Throughout the discussion, we outline the formulations and developments of these methods, emphasize the intrinsic connections among the three types of methods, and focus on the existing problems, which are often associated with the boundary conditions of electrostatics. This brief survey concludes with a short perspective on future trends in method development and applications in the field of biological simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 91127015 and 21522304) and the Open Project from the State Key Laboratory of Theoretical Physics, and the Innovation Project from the State Key Laboratory of Supramolecular Structure and Materials.

  20. Methods for Computing Accurate Atomic Spin Moments for Collinear and Noncollinear Magnetism in Periodic and Nonperiodic Materials.

    PubMed

    Manz, Thomas A; Sholl, David S

    2011-12-13

    The partitioning of electron spin density among atoms in a material gives atomic spin moments (ASMs), which are important for understanding magnetic properties. We compare ASMs computed using different population analysis methods and introduce a method for computing density derived electrostatic and chemical (DDEC) ASMs. Bader and DDEC ASMs can be computed for periodic and nonperiodic materials with either collinear or noncollinear magnetism, while natural population analysis (NPA) ASMs can be computed for nonperiodic materials with collinear magnetism. Our results show Bader, DDEC, and (where applicable) NPA methods give similar ASMs, but different net atomic charges. Because they are optimized to reproduce both the magnetic field and the chemical states of atoms in a material, DDEC ASMs are especially suitable for constructing interaction potentials for atomistic simulations. We describe the computation of accurate ASMs for (a) a variety of systems using collinear and noncollinear spin DFT, (b) highly correlated materials (e.g., magnetite) using DFT+U, and (c) various spin states of ozone using coupled cluster expansions. The computed ASMs are in good agreement with available experimental results for a variety of periodic and nonperiodic materials. Examples considered include the antiferromagnetic metal organic framework Cu3(BTC)2, several ozone spin states, mono- and binuclear transition metal complexes, ferri- and ferro-magnetic solids (e.g., Fe3O4, Fe3Si), and simple molecular systems. We briefly discuss the theory of exchange-correlation functionals for studying noncollinear magnetism. A method for finding the ground state of systems with highly noncollinear magnetism is introduced. We use these methods to study the spin-orbit coupling potential energy surface of the single molecule magnet Fe4C40H52N4O12, which has highly noncollinear magnetism, and find that it contains unusual features that give a new interpretation to experimental data.
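
    The underlying operation, integrating the spin density over regions assigned to each atom, can be illustrated with the toy sketch below. A nearest-atom (Voronoi) assignment stands in for the far more sophisticated Bader or DDEC partitioning discussed in the abstract; the grid, atoms, and densities are synthetic.

```python
# Sketch: atomic spin moments as integrals of rho_up - rho_down over per-atom
# regions, using a crude nearest-atom partition in place of Bader/DDEC.
import numpy as np

def atomic_spin_moments(spin_density, grid_points, atom_positions, voxel_volume):
    """spin_density: (N,) values of rho_up - rho_down at the N grid points."""
    d = np.linalg.norm(grid_points[:, None, :] - atom_positions[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)          # assign each grid point to its nearest atom
    asm = np.zeros(len(atom_positions))
    for a in range(len(atom_positions)):
        asm[a] = spin_density[nearest == a].sum() * voxel_volume
    return asm

# Tiny synthetic example: two "atoms" with opposite spin-density lobes.
grid = np.stack(np.meshgrid(*[np.linspace(-3, 3, 40)] * 3, indexing="ij"),
                axis=-1).reshape(-1, 3)
atoms = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
spin = (np.exp(-np.sum((grid - atoms[0]) ** 2, axis=1))
        - np.exp(-np.sum((grid - atoms[1]) ** 2, axis=1)))
vox = (6 / 39) ** 3
print(atomic_spin_moments(spin, grid, atoms, vox))   # roughly [+pi^(3/2), -pi^(3/2)]
```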

  1. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    NASA Astrophysics Data System (ADS)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.

  2. Computer Metaphors: Approaches to Computer Literacy for Educators.

    ERIC Educational Resources Information Center

    Peelle, Howard A.

    Because metaphors offer ready perspectives for comprehending something new, this document examines various metaphors educators might use to help students develop computer literacy. Metaphors described are the computer as person (a complex system worthy of respect), tool (perhaps the most powerful and versatile known to humankind), brain (both…

  3. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made

  4. An accurate and computationally efficient algorithm for ground peak identification in large footprint waveform LiDAR data

    NASA Astrophysics Data System (ADS)

    Zhuang, Wei; Mountrakis, Giorgos

    2014-09-01

    Large footprint waveform LiDAR sensors have been widely used for numerous airborne studies. Ground peak identification in a large footprint waveform is a significant bottleneck in exploring full usage of the waveform datasets. In the current study, an accurate and computationally efficient algorithm was developed for ground peak identification, called Filtering and Clustering Algorithm (FICA). The method was evaluated on Land, Vegetation, and Ice Sensor (LVIS) waveform datasets acquired over Central NY. FICA incorporates a set of multi-scale second derivative filters and a k-means clustering algorithm in order to avoid detecting false ground peaks. FICA was tested in five different land cover types (deciduous trees, coniferous trees, shrub, grass and developed area) and showed more accurate results when compared to existing algorithms. More specifically, compared with Gaussian decomposition, the RMSE of ground peak identification by FICA was 2.82 m (5.29 m for GD) in deciduous plots, 3.25 m (4.57 m for GD) in coniferous plots, 2.63 m (2.83 m for GD) in shrub plots, 0.82 m (0.93 m for GD) in grass plots, and 0.70 m (0.51 m for GD) in plots of developed areas. FICA performance was also relatively consistent under various slope and canopy coverage (CC) conditions. In addition, FICA showed better computational efficiency compared to existing methods. FICA's major computational and accuracy advantage is a result of the adopted multi-scale signal processing procedures that concentrate on local portions of the signal as opposed to the Gaussian decomposition that uses a curve-fitting strategy applied in the entire signal. The FICA algorithm is a good candidate for large-scale implementation on future space-borne waveform LiDAR sensors.
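
    A toy sketch of the two ingredients named above (multi-scale second-derivative filtering and k-means clustering of candidate peaks) is given below. The scales, cluster count, and ground-selection rule are placeholder assumptions and do not reproduce the published FICA implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmax
from scipy.cluster.vq import kmeans2

def ground_peak_candidate(waveform, scales=(2, 4, 8), n_clusters=3):
    """Toy ground-peak finder in the spirit of a filter-and-cluster scheme."""
    candidates = []
    for s in scales:
        # Smooth and take the second derivative; peaks of the waveform show up
        # as strong negative curvature at that scale.
        d2 = gaussian_filter1d(waveform, sigma=s, order=2)
        candidates.extend(argrelmax(-d2)[0])
    candidates = np.asarray(sorted(candidates), dtype=float)

    # Cluster the candidate positions found across scales; a position supported
    # by several scales forms a tight cluster, suppressing single-scale spurious peaks.
    centroids, _ = kmeans2(candidates.reshape(-1, 1), n_clusters, minit="points")
    # In this toy example, the latest cluster centre (largest return time, i.e.
    # lowest elevation) is taken as the ground return.
    return float(np.max(centroids))
```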

  5. Approaches to Classroom-Based Computational Science.

    ERIC Educational Resources Information Center

    Guzdial, Mark

    Computational science includes the use of computer-based modeling and simulation to define and test theories about scientific phenomena. The challenge for educators is to develop techniques for implementing computational science in the classroom. This paper reviews some previous work on the use of simulation alone (without modeling), modeling…

  6. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  7. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    SciTech Connect

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  8. Third-Order Incremental Dual-Basis Set Zero-Buffer Approach: An Accurate and Efficient Way To Obtain CCSD and CCSD(T) Energies.

    PubMed

    Zhang, Jun; Dolg, Michael

    2013-07-09

    An efficient way to obtain accurate CCSD and CCSD(T) energies for large systems, i.e., the third-order incremental dual-basis set zero-buffer approach (inc3-db-B0), has been developed and tested. This approach combines the powerful incremental scheme with the dual-basis set method, and along with the newly proposed K-means clustering (KM) method and zero-buffer (B0) approximation, can obtain very accurate absolute and relative energies efficiently. We tested the approach for 10 systems of different chemical nature, i.e., intermolecular interactions including hydrogen bonding, dispersion interaction, and halogen bonding; an intramolecular rearrangement reaction; aliphatic and conjugated hydrocarbon chains; three compact covalent molecules; and a water cluster. The results show that the errors are <1.94 kJ/mol (0.46 kcal/mol) for relative energies and <0.0026 hartree for absolute energies. By parallelization, our approach can be applied to molecules of more than 30 atoms and more than 100 correlated electrons with high-quality basis sets such as cc-pVDZ or cc-pVTZ, saving computational cost by a factor of more than 10-20 compared to the traditional implementation. The physical reasons for the success of the inc3-db-B0 approach are also analyzed.
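
    For readers unfamiliar with the incremental scheme, the standard third-order expansion of the correlation energy over orbital (or orbital-group) domains i, j, k is reproduced below; the dual-basis and zero-buffer refinements specific to inc3-db-B0 are not shown.

```latex
% Standard third-order incremental expansion of the correlation energy.
E_{\mathrm{corr}} \approx \sum_{i} \varepsilon_i
  + \sum_{i<j} \Delta\varepsilon_{ij}
  + \sum_{i<j<k} \Delta\varepsilon_{ijk},
\qquad
\Delta\varepsilon_{ij} = \varepsilon_{ij} - \varepsilon_i - \varepsilon_j,
% Third-order increment: subtract all lower-order contributions.
\Delta\varepsilon_{ijk} = \varepsilon_{ijk}
  - \Delta\varepsilon_{ij} - \Delta\varepsilon_{ik} - \Delta\varepsilon_{jk}
  - \varepsilon_i - \varepsilon_j - \varepsilon_k .
```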

  9. An impedimetric approach for accurate particle sizing using a microfluidic Coulter counter

    NASA Astrophysics Data System (ADS)

    Jagtiani, Ashish V.; Carletta, Joan; Zhe, Jiang

    2011-04-01

    In this paper, we present the design, impedimetric characterization and testing of a microfabricated Coulter counter for particle size measurement that uses a pair of thin film coplanar Au/Ti electrodes. An electrical equivalent circuit model of the designed device is analyzed. Accurate measurement of particle size was achieved by operating the device at a frequency for which the overall impedance is dominated by the channel resistance. A combination of design features, including the use of a pair of sensing electrodes with a surface area of 100 µm by 435 µm, a spacing of 1785 µm between the two sensing electrodes and a 350 µm long microchannel, ensures that this resistance dominates over a range of relatively low frequencies. The device was characterized for NaCl electrolyte solutions with different ionic concentrations ranging from 10-5 to 0.1 M. Results proved that the resistive behavior of the sensor occurs over a range of relatively low frequencies for all tested concentrations. The Coulter counter was then used to detect 30 µm polystyrene particles at a selected excitation frequency. Testing results demonstrated that the device can accurately measure particle sizes with small error. The design can be extended to ac Coulter counters with sub-micron sensing channels. Analysis of three designs of ac Coulter counters including sub-micron sensing channels using the electrical equivalent circuit model predicts that they can be operated at even lower frequencies, to accurately size nanoscale particles.
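
    When the excitation frequency is chosen so that the channel resistance dominates, particle size follows from the resistive pulse produced by each transit. Quoted here as the textbook small-particle (DeBlois-Bean) approximation rather than the authors' full equivalent-circuit model, the resistance change for an insulating sphere of diameter d in a cylindrical channel of diameter D filled with electrolyte of resistivity rho is:

```latex
% Small-particle Coulter relation (valid for d << D); invert for the diameter.
\Delta R \;\approx\; \frac{4\,\rho\, d^{3}}{\pi D^{4}}
\qquad\Longrightarrow\qquad
d \;\approx\; \left(\frac{\pi D^{4}\,\Delta R}{4\rho}\right)^{1/3}.
```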

  10. Accurate computation of above threshold ionization spectra for stretched H2+ in strong laser fields

    NASA Astrophysics Data System (ADS)

    Liang, Hao; Xiao, Xiang-Ru; Gong, Qihuang; Peng, Liang-You

    2017-09-01

    Investigations on the simplest benchmark system H2+ can reveal most underlying mechanisms for the intricate dynamics of molecular systems induced by strong laser pulses. However, due to the two-center Coulomb potential and the highly nonlinear nature of the electron dynamics, the accurate computation of the above threshold ionization spectra remains challenging, especially at large internuclear distances and high laser intensities. In the present work, we implement a new Gauss-quadrature approximation (GA) in the framework of finite element discrete variable representation to solve the time-dependent Schrödinger equation of H2+ in strong laser fields. By using this GA, one can arrive at a matrix representation of the first derivative operator that keeps its anti-hermiticity. This crucial feature allows a very stable propagation of the wavefunction under the usual Lanczos scheme. Combining with a wavefunction splitting in the asymptotic region, we show that our present numerical method can reliably deal with the electronic dynamics of stretched molecules at large internuclear distances for high laser intensities and long pulse durations. Accurate photoelectron momentum distributions under these conditions are presented and the distinct features due to the two-center potential are discussed.

  11. An accurate bimaxillary repositioning technique using straight locking miniplates for the mandible-first approach in bimaxillary orthognathic surgery.

    PubMed

    Iwai, Toshinori; Omura, Susumu; Honda, Koji; Yamashita, Yosuke; Shibutani, Naoki; Fujita, Koichi; Takasu, Hikaru; Murata, Shogo; Tohnai, Iwai

    2017-01-01

    Bimaxillary orthognathic surgery has been widely performed to achieve optimal functional and esthetic outcomes in patients with dentofacial deformity. Although Le Fort I osteotomy is generally performed before bilateral sagittal split osteotomy (BSSO) in the surgery, in several situations BSSO should be performed first. However, it is very difficult during bimaxillary orthognathic surgery to maintain an accurate centric relation of the condyle and decide the ideal vertical dimension from the skull base to the mandible. We have previously applied a straight locking miniplate (SLM) technique that permits accurate superior maxillary repositioning without the need for intraoperative measurements in bimaxillary orthognathic surgery. Here we describe the application of this technique for accurate bimaxillary repositioning in a mandible-first approach where the SLMs also serve as a condylar positioning device in bimaxillary orthognathic surgery.

  12. Fast and Accurate Electronic Excitations in Cyanines with the Many-Body Bethe-Salpeter Approach.

    PubMed

    Boulanger, Paul; Jacquemin, Denis; Duchemin, Ivan; Blase, Xavier

    2014-03-11

    The accurate prediction of the optical signatures of cyanine derivatives remains an important challenge in theoretical chemistry. Indeed, up to now, only the most expensive quantum chemical methods (CAS-PT2, CC, DMC, etc.) yield consistent and accurate data, impeding applications to real-life molecules. Here, we investigate the lowest lying singlet excitation energies of increasingly long cyanine dyes within the GW and Bethe-Salpeter Green's function many-body perturbation theory. Our results are in remarkable agreement with available coupled-cluster (exCC3) data, bringing these two single-reference perturbation techniques within a 0.05 eV maximum discrepancy. By comparison, available TD-DFT calculations with various semilocal, global, or range-separated hybrid functionals overshoot the transition energies by a typical error of 0.3-0.6 eV. The obtained accuracy is achieved with a parameter-free formalism that offers similar accuracy for metallic or insulating, finite size or extended systems.

  13. Accurate Vehicle Location System Using RFID, an Internet of Things Approach.

    PubMed

    Prinsloo, Jaco; Malekian, Reza

    2016-06-04

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved.

  14. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    PubMed Central

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  15. High-resolution MEG source imaging approach to accurately localize Broca's area in patients with brain tumor or epilepsy.

    PubMed

    Huang, Charles W; Huang, Ming-Xiong; Ji, Zhengwei; Swan, Ashley Robb; Angeles, Anne Marie; Song, Tao; Huang, Jeffrey W; Lee, Roland R

    2016-05-01

    Localizing expressive language function has been challenging using the conventional magnetoencephalography (MEG) source modeling methods. The present MEG study presents a new accurate and precise approach in localizing the language areas using a high-resolution MEG source imaging method. In 32 patients with brain tumors and/or epilepsies, an object-naming task was used to evoke MEG responses. Our Fast-VESTAL source imaging method was then applied to the MEG data in order to localize the brain areas evoked by the object-naming task. The Fast-VESTAL results showed that Broca's area was accurately localized to the pars opercularis (BA 44) and/or the pars triangularis (BA 45) in all patients. Fast-VESTAL also accurately localized Wernicke's area to the posterior aspect of the superior temporal gyri in BA 22, as well as several additional brain areas. Furthermore, we found that the latency of the main peak of the response in Wernicke's area was significantly earlier than that of Broca's area. In all patients, Fast-VESTAL analysis established accurate and precise localizations of Broca's area, as well as other language areas. The responses in Wernicke's area were also shown to significantly precede those of Broca's area. The present study demonstrates that using Fast-VESTAL, MEG can serve as an accurate and reliable functional imaging tool for presurgical mapping of language functions in patients with brain tumors and/or epilepsies. Published by Elsevier Ireland Ltd.

  16. Computational Approach to Structural Alerts: Furans, Phenols, Nitroaromatics, and Thiophenes.

    PubMed

    Dang, Na Le; Hughes, Tyler B; Miller, Grover P; Swamidass, S Joshua

    2017-04-17

    Structural alerts are commonly used in drug discovery to identify molecules likely to form reactive metabolites and thereby become toxic. Unfortunately, as useful as structural alerts are, they do not effectively model if, when, and why metabolism renders safe molecules toxic. Toxicity due to a specific structural alert is highly conditional, depending on the metabolism of the alert, the reactivity of its metabolites, dosage, and competing detoxification pathways. A systems approach, which explicitly models these pathways, could more effectively assess the toxicity risk of drug candidates. In this study, we demonstrated that mathematical models of P450 metabolism can predict the context-specific probability that a structural alert will be bioactivated in a given molecule. This study focuses on the furan, phenol, nitroaromatic, and thiophene alerts. Each of these structural alerts can produce reactive metabolites through certain metabolic pathways but not always. We tested whether our metabolism modeling approach, XenoSite, can predict when a given molecule's alerts will be bioactivated. Specifically, we used models of epoxidation, quinone formation, reduction, and sulfur-oxidation to predict the bioactivation of furan-, phenol-, nitroaromatic-, and thiophene-containing drugs. Our models separated bioactivated and not-bioactivated furan-, phenol-, nitroaromatic-, and thiophene-containing drugs with AUC performances of 100%, 73%, 93%, and 88%, respectively. Metabolism models accurately predict whether alerts are bioactivated and thus serve as a practical approach to improve the interpretability and usefulness of structural alerts. We expect that this same computational approach can be extended to most other structural alerts and later integrated into toxicity risk models. This advance is one necessary step toward our long-term goal of building comprehensive metabolic models of bioactivation and detoxification to guide assessment and design of new therapeutic

  17. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are corner stones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.

  18. Extension of the AMBER force field for nitroxide radicals and combined QM/MM/PCM approach to the accurate determination of EPR parameters of DMPOH in solution

    PubMed Central

    Hermosilla, Laura; Prampolini, Giacomo; Calle, Paloma; García de la Vega, José Manuel; Brancato, Giuseppe; Barone, Vincenzo

    2015-01-01

    A computational strategy that combines both time-dependent and time-independent approaches is exploited to accurately model molecular dynamics and solvent effects on the isotropic hyperfine coupling constants of the DMPO-H nitroxide. Our recent general force field for nitroxides derived from AMBER ff99SB is further extended to systems involving hydrogen atoms in β-positions with respect to NO. The resulting force field has been employed in a series of classical molecular dynamics simulations, comparing the computed EPR parameters from selected molecular configurations to the corresponding experimental data in different solvents. The effect of vibrational averaging on the spectroscopic parameters is also taken into account, by second order vibrational perturbation theory involving semi-diagonal third energy derivatives together with first and second property derivatives. PMID:26584116

  19. Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings.

    PubMed

    Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild

    2013-08-01

    This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and area under curve were estimated for the first and second recording, or a mean of both. FMs were classified based on the Prechtl approach of general movement assessment. CP status was reported at 2 years. Nine children developed CP, and all of their recordings showed absent FMs. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than using only one recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies.
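
    The movement variable central to the result above, the variability of the centroid of motion, can be sketched from frame differences as follows. This is a simplified stand-in for the computer vision software used in the study; the frame format and the averaging rule are assumptions.

```python
import numpy as np

def centroid_of_motion_variability(frames):
    """Variability of the centroid of motion from a grey-scale video.

    frames : (t, h, w) array of grey-scale video frames.
    Returns the standard deviation of the motion centroid over time, a toy
    analogue of the variability measure described in the abstract.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0))  # frame-to-frame motion
    ys, xs = np.indices(diffs.shape[1:])
    centroids = []
    for d in diffs:
        total = d.sum()
        if total > 0:
            centroids.append((np.sum(xs * d) / total, np.sum(ys * d) / total))
    centroids = np.asarray(centroids)
    return centroids.std(axis=0).mean()  # x/y standard deviations, averaged

# Using two recordings, as in the study, would amount to averaging this
# quantity over both recordings before thresholding.
```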

  1. Ring polymer molecular dynamics fast computation of rate coefficients on accurate potential energy surfaces in local configuration space: Application to the abstraction of hydrogen from methane

    NASA Astrophysics Data System (ADS)

    Meng, Qingyong; Chen, Jun; Zhang, Dong H.

    2016-04-01

    To compute rate coefficients of the H/D + CH4 → H2/HD + CH3 reactions quickly and accurately, we propose a segmented strategy for fitting suitable potential energy surfaces (PESs), on which ring-polymer molecular dynamics (RPMD) simulations are performed. On the basis of the recently developed permutation invariant polynomial neural-network approach [J. Li et al., J. Chem. Phys. 142, 204302 (2015)], PESs in local configuration spaces are constructed. In this strategy, the global PES is divided into three parts, namely asymptotic, intermediate, and interaction parts, along the reaction coordinate. Since fewer fitting parameters are involved in the local PESs, the computational efficiency of evaluating the PES routine is enhanced by a factor of ~20 compared with the global PES. On the interaction part, the RPMD computational time for the transmission coefficient can be further reduced by cutting off the redundant part of the child trajectories. For H + CH4, good agreement is found among the present RPMD rates, previous simulations, and experimental results. For D + CH4, on the other hand, qualitative agreement between the present RPMD and experimental results is obtained.
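
    One simple way to realize such a segmented strategy is to blend locally fitted PESs with smooth switching functions along the reaction coordinate; the sketch below is schematic, and the boundary positions, switching width, and function names are placeholders rather than the fitted values or implementation used in the paper.

```python
import numpy as np

def switch(s, s0, width):
    """Smooth 0 -> 1 switching function centred at s0."""
    return 0.5 * (1.0 + np.tanh((s - s0) / width))

def segmented_pes(geom, s, pes_asym, pes_inter, pes_interact,
                  s_ai=4.0, s_ii=2.0, width=0.25):
    """Blend three locally fitted PESs along the reaction coordinate s.

    pes_* are callables returning the energy of `geom`; s_ai and s_ii mark the
    (placeholder) boundaries between the asymptotic/intermediate and the
    intermediate/interaction regions.
    """
    w_interact = switch(-s, -s_ii, width)   # small s: interaction region
    w_asym = switch(s, s_ai, width)         # large s: asymptotic region
    w_inter = 1.0 - w_interact - w_asym     # in between: intermediate region
    return (w_interact * pes_interact(geom)
            + w_inter * pes_inter(geom)
            + w_asym * pes_asym(geom))
```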

  2. Ring polymer molecular dynamics fast computation of rate coefficients on accurate potential energy surfaces in local configuration space: Application to the abstraction of hydrogen from methane.

    PubMed

    Meng, Qingyong; Chen, Jun; Zhang, Dong H

    2016-04-21

    To compute rate coefficients of the H/D + CH4 → H2/HD + CH3 reactions quickly and accurately, we propose a segmented strategy for fitting suitable potential energy surfaces (PESs), on which ring-polymer molecular dynamics (RPMD) simulations are performed. On the basis of the recently developed permutation invariant polynomial neural-network approach [J. Li et al., J. Chem. Phys. 142, 204302 (2015)], PESs in local configuration spaces are constructed. In this strategy, the global PES is divided into three parts, namely asymptotic, intermediate, and interaction parts, along the reaction coordinate. Since fewer fitting parameters are involved in the local PESs, the computational efficiency of evaluating the PES routine is enhanced by a factor of ∼20 compared with the global PES. On the interaction part, the RPMD computational time for the transmission coefficient can be further reduced by cutting off the redundant part of the child trajectories. For H + CH4, good agreement is found among the present RPMD rates, previous simulations, and experimental results. For D + CH4, on the other hand, qualitative agreement between the present RPMD and experimental results is obtained.

  3. An accurate binding interaction model in de novo computational protein design of interactions: if you build it, they will bind.

    PubMed

    London, Nir; Ambroggio, Xavier

    2014-02-01

    Computational protein design efforts aim to create novel proteins and functions in an automated manner and, in the process, these efforts shed light on the factors shaping natural proteins. The focus of these efforts has progressed from the interior of proteins to their surface and the design of functions, such as binding or catalysis. Here we examine progress in the development of robust methods for the computational design of non-natural interactions between proteins and molecular targets such as other proteins or small molecules. This problem is referred to as the de novo computational design of interactions. Recent successful efforts in de novo enzyme design and the de novo design of protein-protein interactions open a path towards solving this problem. We examine the common themes in these efforts, and review recent studies aimed at understanding the nature of successes and failures in the de novo computational design of interactions. While several approaches culminated in success, the use of a well-defined structural model for a specific binding interaction in particular has emerged as a key strategy for a successful design, and is therefore reviewed with special consideration. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. A frequentist approach to computer model calibration

    SciTech Connect

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
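
    A generic two-step sketch of this type of calibration (not the paper's identifiable parameterization) is shown below, under the usual model y(x) = f(x, theta) + delta(x) + noise: the computer-model parameters are estimated first, the discrepancy is then fitted nonparametrically to the residuals, and a simple bootstrap gives rough confidence regions. All function names and settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.interpolate import UnivariateSpline

def two_step_calibration(x, y, simulator, theta0):
    """Toy two-step calibration: y(x) = simulator(x, theta) + delta(x) + noise.

    Assumes scalar, distinct x values; `simulator(x, theta)` is the computer model.
    """
    # Step 1: calibrate the computer-model parameters by nonlinear least squares.
    fit = least_squares(lambda th: y - simulator(x, th), theta0)
    theta_hat = fit.x
    # Step 2: estimate the discrepancy nonparametrically from the residuals.
    order = np.argsort(x)
    delta_hat = UnivariateSpline(x[order], (y - simulator(x, theta_hat))[order])
    return theta_hat, delta_hat

def bootstrap_theta(x, y, simulator, theta0, n_boot=200, seed=0):
    """Bootstrap step 1 only, for rough confidence regions on theta."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        fit = least_squares(lambda th: y[idx] - simulator(x[idx], th), theta0)
        draws.append(fit.x)
    return np.array(draws)
```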

  5. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.

  6. Digital test signal generation: An accurate SNR calibration approach for the DSN

    NASA Technical Reports Server (NTRS)

    Gutierrez-Luaces, B. O.

    1991-01-01

    A new method of generating analog test signals with accurate signal to noise ratios (SNRs) is described. High accuracy will be obtained by simultaneous generation of digital noise and signal spectra at a given baseband or bandpass limited bandwidth. The digital synthesis will provide a test signal embedded in noise with the statistical properties of a stationary random process. Accuracy will only be dependent on test integration time, with a limit imposed by the system quantization noise (expected to be 0.02 dB). Settability will be approximately 0.1 dB. The first digital SNR generator to provide baseband test signals is being built and will be available in early 1991.

  7. Simple and surprisingly accurate approach to the chemical bond obtained from dimensional scaling.

    PubMed

    Svidzinsky, Anatoly A; Scully, Marlan O; Herschbach, Dudley R

    2005-08-19

    We present a new dimensional scaling transformation of the Schrödinger equation for the two electron bond. This yields, for the first time, a good description of the bond via D scaling. There also emerges, in the large-D limit, an intuitively appealing semiclassical picture, akin to a molecular model proposed by Bohr in 1913. In this limit, the electrons are confined to specific orbits in the scaled space, yet the uncertainty principle is maintained. A first-order perturbation correction, proportional to 1/D, substantially improves the agreement with the exact ground state potential energy curve. The present treatment is very simple mathematically, yet provides a strikingly accurate description of the potential curves for the lowest singlet, triplet, and excited states of H2. We find the modified D-scaling method also gives good results for other molecules. It can be combined advantageously with Hartree-Fock and other conventional methods.

  8. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    PubMed

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-07

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems.

  9. Computational dynamics for robotics systems using a non-strict computational approach

    NASA Technical Reports Server (NTRS)

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.

  10. Advances in Proteomics Data Analysis and Display Using an Accurate Mass and Time Tag Approach

    PubMed Central

    Zimmer, Jennifer S.D.; Monroe, Matthew E.; Qian, Wei-Jun; Smith, Richard D.

    2007-01-01

    Proteomics has recently demonstrated utility in understanding cellular processes on the molecular level as a component of systems biology approaches and for identifying potential biomarkers of various disease states. The large amount of data generated by utilizing high efficiency (e.g., chromatographic) separations coupled to high mass accuracy mass spectrometry for high-throughput proteomics analyses presents challenges related to data processing, analysis, and display. This review focuses on recent advances in nanoLC-FTICR-MS-based proteomics approaches and the accompanying data processing tools that have been developed to display and interpret the large volumes of data being produced. PMID:16429408

  11. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS

    PubMed Central

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-01-01

    With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. As a common method, GPS sensors and INS sensors are applied to measure vehicle stability parameters by fusing data from the two sensor systems. Although a Kalman filter requires prior knowledge of the model parameters, it is usually used to fuse data from multiple sensors. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. The results show the advantages of the approach. Finally, a simulation and a real experiment are performed to verify these advantages. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller. PMID:26690154

  12. A stationary wavelet entropy-based clustering approach accurately predicts gene expression.

    PubMed

    Nguyen, Nha; Vo, An; Choi, Inchan; Won, Kyoung-Jae

    2015-03-01

    Studying epigenetic landscapes is important to understand the condition for gene regulation. Clustering is a useful approach to study epigenetic landscapes by grouping genes based on their epigenetic conditions. However, classical clustering approaches that often use a representative value of the signals in a fixed-sized window do not fully use the information written in the epigenetic landscapes. Clustering approaches to maximize the information of the epigenetic signals are necessary for better understanding gene regulatory environments. For effective clustering of multidimensional epigenetic signals, we developed a method called Dewer, which uses the entropy of stationary wavelet of epigenetic signals inside enriched regions for gene clustering. Interestingly, the gene expression levels were highly correlated with the entropy levels of epigenetic signals. Dewer separates genes better than a window-based approach in the assessment using gene expression and achieved a correlation coefficient above 0.9 without using any training procedure. Our results show that the changes of the epigenetic signals are useful to study gene regulation.
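
    Assuming the PyWavelets library is available, the core quantity described above (the entropy of the stationary wavelet transform of an epigenetic signal) and a simple k-means grouping can be sketched as follows; the wavelet, decomposition level, and feature layout are illustrative choices, not the published Dewer settings.

```python
import numpy as np
import pywt
from scipy.cluster.vq import kmeans2

def swt_entropy(signal, wavelet="db4", level=3):
    """Shannon entropy of stationary-wavelet detail coefficients of one signal."""
    signal = np.asarray(signal, dtype=float)
    pad = (-len(signal)) % (2 ** level)          # pywt.swt needs len % 2**level == 0
    coeffs = pywt.swt(np.pad(signal, (0, pad)), wavelet, level=level)
    details = np.concatenate([cd for _, cd in coeffs])
    p = details ** 2
    p = p / p.sum() if p.sum() > 0 else np.full_like(p, 1.0 / len(p))
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def cluster_genes(signals, k=4):
    """Cluster genes by the entropies of their (possibly multi-track) signals.

    signals : list of genes, each a list of 1D epigenetic signal tracks.
    """
    features = np.array([[swt_entropy(track) for track in gene] for gene in signals])
    _, labels = kmeans2(features, k, minit="points")
    return labels
```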

  13. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS.

    PubMed

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-12-04

    With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. As a common method, GPS sensors and INS sensors are applied to measure vehicle stability parameters by fusing data from the two sensor systems. Although a Kalman filter requires prior knowledge of the model parameters, it is usually used to fuse data from multiple sensors. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. The results show the advantages of the approach. Finally, a simulation and a real experiment are performed to verify these advantages. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.

  14. A morphometric approach for the accurate solvation thermodynamics of proteins and ligands.

    PubMed

    Harano, Yuichi; Roth, Roland; Chiba, Shuntaro

    2013-09-05

    We have developed a versatile method for calculating solvation thermodynamic quantities for molecules, starting from their atomic coordinates. The contribution of each atom to the thermodynamic quantities is estimated as a linear combination of four fundamental geometric measures defined by Hadwiger's theorem, with coefficients reflecting the solvation properties of each atomic species. This treatment enables us to calculate the solvation free energy with high accuracy despite the limited computational load. The method can readily be applied to macromolecules in an all-atom molecular model, allowing the stability of these molecules' structures in solution to be evaluated.
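
    In the morphometric form underlying this family of methods, the solvation free energy is written as a linear combination of the four Hadwiger measures of the solute (volume, surface area, integrated mean curvature, and integrated Gaussian curvature), with solvent-dependent thermodynamic coefficients. The generic expression, not the authors' specific parameterization, reads:

```latex
% Morphometric form of the solvation free energy; p, sigma, kappa, kappa_bar are
% solvent-dependent coefficients, kappa_1 and kappa_2 the principal curvatures
% of the solute surface, V its volume and A its surface area.
\mu_{\mathrm{solv}} \;=\; p\,V \;+\; \sigma\,A \;+\; \kappa\,C \;+\; \bar{\kappa}\,X ,
\qquad
C = \int_{\partial\Omega} \frac{\kappa_1 + \kappa_2}{2}\,\mathrm{d}A,
\qquad
X = \int_{\partial\Omega} \kappa_1 \kappa_2 \,\mathrm{d}A .
```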

  15. Data Anonymization that Leads to the Most Accurate Estimates of Statistical Characteristics: Fuzzy-Motivated Approach

    PubMed Central

    Xiang, G.; Ferson, S.; Ginzburg, L.; Longpré, L.; Mayorga, E.; Kosheleva, O.

    2013-01-01

    To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights. PMID:25187183

  16. An accurate and efficient experimental approach for characterization of the complex oral microbiota.

    PubMed

    Zheng, Wei; Tsompana, Maria; Ruscitto, Angela; Sharma, Ashu; Genco, Robert; Sun, Yijun; Buck, Michael J

    2015-10-05

    Currently, taxonomic interrogation of microbiota is based on amplification of 16S rRNA gene sequences in clinical and scientific settings. Accurate evaluation of the microbiota depends heavily on the primers used, and genus/species resolution bias can arise with amplification of non-representative genomic regions. The latest Illumina MiSeq sequencing chemistry has extended the read length to 300 bp, enabling deep profiling of a large number of samples in a single paired-end reaction at a fraction of the cost. An increasingly large number of researchers have adopted this technology for various microbiome studies targeting the 16S rRNA V3-V4 hypervariable region. To expand the applicability of this powerful platform for further descriptive and functional microbiome studies, we standardized and tested an efficient, reliable, and straightforward workflow for the amplification, library construction, and sequencing of the 16S V1-V3 hypervariable region using the new 2 × 300 MiSeq platform. Our analysis involved 11 subgingival plaque samples from diabetic and non-diabetic human subjects suffering from periodontitis. The efficiency and reliability of our experimental protocol were compared to 16S V3-V4 sequencing data from the same samples. Comparisons were based on measures of observed taxonomic richness and species evenness, along with Procrustes analyses using beta(β)-diversity distance metrics. As an experimental control, we also analyzed a total of eight technical replicates for the V1-V3 and V3-V4 regions from a synthetic community with known bacterial species operon counts. We show that our experimental protocol accurately measures true bacterial community composition. Procrustes analyses based on unweighted UniFrac β-diversity metrics depicted significant correlation between oral bacterial composition for the V1-V3 and V3-V4 regions. However, measures of phylotype richness were higher for the V1-V3 region, suggesting that V1-V3 offers a deeper assessment of

  17. Computer-based Approaches to Patient Education

    PubMed Central

    Lewis, Deborah

    1999-01-01

    All articles indexed in MEDLINE or CINAHL, related to the use of computer technology in patient education, and published in peer-reviewed journals between 1971 and 1998 were selected for review. Sixty-six articles, including 21 research-based reports, were identified. Forty-five percent of the studies were related to the management of chronic disease. Thirteen studies described an improvement in knowledge scores or clinical outcomes when computer-based patient education was compared with traditional instruction. Additional articles examined patients' computer experience, socioeconomic status, race, and gender and found no significant differences when compared with program outcomes. Sixteen of the 21 research-based studies had effect sizes greater than 0.5, indicating a significant change in the described outcome when the study subjects participated in computer-based patient education. The findings from this review support computer-based education as an effective strategy for transfer of knowledge and skill development for patients. The limited number of research studies (N = 21) points to the need for additional research. Recommendations for new studies include cost-benefit analysis and the impact of these new technologies on health outcomes over time. PMID:10428001

  18. Accurate Simulation of Resonance-Raman Spectra of Flexible Molecules: An Internal Coordinates Approach.

    PubMed

    Baiardi, Alberto; Bloino, Julien; Barone, Vincenzo

    2015-07-14

    The interpretation and analysis of experimental resonance-Raman (RR) spectra can be significantly facilitated by vibronic computations based on reliable quantum-mechanical (QM) methods. With the aim of improving the description of large and flexible molecules, our recent time-dependent formulation to compute vibrationally resolved electronic spectra, based on Cartesian coordinates, has been extended to support internal coordinates. A set of nonredundant delocalized coordinates is automatically generated from the molecular connectivity thanks to a new general and robust procedure. In order to validate our implementation, a series of molecules has been used as test cases. Among them, rigid systems show that normal modes based on Cartesian and delocalized internal coordinates provide equivalent results, but the latter set is much more convenient and reliable for systems characterized by strong geometric deformations associated with the electronic transition. The so-called Z-matrix internal coordinates, which perform well for chain molecules, are also shown to be poorly suited in the presence of cycles or nonstandard structures.

  19. Sentinel nodes identified by computed tomography-lymphography accurately stage the axilla in patients with breast cancer

    PubMed Central

    2013-01-01

    Background Sentinel node biopsy often results in the identification and removal of multiple nodes as sentinel nodes, although most of these nodes could be non-sentinel nodes. This study investigated whether computed tomography-lymphography (CT-LG) can distinguish sentinel nodes from non-sentinel nodes and whether sentinel nodes identified by CT-LG can accurately stage the axilla in patients with breast cancer. Methods This study included 184 patients with breast cancer and clinically negative nodes. Contrast agent was injected interstitially. The location of sentinel nodes was marked on the skin surface using a CT laser light navigator system. Lymph nodes located just under the marks were first removed as sentinel nodes. Then, all dyed nodes or all hot nodes were removed. Results The mean number of sentinel nodes identified by CT-LG was significantly lower than that of dyed and/or hot nodes removed (1.1 vs 1.8, p <0.0001). Twenty-three (12.5%) patients had ≥2 sentinel nodes identified by CT-LG removed, whereas 94 (51.1%) of patients had ≥2 dyed and/or hot nodes removed (p <0.0001). Pathological evaluation demonstrated that 47 (25.5%) of 184 patients had metastasis to at least one node. All 47 patients demonstrated metastases to at least one of the sentinel nodes identified by CT-LG. Conclusions CT-LG can distinguish sentinel nodes from non-sentinel nodes, and sentinel nodes identified by CT-LG can accurately stage the axilla in patients with breast cancer. Successful identification of sentinel nodes using CT-LG may facilitate image-based diagnosis of metastasis, possibly leading to the omission of sentinel node biopsy. PMID:24321242

  20. Human brain mapping: Experimental and computational approaches

    SciTech Connect

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J.; Sanders, J.; Belliveau, J.

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  1. An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).

    PubMed

    Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert

    2015-08-03

    The active sites of mammalian purple acid phosphatases (PAPs) contain a dinuclear iron center with two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)); the heterovalent form is the catalytically active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Therefore, two sites with different coordination geometries to stabilize the heterovalent active form and, in addition, with hydrogen bond donors to enable the fixation of the substrate and release of the product, are believed to be required for catalytically competent model systems. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate concentration dependent studies, leading to pH profiles, catalytic efficiencies and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system.

  2. Accurate Waveforms for Non-spinning Binary Black Holes using the Effective-one-body Approach

    NASA Technical Reports Server (NTRS)

    Buonanno, Alessandra; Pan, Yi; Baker, John G.; Centrella, Joan; Kelly, Bernard J.; McWilliams, Sean T.; vanMeter, James R.

    2007-01-01

    Using numerical relativity as guidance and the natural flexibility of the effective-one-body (EOB) model, we extend the latter so that it can successfully match the numerical relativity waveforms of non-spinning binary black holes during the last stages of inspiral, merger and ringdown. Here, by successfully we mean with phase differences of less than approximately 8% of a gravitational-wave cycle accumulated until the end of the ringdown phase. We obtain this result by simply adding a 4 post-Newtonian order correction in the EOB radial potential and determining the (constant) coefficient by imposing high-matching performances with numerical waveforms of mass ratios m1/m2 = 1, 2/3, 1/2 and 1/4, m1 and m2 being the individual black-hole masses. The final black-hole mass and spin predicted by the numerical simulations are used to determine the ringdown frequency and decay time of three quasi-normal-mode damped sinusoids that are attached to the EOB inspiral-(plunge) waveform at the light-ring. The accurate EOB waveforms may be employed for coherent searches of gravitational waves emitted by non-spinning coalescing binary black holes with ground-based laser-interferometer detectors.

  3. A new approach for highly accurate, remote temperature probing using magnetic nanoparticles

    PubMed Central

    Zhong, Jing; Liu, Wenzhong; Kong, Li; Morais, Paulo Cesar

    2014-01-01

    In this study, we report on a new approach for remote temperature probing that provides accuracy as good as 0.017°C (0.0055% accuracy) by measuring the magnetisation curve of magnetic nanoparticles. We included here the theoretical model construction and the inverse calculation method, and explored the impact caused by the temperature dependence of the saturation magnetisation and the applied magnetic field range. The reported results are of great significance in the establishment of safer protocols for hyperthermia therapy and for thermally assisted drug delivery technology. Likewise, our approach potentially impacts basic science as it provides a robust thermodynamic tool for noninvasive investigation of cell metabolism. PMID:25315470
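
    The record above describes an inverse calculation from a measured magnetisation curve to temperature. A minimal sketch of that idea, assuming a simple Langevin (superparamagnetic) model with a known particle moment and a temperature-independent saturation magnetisation, is shown below; the published method additionally treats the temperature dependence of the saturation magnetisation, and all numerical values here are invented.

```python
# Hedged sketch: estimate temperature from a magnetisation curve by fitting a
# Langevin model M(H; T) = Ms * (coth(x) - 1/x), with x = mu0 * m * H / (kB * T).
# The particle moment and Ms are assumed known; values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

KB = 1.380649e-23          # Boltzmann constant, J/K
MU0 = 4e-7 * np.pi         # vacuum permeability, T*m/A
M_PARTICLE = 2.0e-19       # assumed magnetic moment per particle, A*m^2
MS = 1.0                   # assumed saturation magnetisation (arbitrary units)

def langevin_magnetisation(H, T):
    x = MU0 * M_PARTICLE * H / (KB * T)
    with np.errstate(divide="ignore", invalid="ignore"):
        L = 1.0 / np.tanh(x) - 1.0 / x
    # coth(x) - 1/x -> 0 as x -> 0; patch the singular point explicitly.
    return MS * np.where(np.abs(x) < 1e-8, 0.0, L)

# Synthetic "measured" curve at an unknown temperature of 310 K.
H = np.linspace(-2e4, 2e4, 201)            # applied field, A/m
rng = np.random.default_rng(0)
M_meas = langevin_magnetisation(H, 310.0) + rng.normal(0, 1e-3, H.size)

# Inverse calculation: fit the single unknown, T.
(T_est,), cov = curve_fit(langevin_magnetisation, H, M_meas, p0=[300.0])
print(f"estimated temperature: {T_est:.2f} K +/- {np.sqrt(cov[0, 0]):.2f} K")
```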

  4. Accurate characterization of weak neutron fields by using a Bayesian approach.

    PubMed

    Medkour Ishak-Boushaki, G; Allab, M

    2017-04-01

    A Bayesian analysis of data derived from neutron spectrometric measurements provides the advantage of determining rigorously integral physical quantities characterizing the neutron field and their respective related uncertainties. The first and essential step in a Bayesian approach is the parameterization of the investigated neutron spectrum. The aim of this paper is to investigate the sensitivity of the Bayesian results, mainly the neutron dose H(*)(10) required for radiation protection purposes and its correlated uncertainty, to the selected neutron spectrum parameterization.

  5. Effective approach for accurately calculating individual energy of polar heterojunction interfaces

    NASA Astrophysics Data System (ADS)

    Akiyama, Toru; Nakane, Harunobu; Nakamura, Kohji; Ito, Tomonori

    2016-09-01

    We propose a direct approach for calculating individual energy of polar semiconductor interfaces using density functional theory calculations. This approach is applied to polar interfaces between group-III nitrides (AlN and GaN) and SiC and clarifies the interplay of chemical bonding and charge neutrality at the interface, which is crucial for the stability and polarity of group-III nitrides on SiC substrates. The ideal interface is stabilized among various atomic arrangements over the wide range of the chemical potential on Si-face SiC, whereas those with intermixing are favorable on C-face SiC. The stabilization of the ideal interfaces resulting in Ga-polar GaN and Al-polar AlN films on Si-face SiC is consistent with experiments, suggesting that our approach is versatile enough to evaluate various polar heterojunction interfaces as well as group-III nitrides on semiconductor substrates.

  6. Accurate ab initio potential energy computations for the H4 system: Tests of some analytic potential energy surfaces

    SciTech Connect

    Boothroyd, A.I.; Dove, J.E.; Keogh, W.J.; Martin, P.G.; Peterson, M.R.

    1991-09-15

    The interaction potential energy surface (PES) of H4 is of great importance for quantum chemistry, as a test case for molecule-molecule interactions. It is also required for a detailed understanding of certain astrophysical processes, namely, collisional excitation and dissociation of H2 in molecular clouds, at densities too low to be accessible experimentally. Accurate ab initio energies were computed for 6046 conformations of H4, using a multiple reference (single and) double excitation configuration interaction (MRD-CI) program. Both systematic and "random" errors were estimated to have an rms size of 0.6 mhartree, for a total rms error of about 0.9 mhartree (or 0.55 kcal/mol) in the final ab initio energy values. It proved possible to include in a self-consistent way ab initio energies calculated by Schwenke, bringing the number of H4 conformations to 6101. Ab initio energies were also computed for 404 conformations of H3; adding ab initio energies calculated by other authors yielded a total of 772 conformations of H3. (The H3 results, and an improved analytic PES for H3, are reported elsewhere.) Ab initio energies are tabulated in this paper only for a sample of H4 conformations; a full list of all 6101 conformations of H4 (and 772 conformations of H3) is available from Physics Auxiliary Publication Service (PAPS), or from the authors.

  7. Accurately computing the optical pathlength difference for a Michelson interferometer with minimal knowledge of the source spectrum.

    PubMed

    Milman, Mark H

    2005-12-01

    Astrometric measurements using stellar interferometry rely on precise measurement of the central white light fringe to accurately obtain the optical pathlength difference of incoming starlight to the two arms of the interferometer. One standard approach to stellar interferometry uses a channeled spectrum to determine phases at a number of different wavelengths that are then converted to the pathlength delay. When throughput is low these channels are broadened to improve the signal-to-noise ratio. Ultimately the ability to use monochromatic models and algorithms in each of the channels to extract phase becomes problematic and knowledge of the spectrum must be incorporated to achieve the accuracies required of the astrometric measurements. To accomplish this an optimization problem is posed to estimate simultaneously the pathlength delay and spectrum of the source. Moreover, the nature of the parameterization of the spectrum that is introduced circumvents the need to solve directly for these parameters so that the optimization problem reduces to a scalar problem in just the pathlength delay variable. A number of examples are given to show the robustness of the approach.
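
    The abstract's key point is that the optimization collapses to a scalar problem in the pathlength delay. The sketch below is not the paper's algorithm but illustrates that structure: per-channel fringe phasors are back-rotated by a trial delay and the coherent sum is maximized over the delay alone. Channel wavelengths, noise level, and the true delay are made-up values.

```python
# Minimal sketch (not the paper's algorithm): find the pathlength delay as the
# maximiser of a scalar merit function built from per-channel fringe phasors.
import numpy as np

wavelengths = np.linspace(600e-9, 900e-9, 16)   # channel centres, m (assumed)
true_delay = 3.2e-6                             # optical pathlength difference, m

rng = np.random.default_rng(1)
phases = 2 * np.pi * true_delay / wavelengths + rng.normal(0, 0.05, wavelengths.size)
phasors = np.exp(1j * phases)                   # "measured" per-channel phasors

def merit(d):
    """Magnitude of the coherent channel sum after back-rotating by trial delay d."""
    return np.abs(np.sum(phasors * np.exp(-1j * 2 * np.pi * d / wavelengths)))

# Coarse grid scan followed by a parabolic refinement around the best sample.
d_grid = np.linspace(0.0, 10e-6, 20001)
m = np.array([merit(d) for d in d_grid])
i = int(np.argmax(m))
d0, d1, d2 = d_grid[i - 1:i + 2]
m0, m1, m2 = m[i - 1:i + 2]
d_hat = d1 + 0.5 * (d1 - d0) * (m0 - m2) / (m0 - 2 * m1 + m2)
print(f"estimated delay: {d_hat * 1e6:.4f} um (true: {true_delay * 1e6:.4f} um)")
```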

  8. Accurate and Efficient Regularized Inversion Approach for the Interpretation of Isolated Gravity Anomalies

    NASA Astrophysics Data System (ADS)

    Mehanee, Salah A.

    2014-08-01

    A very fast and efficient approach for gravity data inversion based on the regularized conjugate gradient method has been developed. This approach simultaneously inverts for the depth ( z), and the amplitude coefficient ( A) of a buried anomalous body from the gravity data measured along a profile. The developed algorithm fits the observed data by a class of some geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, infinitely long horizontal cylinder, and sphere models using the logarithms of the model parameters [log( z) and log(| A|)] rather than the parameters themselves in its iterative minimization scheme. The presented numerical experiments have shown that the original (non-logarithmed) minimization scheme, which uses the parameters themselves ( z and | A|) instead of their logarithms, encountered a variety of convergence problems. The aforementioned transformation of the objective functional subjected to minimization into the space of logarithms of z and | A| overcomes these convergence problems. The reliability and the applicability of the developed algorithm have been demonstrated on several synthetic data sets with and without noise. It is then successfully and carefully applied to seven real data examples with bodies buried in different complex geologic settings and at various depths inside the earth. The method is shown to be highly applicable for mineral exploration, and for both shallow and deep earth imaging, and is of particular value in cases where the observed gravity data is due to an isolated body embedded in the subsurface.
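
    A hedged sketch of the log-parameter idea for one of the body types mentioned above (the sphere, whose profile anomaly has the form g(x) = A z / (x^2 + z^2)^1.5): inverting for log(z) and log(|A|) keeps both parameters positive and avoids the conditioning problems described in the abstract. The solver, noise level, and values below are illustrative, not the authors' implementation.

```python
# Hedged sketch of inversion in log-parameter space for the sphere model.
import numpy as np
from scipy.optimize import least_squares

def sphere_anomaly(x, z, A):
    # Profile gravity anomaly of a buried sphere: depth z, amplitude coefficient A.
    return A * z / (x**2 + z**2) ** 1.5

x = np.linspace(-200.0, 200.0, 101)            # profile coordinates, m
rng = np.random.default_rng(2)
g_obs = sphere_anomaly(x, z=40.0, A=5.0e5) + rng.normal(0, 0.5, x.size)

def residuals(p_log):
    z, A = np.exp(p_log)                       # work in log(z), log(A)
    return sphere_anomaly(x, z, A) - g_obs

fit = least_squares(residuals, x0=np.log([10.0, 1.0e4]))
z_est, A_est = np.exp(fit.x)
print(f"depth z = {z_est:.1f} m, amplitude A = {A_est:.3e}")
```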

  9. Implementing an Accurate and Rapid Sparse Sampling Approach for Low-Dose Atomic Resolution STEM Imaging

    SciTech Connect

    Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.; Browning, Nigel D.

    2016-10-17

    Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is the sample stability rather than the microscope that often limits the practical resolution of STEM images. To extract physical information from images of beam sensitive materials it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce the electron dose/dose-rate to the sample during imaging. By characterizing the induction limited rise-time and hysteresis in scan coils, we show that a sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for the sparse sampling are shown to be effectively decreased by a factor of 5 relative to conventional acquisition, permitting images of beam sensitive materials to be obtained without changing the microscope operating parameters. The use of a sparse line-hopping scan to acquire STEM images is demonstrated with atomic resolution aberration corrected Z-contrast images of CaCO3, a material that is traditionally difficult to image by TEM/STEM because of dose issues.
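
    A toy sketch of the dose-reduction argument (not the authors' reconstruction code): only a subset of scan lines is "acquired", and the unsampled pixels are filled by simple interpolation. Real STEM in-painting uses more sophisticated sparse reconstruction; the image and sampling fraction below are invented.

```python
# Toy sketch: sparse line sampling plus interpolation-based in-painting.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
image = np.sin(xx / 6.0) * np.cos(yy / 9.0)          # stand-in for a STEM image

sampled_rows = rng.choice(ny, size=ny // 5, replace=False)   # ~20% of the lines
mask = np.zeros((ny, nx), dtype=bool)
mask[sampled_rows, :] = True                          # only these lines see the beam

points = np.column_stack([yy[mask], xx[mask]])        # acquired pixel positions
values = image[mask]                                  # acquired intensities
recon = griddata(points, values, (yy, xx), method="linear")

dose_fraction = mask.mean()
err = np.nanmean(np.abs(recon - image))
print(f"illuminated fraction: {dose_fraction:.2f}, mean abs error: {err:.3f}")
```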

  10. Machine learning and synthetic aperture refocusing approach for more accurate masking of fish bodies in 3D PIV data

    NASA Astrophysics Data System (ADS)

    Ford, Logan; Bajpayee, Abhishek; Techet, Alexandra

    2015-11-01

    3D particle image velocimetry (PIV) is becoming a popular technique to study biological flows. PIV images that contain fish or other animals around which flow is being studied need to be appropriately masked in order to remove the animal body from the 3D reconstructed volumes prior to calculating particle displacement vectors. Presented here is a machine learning and synthetic aperture (SA) refocusing based approach for more accurate masking of fish from reconstructed intensity fields for 3D PIV purposes. Using prior knowledge about the 3D shape and appearance of the fish along with SA refocused images at arbitrarily oriented focal planes, the location and orientation of a fish in a reconstructed volume can be accurately determined. Once the location and orientation of a fish in a volume is determined, it can be masked out.

  11. Analysis and accurate reconstruction of incomplete data in X-ray differential phase-contrast computed tomography.

    PubMed

    Fu, Jian; Tan, Renbo; Chen, Liyuan

    2014-01-01

    X-ray differential phase-contrast computed tomography (DPC-CT) is a powerful physical and biochemical analysis tool. In practical applications, there are often challenges for DPC-CT due to insufficient data caused by few-view, bad or missing detector channels, or limited scanning angular range. They occur quite frequently because of experimental constraints from imaging hardware, scanning geometry, and the exposure dose delivered to living specimens. In this work, we analyze the influence of incomplete data on DPC-CT image reconstruction. Then, a reconstruction method is developed and investigated for incomplete data DPC-CT. It is based on an algebraic iteration reconstruction technique, which minimizes the image total variation and permits accurate tomographic imaging with less data. This work comprises a numerical study of the method and its experimental verification using a dataset measured at the W2 beamline of the storage ring DORIS III equipped with a Talbot-Lau interferometer. The numerical and experimental results demonstrate that the presented method can handle incomplete data. It will be of interest for a wide range of DPC-CT applications in medicine, biology, and nondestructive testing.
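
    The following toy (not the DPC-CT code) illustrates the principle of combining an algebraic iterative update with total-variation minimization on an underdetermined 1D problem with a random sensing matrix; in DPC-CT the matrix would encode the differential projection geometry, and the solver would be considerably more refined.

```python
# Illustrative toy: recover a piecewise-constant 1D signal from few linear
# measurements by alternating a gradient step on the data term ||Ax - b||^2
# with a subgradient step that decreases the total variation.
import numpy as np

rng = np.random.default_rng(4)
n, m = 200, 80                                # unknowns vs. measurements (m < n)
x_true = np.zeros(n)
x_true[40:90] = 1.0
x_true[120:150] = -0.5
A = rng.normal(size=(m, n)) / np.sqrt(m)      # stand-in for the measurement operator
b = A @ x_true

x = np.zeros(n)
step, lam = 0.05, 0.02
for _ in range(4000):
    # Data-fidelity (algebraic) update.
    x -= step * A.T @ (A @ x - b)
    # Subgradient step on the total variation sum_i |x[i+1] - x[i]|.
    d = np.sign(np.diff(x))
    tv_grad = np.concatenate([[-d[0]], d[:-1] - d[1:], [d[-1]]])
    x -= step * lam * tv_grad

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```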

  12. Interpolation Approach To Computer-Generated Holograms

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko

    1983-10-01

    A computer-generated hologram (CGH) for reconstructing independent NxN resolution points would actually require a hologram made up of NxN sampling cells. For dependent sampling points of Fourier transform CGHs, the required memory size for computation by using an interpolation technique for reconstructed image points can be reduced. We have made a mosaic hologram which consists of K x K subholograms with N x N sampling points multiplied by an appropriate weighting factor. It is shown that the mosaic hologram can reconstruct an image with NK x NK resolution points. The main advantage of the present algorithm is that a sufficiently large size hologram of NK x NK sample points is synthesized by K x K subholograms which are successively calculated from the data of N x N sample points and also successively plotted.

  13. A declarative approach to visualizing concurrent computations

    SciTech Connect

    Roman, G.C.; Cox, K.C. )

    1989-10-01

    That visualization can play a key role in the exploration of concurrent computations is central to the ideas presented. Equally important, although given less emphasis, is concern that the full potential of visualization may not be reached unless the art of generating beautiful pictures is rooted in a solid, formally technical foundation. The authors show that program verification provides a formal framework around which such a foundation can be built. Making these ideas a practical reality will require both research and experimentation.

  14. Computational Approaches to Drug Repurposing and Pharmacology

    PubMed Central

    Hodos, Rachel A; Kidd, Brian A; Khader, Shameer; Readhead, Ben P; Dudley, Joel T

    2016-01-01

    Data in the biological, chemical, and clinical domains are accumulating at ever-increasing rates and have the potential to accelerate and inform drug development in new ways. Challenges and opportunities now lie in developing analytic tools to transform these often complex and heterogeneous data into testable hypotheses and actionable insights. This is the aim of computational pharmacology, which uses in silico techniques to better understand and predict how drugs affect biological systems, which can in turn improve clinical use, avoid unwanted side effects, and guide selection and development of better treatments. One exciting application of computational pharmacology is drug repurposing: finding new uses for existing drugs. Already yielding many promising candidates, this strategy has the potential to improve the efficiency of the drug development process and reach patient populations with previously unmet needs such as those with rare diseases. While current techniques in computational pharmacology and drug repurposing often focus on just a single data modality such as gene expression or drug-target interactions, we rationalize that methods such as matrix factorization that can integrate data within and across diverse data types have the potential to improve predictive performance and provide a fuller picture of a drug's pharmacological action. PMID:27080087
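
    A hedged illustration of the matrix-factorization idea mentioned in the abstract: factor a synthetic drug-by-indication association matrix into low-rank components and use the reconstruction to score unobserved pairs as repurposing candidates. The data are random, and a serious implementation would down-weight or mask unobserved entries rather than treating them as zeros.

```python
# Hedged sketch: low-rank factorization of an association matrix to rank
# unobserved (drug, indication) pairs.  Synthetic data only.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
n_drugs, n_indications, rank = 30, 20, 4

# Low-rank "ground truth" association strengths, with 30% of entries hidden.
W_true = rng.random((n_drugs, rank))
H_true = rng.random((rank, n_indications))
associations = W_true @ H_true
observed_mask = rng.random(associations.shape) > 0.3
observed = associations * observed_mask        # hidden entries set to zero (crude)

model = NMF(n_components=rank, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(observed)
H = model.components_
scores = W @ H                                  # reconstructed association strengths

# Rank the hidden pairs by predicted strength: candidate "repurposings".
hidden = np.argwhere(~observed_mask)
top = hidden[np.argsort(-scores[~observed_mask])][:5]
print("top predicted hidden (drug, indication) pairs:", top.tolist())
```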

  15. A Social Construction Approach to Computer Science Education

    ERIC Educational Resources Information Center

    Machanick, Philip

    2007-01-01

    Computer science education research has mostly focused on cognitive approaches to learning. Cognitive approaches to understanding learning do not account for all the phenomena observed in teaching and learning. A number of apparently successful educational approaches, such as peer assessment, apprentice-based learning and action learning, have…

  16. Inverse-problem approach for particle digital holography: accurate location based on local optimization.

    PubMed

    Soulez, Ferréol; Denis, Loïc; Fournier, Corinne; Thiébaut, Eric; Goepfert, Charles

    2007-04-01

    We propose a microparticle localization scheme in digital holography. Most conventional digital holography methods are based on the Fresnel transform and present several problems, such as twin-image noise and border effects, among others. To avoid these difficulties, we propose an inverse-problem approach, which yields the optimal particle set that best models the observed hologram image. We resolve this global optimization problem by conventional particle detection followed by a local refinement for each particle. Results for both simulated and real digital holograms show strong improvement in the localization of the particles, particularly along the depth dimension. In our simulations, the position precision is ≥1 μm rms. Our results also show that the localization precision does not deteriorate for particles near the edge of the field of view.

  17. An efficient and accurate technique to compute the absorption, emission, and transmission of radiation by the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.

    1990-01-01

    CO2 comprises 95% of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particles vary in size from less than 0.2 to greater than 3.0. CO2 is an active absorber and emitter in near IR and IR wavelengths; the near IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult. Aerosol radiative transfer requires a multiple scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment used in most models, which causes inaccuracies, a treatment called the exponential-sum, or k-distribution, approximation was developed. The chief advantage of the exponential-sum approach is that the integration of f(k) over k space can be computed more quickly than the integration of k_nu over frequency. The exponential-sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
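
    A minimal numerical sketch of why the exponential-sum (k-distribution) idea works, using a made-up line spectrum: the band-averaged transmission depends only on the distribution of absorption coefficients, not on their arrangement in frequency, so integrating over a few quantiles of the sorted k values reproduces a fine integration over frequency.

```python
# Sketch: band transmission via a k-distribution versus brute-force frequency
# integration, for a synthetic absorption spectrum (sharp lines on a continuum).
import numpy as np

rng = np.random.default_rng(6)
nu = np.linspace(0.0, 1.0, 20000)
k_nu = 0.05 + sum(a / (1 + ((nu - c) / 1e-3) ** 2)
                  for a, c in zip(rng.uniform(1, 30, 40), rng.random(40)))

u = np.logspace(-2, 1, 5)                      # absorber amounts (path lengths)

# Reference: brute-force integration over frequency.
T_exact = np.array([np.mean(np.exp(-k_nu * ui)) for ui in u])

# k-distribution: a few Gauss-Legendre points in the cumulative variable g.
g_nodes, g_weights = np.polynomial.legendre.leggauss(8)
g_nodes = 0.5 * (g_nodes + 1.0)                # map from [-1, 1] to [0, 1]
g_weights = 0.5 * g_weights
k_sorted = np.sort(k_nu)
k_g = np.interp(g_nodes, np.linspace(0, 1, k_sorted.size), k_sorted)
T_kdist = np.array([np.sum(g_weights * np.exp(-k_g * ui)) for ui in u])

for ui, te, tk in zip(u, T_exact, T_kdist):
    print(f"u = {ui:8.3f}  exact T = {te:.4f}  8-point k-dist T = {tk:.4f}")
```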

  18. A simple, efficient, and high-order accurate curved sliding-mesh interface approach to spectral difference method on coupled rotating and stationary domains

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Liang, Chunlei

    2015-08-01

    This paper presents a simple, efficient, and high-order accurate sliding-mesh interface approach to the spectral difference (SD) method. We demonstrate the approach by solving the two-dimensional compressible Navier-Stokes equations on quadrilateral grids. This approach is an extension of the straight mortar method originally designed for stationary domains [7,8]. Our sliding method creates curved dynamic mortars on sliding-mesh interfaces to couple rotating and stationary domains. On the nonconforming sliding-mesh interfaces, the related variables are first projected from cell faces to mortars to compute common fluxes, and then the common fluxes are projected back from the mortars to the cell faces to ensure conservation. To verify the spatial order of accuracy of the sliding-mesh spectral difference (SSD) method, both inviscid and viscous flow cases are tested. It is shown that the SSD method preserves the high-order accuracy of the SD method. Meanwhile, the SSD method is found to be very efficient in terms of computational cost. This novel sliding-mesh interface method is very suitable for parallel processing with domain decomposition. It can be applied to a wide range of problems, such as the hydrodynamics of marine propellers, the aerodynamics of rotorcraft, wind turbines, and oscillating wing power generators, etc.

  19. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  20. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms into NASA's Future ATM Concepts Evaluation Tool (FACET); these are compared against a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates that with FACET to facilitate the use of the new features which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of trajectory optimization algorithms use MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  1. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.

    PubMed

    Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L

    2015-06-30

    Migraine is one of the most wide-spread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction, and they cannot be used to advance the intake of drugs early enough to be effective in neutralizing the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and failures in sensors of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives.

  2. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    PubMed Central

    Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.

    2015-01-01

    Migraine is one of the most wide-spread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction, and they cannot be used to advance the intake of drugs early enough to be effective in neutralizing the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and failures in sensors of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103

  3. A binding free energy decomposition approach for accurate calculations of the fidelity of DNA polymerases

    PubMed Central

    Rucker, Robert; Oelschlaeger, Peter; Warshel, Arieh

    2010-01-01

    DNA polymerase β (pol β) is a small eukaryotic enzyme with the ability to repair short single-stranded DNA gaps that has found use as a model system for larger replicative DNA polymerases. For all DNA polymerases, the factors determining their catalytic power and fidelity are the interactions between the bases of the base pair, amino acids near the active site, and the two magnesium ions. In this report, we study effects of all three aspects on human pol β transition state (TS) binding free energies by reproducing a consistent set of experimentally determined data for different structures. Our calculations comprise the combination of four different base pairs (incoming pyrimidine nucleotides incorporated opposite both matched and mismatched purines) with four different pol β structures (wild type and three separate mutations of ionized residues to alanine). We decompose the incoming deoxynucleoside 5′-triphosphate-TS, and run separate calculations for the neutral base part and the highly charged triphosphate part, using different dielectric constants in order to account for the specific electrostatic environments. This new approach improves our ability to predict the effect of matched and mismatched base pairing and of mutations in DNA polymerases on fidelity and may be a useful tool in studying the potential of DNA polymerase mutations in the development of cancer. It also supports our point of view with regards to the origin of the structural control of fidelity, allowing for a quantified description of the fidelity of DNA polymerases. PMID:19842163

  4. Accurate evaporation rates of pure and doped water clusters in vacuum: A statistico-dynamical approach.

    PubMed

    Calvo, F; Douady, J; Spiegelman, F

    2010-01-14

    Unimolecular evaporation of selected pure (H(2)O)(n) and heterogeneous (H(2)O)(n-1)X(+) water clusters containing a single hydronium or ammonium impurity is investigated in the framework of phase space theory (PST) in its orbiting transition state version. Using the many-body polarizable Kozack-Jordan potential and its extensions for X(+)=H(3)O(+) and NH(4)(+), the thermal evaporation of clusters containing 21 and 50 molecules is simulated at several total energies. Numerous molecular dynamics (MD) trajectories at high internal energies provide estimates of the decay rate constant, as well as the kinetic energy and angular momentum released upon dissociation. Additional Monte Carlo simulations are carried out to determine the anharmonic densities of vibrational states, which combined with suitable forms for the rotational densities of states provide expressions for the energy-resolved differential rates. Successful comparison between the MD results and the independent predictions of PST for the distributions of kinetic energy and angular momentum released shows that the latter statistical approach is quantitative. Using MD data as a reference, the absolute evaporation rates are calculated from PST over broad energy and temperature ranges. Based on these results, the presence of an ionic impurity is generally found to decrease the rate; however, the effect is much more significant in the 21-molecule clusters. Our calculations also suggest that due to backbendings in the microcanonical densities of states the variations of the evaporation rates may not be strictly increasing with energy or temperature.

  5. EFICAz: a comprehensive approach for accurate genome-scale enzyme function inference

    PubMed Central

    Tian, Weidong; Arakaki, Adrian K.; Skolnick, Jeffrey

    2004-01-01

    EFICAz (Enzyme Function Inference by Combined Approach) is an automatic engine for large-scale enzyme function inference that combines predictions from four different methods developed and optimized to achieve high prediction accuracy: (i) recognition of functionally discriminating residues (FDRs) in enzyme families obtained by a Conservation-controlled HMM Iterative procedure for Enzyme Family classification (CHIEFc), (ii) pairwise sequence comparison using a family specific Sequence Identity Threshold, (iii) recognition of FDRs in Multiple Pfam enzyme families, and (iv) recognition of multiple Prosite patterns of high specificity. For FDR (i.e. conserved positions in an enzyme family that discriminate between true and false members of the family) identification, we have developed an Evolutionary Footprinting method that uses evolutionary information from homofunctional and heterofunctional multiple sequence alignments associated with an enzyme family. The FDRs show a significant correlation with annotated active site residues. In a jackknife test, EFICAz shows high accuracy (92%) and sensitivity (82%) for predicting four EC digits in testing sequences that are <40% identical to any member of the corresponding training set. Applied to Escherichia coli genome, EFICAz assigns more detailed enzymatic function than KEGG, and generates numerous novel predictions. PMID:15576349

  6. EFICAz: a comprehensive approach for accurate genome-scale enzyme function inference.

    PubMed

    Tian, Weidong; Arakaki, Adrian K; Skolnick, Jeffrey

    2004-01-01

    EFICAz (Enzyme Function Inference by Combined Approach) is an automatic engine for large-scale enzyme function inference that combines predictions from four different methods developed and optimized to achieve high prediction accuracy: (i) recognition of functionally discriminating residues (FDRs) in enzyme families obtained by a Conservation-controlled HMM Iterative procedure for Enzyme Family classification (CHIEFc), (ii) pairwise sequence comparison using a family specific Sequence Identity Threshold, (iii) recognition of FDRs in Multiple Pfam enzyme families, and (iv) recognition of multiple Prosite patterns of high specificity. For FDR (i.e. conserved positions in an enzyme family that discriminate between true and false members of the family) identification, we have developed an Evolutionary Footprinting method that uses evolutionary information from homofunctional and heterofunctional multiple sequence alignments associated with an enzyme family. The FDRs show a significant correlation with annotated active site residues. In a jackknife test, EFICAz shows high accuracy (92%) and sensitivity (82%) for predicting four EC digits in testing sequences that are <40% identical to any member of the corresponding training set. Applied to Escherichia coli genome, EFICAz assigns more detailed enzymatic function than KEGG, and generates numerous novel predictions.

  7. Information theoretic approaches to multidimensional neural computations

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Jeffrey D.

    Many systems in nature process information by transforming inputs from their environments into observable output states. These systems are often difficult to study because they are performing computations on multidimensional inputs with many degrees of freedom using highly nonlinear functions. The work presented in this dissertation deals with some of the issues involved with characterizing real-world input/output systems and understanding the properties of idealized systems using information theoretic methods. Using the principle of maximum entropy, a family of models are created that are consistent with certain measurable correlations from an input/output dataset but are maximally unbiased in all other respects, thereby eliminating all unjustified assumptions about the computation. In certain cases, including spiking neurons, we show that these models also minimize the mutual information. This property gives one the advantage of being able to identify the relevant input/output statistics by calculating their information content. We argue that these maximum entropy models provide a much needed quantitative framework for characterizing and understanding sensory processing neurons that are selective for multiple stimulus features. To demonstrate their usefulness, these ideas are applied to neural recordings from macaque retina and thalamus. These neurons, which primarily respond to two stimulus features, are shown to be well described using only first and second order statistics, indicating that their firing rates encode information about stimulus correlations. In addition to modeling multi-feature computations in the relevant feature space, we also show that maximum entropy models are capable of discovering the relevant feature space themselves. This technique overcomes the disadvantages of two commonly used dimensionality reduction methods and is explored using several simulated neurons, as well as retinal and thalamic recordings. Finally, we ask how neurons in a

  8. Acoustic gravity waves: A computational approach

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Dutt, P. K.

    1987-01-01

    This paper discusses numerical solutions of a hyperbolic initial boundary value problem that arises from acoustic wave propagation in the atmosphere. Field equations are derived from the atmospheric fluid flow governed by the Euler equations. The resulting original problem is nonlinear. A first order linearized version of the problem is used for computational purposes. The main difficulty in the problem as with any open boundary problem is in obtaining stable boundary conditions. Approximate boundary conditions are derived and shown to be stable. Numerical results are presented to verify the effectiveness of these boundary conditions.

  9. How accurate are adolescents in portion-size estimation using the computer tool Young Adolescents' Nutrition Assessment on Computer (YANA-C)?

    PubMed

    Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea

    2010-06-01

    Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amounts of ten commonly consumed foods (breakfast cereals, French fries, pasta, rice, apple sauce, carrots and peas, crisps, creamy velouté, red cabbage, and peas). Two procedures were followed: (1) short-term recall: adolescents (n 73) self-served their usual portions of the ten foods and estimated the amounts later the same day; (2) real-time perception: adolescents (n 128) estimated two sets (different portions) of pre-weighed portions displayed near the computer. Self-served portions were, on average, 8 % underestimated; significant underestimates were found for breakfast cereals, French fries, peas, and carrots and peas. Spearman's correlations between the self-served and estimated weights varied between 0.51 and 0.84, with an average of 0.72. The kappa statistics were moderate (>0.4) for all but one item. Pre-weighed portions were, on average, 15 % underestimated, with significant underestimates for fourteen of the twenty portions. Photographs of food items can serve as a good aid in ranking subjects; however, to assess the actual intake at a group level, underestimation must be considered.

  10. Hydrogen sulfide detection based on reflection: from a poison test approach of ancient China to single-cell accurate localization.

    PubMed

    Kong, Hao; Ma, Zhuoran; Wang, Song; Gong, Xiaoyun; Zhang, Sichun; Zhang, Xinrong

    2014-08-05

    With the inspiration of an ancient Chinese poison test approach, we report a rapid hydrogen sulfide detection strategy in specific areas of live cells using silver needles with good spatial resolution of 2 × 2 μm(2). Besides its accurate-localization ability, this reflection-based strategy also has the attractive merits of convenience and robust response, requiring no pretreatment and only a short detection time. The success of endogenous H2S level evaluation in the cellular cytoplasm and nucleus of human A549 cells promises the application potential of our strategy in scientific research and medical diagnosis.

  11. A Computational Approach to Competitive Range Expansions

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Poxleitner, Gabriele; Hebisch, Elke; Frey, Erwin; Opitz, Madeleine

    2014-03-01

    Bacterial communities represent complex and dynamic ecological systems. Environmental conditions and microbial interactions determine whether a bacterial strain survives an expansion to new territory. In our work, we studied competitive range expansions in a model system of three Escherichia coli strains. In this system, a colicin producing strain competed with a colicin resistant, and with a colicin sensitive strain for new territory. Genetic engineering allowed us to tune the strains' growth rates and to study their expansion in distinct ecological scenarios (with either cyclic or hierarchical dominance). The control over growth rates also enabled us to construct and to validate a predictive computational model of the bacterial dynamics. The model rested on an agent-based, coarse-grained description of the expansion process and we conducted independent experiments on the growth of single-strain colonies for its parametrization. Furthermore, the model considered the long-range nature of the toxin interaction between strains. The integration of experimental analysis with computational modeling made it possible to quantify how the level of biodiversity depends on the interplay between bacterial growth rates, the initial composition of the inoculum, and the toxin range.
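
    A toy lattice sketch loosely inspired by the system described above (not the authors' model): three strains, a toxin producer, a resistant strain, and a sensitive strain, expand into empty sites of a 2D lattice with different growth probabilities, and the producer kills sensitive cells within a toxin range. All parameters and rules below are invented for illustration.

```python
# Toy agent-based range expansion with a long-range toxin interaction.
import numpy as np

rng = np.random.default_rng(7)
EMPTY, P, R, S = 0, 1, 2, 3              # producer, resistant, sensitive
growth = {P: 0.8, R: 0.9, S: 1.0}        # producer pays a cost, sensitive fastest
toxin_range = 3

L = 60
lattice = np.zeros((L, L), dtype=int)
lattice[L // 2, :] = rng.choice([P, R, S], size=L)    # mixed inoculum line

def neighbours(i, j):
    return [((i + di) % L, (j + dj) % L)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]

for _ in range(120):
    # Growth: occupied cells try to copy themselves into empty neighbours.
    occupied = np.argwhere(lattice != EMPTY)
    for i, j in occupied[rng.permutation(len(occupied))]:
        strain = int(lattice[i, j])
        if strain != EMPTY and rng.random() < growth[strain]:
            empties = [(a, b) for a, b in neighbours(i, j) if lattice[a, b] == EMPTY]
            if empties:
                lattice[empties[rng.integers(len(empties))]] = strain
    # Toxin: sensitive cells within range of any producer die.
    for i, j in np.argwhere(lattice == P):
        block = lattice[max(0, i - toxin_range):i + toxin_range + 1,
                        max(0, j - toxin_range):j + toxin_range + 1]
        block[block == S] = EMPTY

counts = {name: int(np.sum(lattice == code))
          for name, code in zip(("producer", "resistant", "sensitive"), (P, R, S))}
print("final abundances:", counts)
```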

  12. Novel Computational Approaches to Drug Discovery

    NASA Astrophysics Data System (ADS)

    Skolnick, Jeffrey; Brylinski, Michal

    2010-01-01

    New approaches to protein functional inference based on protein structure and evolution are described. First, FINDSITE, a threading based approach to protein function prediction, is summarized. Then, the results of large scale benchmarking of ligand binding site prediction, ligand screening, including applications to HIV protease, and GO molecular functional inference are presented. A key advantage of FINDSITE is its ability to use low resolution, predicted structures as well as high resolution experimental structures. Then, an extension of FINDSITE to ligand screening in GPCRs using predicted GPCR structures, FINDSITE/QDOCKX, is presented. This is a particularly difficult case as there are few experimentally solved GPCR structures. Thus, we first train on a subset of known binding ligands for a set of GPCRs; this is then followed by benchmarking against a large ligand library. For the virtual ligand screening of a number of Dopamine receptors, encouraging results are seen, with significant enrichment in identified ligands over those found in the training set. Thus, FINDSITE and its extensions represent a powerful approach to the successful prediction of a variety of molecular functions.

  13. A new approach to accurate validation of remote sensing retrieval of evapotranspiration based on data fusion

    NASA Astrophysics Data System (ADS)

    Sun, C.; Jiang, D.; Wang, J.; Zhu, Y.

    2010-03-01

    in accordance with previous studies (Jamieson, 1982; Dugas and Ainsworth, 1985; Benson et al., 1992; Pereira and Nova, 1992). After the data fusion, the correlation (R2=0.8516) between the monthly runoff obtained from the simulation based on ET retrieval and the observed data was higher than that (R2=0.8411) between the data obtained from the PM-based ET simulation and the observed data. As for the RMSE, the result (RMSE=26.0860) between the simulated runoff based on ET retrieval and the observed data was also superior to the result (RMSE=35.71904) between the simulated runoff obtained with PM-based ET and the observed data. As for the MBE parameter, the result (MBE=-8.6578) for the RS retrieval method was obviously better than that (MBE=-22.7313) for the PM-based method. The comparison of them showed that the RS retrieval had better adaptivity and higher accuracy than the PM-based method, and the new approach based on data fusion and the distributed hydrological model was feasible, reliable and worth being studied further.

  14. Computational approaches to natural product discovery

    PubMed Central

    Medema, Marnix H.; Fischbach, Michael A.

    2016-01-01

    From the earliest Streptomyces genome sequences, the promise of natural product genome mining has been captivating: genomics and bioinformatics would transform compound discovery from an ad hoc pursuit to a high-throughput endeavor. Until recently, however, genome mining has advanced natural product discovery only modestly. Here, we argue that the development of algorithms to mine the continuously increasing amounts of (meta)genomic data will enable the promise of genome mining to be realized. We review computational strategies that have been developed to identify biosynthetic gene clusters in genome sequences and predict the chemical structures of their products. We then discuss networking strategies that can systematize large volumes of genetic and chemical data, and connect genomic information to metabolomic and phenotypic data. Finally, we provide a vision of what natural product discovery might look like in the future, specifically considering long-standing questions in microbial ecology regarding the roles of metabolites in interspecies interactions. PMID:26284671

  15. Computational approaches to natural product discovery.

    PubMed

    Medema, Marnix H; Fischbach, Michael A

    2015-09-01

    Starting with the earliest Streptomyces genome sequences, the promise of natural product genome mining has been captivating: genomics and bioinformatics would transform compound discovery from an ad hoc pursuit to a high-throughput endeavor. Until recently, however, genome mining has advanced natural product discovery only modestly. Here, we argue that the development of algorithms to mine the continuously increasing amounts of (meta)genomic data will enable the promise of genome mining to be realized. We review computational strategies that have been developed to identify biosynthetic gene clusters in genome sequences and predict the chemical structures of their products. We then discuss networking strategies that can systematize large volumes of genetic and chemical data and connect genomic information to metabolomic and phenotypic data. Finally, we provide a vision of what natural product discovery might look like in the future, specifically considering longstanding questions in microbial ecology regarding the roles of metabolites in interspecies interactions.

  16. Numerical Computation of Sensitivities and the Adjoint Approach

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.
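
    A minimal finite-dimensional example of the adjoint idea discussed above: for J(p) = c^T u(p) with A(p) u = b, a single adjoint solve A^T lambda = c yields the sensitivity dJ/dp_k = -lambda^T (dA/dp_k) u for every parameter at once; the result is checked against finite differences. The matrix parameterization is arbitrary and chosen only for this demonstration.

```python
# Adjoint sensitivities for J(p) = c^T u(p) with A(p) u = b, verified by
# central finite differences.
import numpy as np

rng = np.random.default_rng(8)
n, n_params = 6, 3
A0 = np.eye(n) * 4.0 + rng.normal(0, 0.3, (n, n))           # baseline operator
dA = [rng.normal(0, 0.2, (n, n)) for _ in range(n_params)]  # dA/dp_k (constant)
b = rng.normal(size=n)
c = rng.normal(size=n)

def A_of(p):
    return A0 + sum(pk * dAk for pk, dAk in zip(p, dA))

def J(p):
    return c @ np.linalg.solve(A_of(p), b)

p = np.array([0.1, -0.2, 0.3])

# One forward solve and one adjoint solve give the full gradient.
u = np.linalg.solve(A_of(p), b)
lam = np.linalg.solve(A_of(p).T, c)
grad_adjoint = np.array([-lam @ (dAk @ u) for dAk in dA])

# Finite-difference check.
eps = 1e-6
grad_fd = np.array([(J(p + eps * e) - J(p - eps * e)) / (2 * eps)
                    for e in np.eye(n_params)])
print("adjoint           :", grad_adjoint)
print("finite differences:", grad_fd)
```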

  17. A Social Constructivist Approach to Computer-Mediated Instruction.

    ERIC Educational Resources Information Center

    Pear, Joseph J.; Crone-Todd, Darlene E.

    2002-01-01

    Describes a computer-mediated teaching system called computer-aided personalized system of instruction (CAPSI) that incorporates a social constructivist approach, maintaining that learning occurs primarily through a socially interactive process. Discusses use of CAPSI in an undergraduate course at the University of Manitoba that showed students…

  18. Assessment of the extended Koopmans' theorem for the chemical reactivity: Accurate computations of chemical potentials, chemical hardnesses, and electrophilicity indices.

    PubMed

    Yildiz, Dilan; Bozkaya, Uğur

    2016-01-30

    The extended Koopmans' theorem (EKT) provides a straightforward way to compute ionization potentials and electron affinities from any level of theory. Although it is widely applied to ionization potentials, the EKT approach has not been applied to evaluation of the chemical reactivity. We present the first benchmarking study to investigate the performance of the EKT methods for predictions of chemical potentials (μ) (hence electronegativities), chemical hardnesses (η), and electrophilicity indices (ω). We assess the performance of the EKT approaches for post-Hartree-Fock methods, such as Møller-Plesset perturbation theory, the coupled-electron pair theory, and their orbital-optimized counterparts for the evaluation of the chemical reactivity. Especially, results of the orbital-optimized coupled-electron pair theory method (with the aug-cc-pVQZ basis set) for predictions of the chemical reactivity are very promising; the corresponding mean absolute errors are 0.16, 0.28, and 0.09 eV for μ, η, and ω, respectively.
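
    The quantities benchmarked in this record relate to ionization potentials (IP) and electron affinities (EA) through the standard finite-difference relations of conceptual DFT; a minimal helper is sketched below. Conventions vary (the hardness is sometimes defined with an extra factor of 1/2), and the numerical IP/EA values are illustrative only.

```python
# Standard finite-difference relations for the reactivity indices named above:
#   chemical potential  mu    = -(IP + EA) / 2   (electronegativity chi = -mu)
#   chemical hardness   eta   =   IP - EA        (some conventions use (IP-EA)/2)
#   electrophilicity    omega =   mu**2 / (2 * eta)
def reactivity_indices(ip_ev: float, ea_ev: float):
    mu = -(ip_ev + ea_ev) / 2.0
    eta = ip_ev - ea_ev
    omega = mu ** 2 / (2.0 * eta)
    return mu, eta, omega

# Illustrative values, not taken from the paper.
mu, eta, omega = reactivity_indices(ip_ev=10.5, ea_ev=1.2)
print(f"mu = {mu:.2f} eV, eta = {eta:.2f} eV, omega = {omega:.2f} eV")
```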

  19. Fast and Accurate Resonance Assignment of Small-to-Large Proteins by Combining Automated and Manual Approaches

    PubMed Central

    Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A.; Lundström, Patrik

    2015-01-01

    The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available. PMID:25569628

  20. Computational Approaches for Understanding Energy Metabolism

    PubMed Central

    Shestov, Alexander A; Barker, Brandon; Gu, Zhenglong; Locasale, Jason W

    2013-01-01

    There has been a surge of interest in understanding the regulation of metabolic networks involved in disease in recent years. Quantitative models are increasingly being used to interrogate the metabolic pathways that are contained within this complex disease biology. At the core of this effort is the mathematical modeling of central carbon metabolism involving glycolysis and the citric acid cycle (referred to as energy metabolism). Here we discuss several approaches used to quantitatively model metabolic pathways relating to energy metabolism and discuss their formalisms, successes, and limitations. PMID:23897661

  1. In silico drug discovery approaches on grid computing infrastructures.

    PubMed

    Wolf, Antje; Shahid, Mohammad; Kasam, Vinod; Ziegler, Wolfgang; Hofmann-Apitius, Martin

    2010-02-01

    The first step in finding a "drug" is screening chemical compound databases against a protein target. In silico approaches like virtual screening by molecular docking are well established in modern drug discovery. As molecular databases of compounds and target structures are becoming larger and more and more computational screening approaches are available, there is an increased need for compute power and more complex workflows. In this regard, computational Grids are predestined and offer seamless compute and storage capacity. In recent projects related to pharmaceutical research, the high computational and data storage demands of large-scale in silico drug discovery approaches have been addressed by using Grid computing infrastructures, in both pharmaceutical industry and academic research. Grid infrastructures are part of the so-called eScience paradigm, where a digital infrastructure supports collaborative processes by providing relevant resources and tools for data- and compute-intensive applications. Substantial computing resources, large data collections and services for data analysis are shared on the Grid infrastructure and can be mobilized on demand. This review gives an overview on the use of Grid computing for in silico drug discovery and tries to provide a vision of future development of more complex and integrated workflows on Grids, spanning from target identification and target validation via protein-structure and ligand dependent screenings to advanced mining of large scale in silico experiments.

  2. An integrated experimental and computational approach for ...

    EPA Pesticide Factsheets

    Enantiomers of chiral molecules commonly exhibit differing pharmacokinetics and toxicities, which can introduce significant uncertainty when evaluating biological and environmental fates and potential risks to humans and the environment. However, racemization (the irreversible transformation of one enantiomer into the racemic mixture) and enantiomerization (the reversible conversion of one enantiomer into the other) are poorly understood. To better understand these processes, we investigated the chiral fungicide, triadimefon, which undergoes racemization in soils, water, and organic solvents. Nuclear magnetic resonance (NMR) and gas chromatography / mass spectrometry (GC/MS) techniques were used to measure the rates of enantiomerization and racemization, deuterium isotope effects, and activation energies for triadimefon in H2O and D2O. From these results we were able to determine that: 1) the alpha-carbonyl carbon of triadimefon is the reaction site; 2) cleavage of the C-H (C-D) bond is the rate-determining step; 3) the reaction is base-catalyzed; and 4) the reaction likely involves a symmetrical intermediate. The B3LYP/6-311+G** level of theory was used to compute optimized geometries, harmonic vibrational frequencies, natural population analysis, and intrinsic reaction coordinates for triadimefon in water, and three racemization pathways were hypothesized. This work provides an initial step in developing predictive, structure-based models that are needed to

  3. An integrated experimental and computational approach for ...

    EPA Pesticide Factsheets

    Enantiomers of chiral molecules commonly exhibit differing pharmacokinetics and toxicities, which can introduce significant uncertainty when evaluating biological and environmental fates and potential risks to humans and the environment. However, racemization (the irreversible transformation of one enantiomer into the racemic mixture) and enantiomerization (the reversible conversion of one enantiomer into the other) are poorly understood. To better understand these processes, we investigated the chiral fungicide, triadimefon, which undergoes racemization in soils, water, and organic solvents. Nuclear magnetic resonance (NMR) and gas chromatography / mass spectrometry (GC/MS) techniques were used to measure the rates of enantiomerization and racemization, deuterium isotope effects, and activation energies for triadimefon in H2O and D2O. From these results we were able to determine that: 1) the alpha-carbonyl carbon of triadimefon is the reaction site; 2) cleavage of the C-H (C-D) bond is the rate-determining step; 3) the reaction is base-catalyzed; and 4) the reaction likely involves a symmetrical intermediate. The B3LYP/6-311+G** level of theory was used to compute optimized geometries, harmonic vibrational frequencies, natural population analysis, and intrinsic reaction coordinates for triadimefon in water, and three racemization pathways were hypothesized. This work provides an initial step in developing predictive, structure-based models that are needed to

  4. Computational fluid dynamics in ventilation: Practical approach

    NASA Astrophysics Data System (ADS)

    Fontaine, J. R.

    The potential of computational fluid dynamics (CFD) for designing ventilation systems is shown through the simulation of five practical cases. The following examples are considered: capture of pollutants on a surface treating tank equipped with a unilateral suction slot in the presence of a disturbing air draft opposed to suction; dispersion of solid aerosols inside fume cupboards; performance comparison of two general ventilation systems in a silkscreen printing workshop; ventilation of a large open painting area; and oil fog removal inside a mechanical engineering workshop. Whereas the first two problems are analyzed through two-dimensional numerical simulations, the three other cases require three-dimensional modeling. For the surface treating tank case, numerical results are compared to laboratory experiment data. All simulations are carried out using EOL, a CFD software specially devised to deal with air quality problems in industrial ventilated premises. It contains many analysis tools to interpret the results in terms familiar to the industrial hygienist. Much experimental work has been carried out to validate the predictions of EOL for ventilation flows.

  5. Multivariate analysis: A statistical approach for computations

    NASA Astrophysics Data System (ADS)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, cluster evaluation in finance, etc., and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion about factor analysis (FA) in image retrieval methods and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult due to the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks such as DDoS attacks and network scanning.
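
    A toy sketch of the correlation-coefficient-matrix idea for traffic anomaly detection mentioned above: each time window is summarized by the correlation matrix of its features, and windows whose matrix deviates strongly (Frobenius norm) from a baseline are flagged. Features, the injected anomaly, and the threshold are all synthetic.

```python
# Toy correlation-matrix anomaly detector on synthetic traffic features.
import numpy as np

rng = np.random.default_rng(9)
n_windows, samples_per_window, n_features = 40, 200, 4

def window(anomalous=False):
    base = rng.normal(size=(samples_per_window, 1))
    x = base + 0.3 * rng.normal(size=(samples_per_window, n_features))
    if anomalous:                      # e.g. a scan decorrelates one feature
        x[:, -1] = rng.normal(size=samples_per_window)
    return x

# Baseline correlation matrix from a "normal" training period.
train = [window() for _ in range(20)]
baseline = np.mean([np.corrcoef(w, rowvar=False) for w in train], axis=0)

test = [window(anomalous=(i % 10 == 7)) for i in range(n_windows)]
for i, w in enumerate(test):
    dist = np.linalg.norm(np.corrcoef(w, rowvar=False) - baseline)
    if dist > 1.0:                     # threshold picked by eye for this toy
        print(f"window {i:2d}: anomaly suspected (distance {dist:.2f})")
```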

  6. Aluminium in Biological Environments: A Computational Approach

    PubMed Central

    Mujika, Jon I; Rezabal, Elixabete; Mercero, Jose M; Ruipérez, Fernando; Costa, Dominique; Ugalde, Jesus M; Lopez, Xabier

    2014-01-01

    The increased availability of aluminium in biological environments, due to human intervention in the last century, raises concerns on the effects that this so far “excluded from biology” metal might have on living organisms. Consequently, the bioinorganic chemistry of aluminium has emerged as a very active field of research. This review will focus on our contributions to this field, based on computational studies that can yield an understanding of aluminium biochemistry at the molecular level. Aluminium can interact with and be stabilized in biological environments by complexing with both low molecular mass chelants and high molecular mass peptides. The speciation of the metal is, nonetheless, dictated by the hydrolytic species dominant in each case, which vary according to the pH of the medium. In blood, citrate and serum transferrin are identified as the main low molecular mass and high molecular mass molecules interacting with aluminium. The complexation of aluminium by citrate and the subsequent changes exerted on the deprotonation pathways of its titratable groups will be discussed, along with the mechanisms for the intake and release of aluminium in serum transferrin at two pH conditions, physiological neutral and endosomal acidic. Aluminium can substitute for other metals, in particular magnesium, in buried protein sites and trigger conformational disorder and alteration of the protonation states of the protein's side chains. A detailed account of the interaction of aluminium with protein side chains will be given. Finally, it will be described how aluminium can exert oxidative stress by stabilizing superoxide radicals either as mononuclear aluminium or clustered in boehmite. The possibility of promotion of the Fenton reaction and production of hydroxyl radicals will also be discussed. PMID:24757505

  7. Unbiased QM/MM approach using accurate multipoles from a linear scaling DFT calculation with a systematic basis set

    NASA Astrophysics Data System (ADS)

    Mohr, Stephan; Genovese, Luigi; Ratcliff, Laura; Masella, Michel

    The quantum mechanics/molecular mechanis (QM/MM) method is a popular approach that allows to perform atomistic simulations using different levels of accuracy. Since only the essential part of the simulation domain is treated using a highly precise (but also expensive) QM method, whereas the remaining parts are handled using a less accurate level of theory, this approach allows to considerably extend the total system size that can be simulated without a notable loss of accuracy. In order to couple the QM and MM regions we use an approximation of the electrostatic potential based on a multipole expansion. The multipoles of the QM region are determined based on the results of a linear scaling Density Functional Theory (DFT) calculation using a set of adaptive, localized basis functions, as implemented within the BigDFT software package. As this determination comes at virtually no extra cost compared to the QM calculation, the coupling between QM and MM region can be done very efficiently. In this presentation I will demonstrate the accuracy of both the linear scaling DFT approach itself as well as of the approximation of the electrostatic potential based on the multipole expansion, and show some first QM/MM applications using the aforementioned approach.
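
    As a rough illustration of the coupling idea, the sketch below evaluates the electrostatic potential of a QM region at MM sites from atom-centred monopoles and dipoles; the charges, dipoles, and positions are placeholder values, and the actual implementation lives inside BigDFT rather than in a standalone script like this.

      import numpy as np

      def multipole_potential(r_qm, q, mu, r_mm):
          """Electrostatic potential (atomic units) at MM sites from QM atom-centred
          monopoles q and dipoles mu, truncated after the dipole term."""
          V = np.zeros(len(r_mm))
          for Ri, qi, mui in zip(r_qm, q, mu):
              d = r_mm - Ri                                  # vectors from QM atom to MM sites
              r = np.linalg.norm(d, axis=1)
              V += qi / r + np.einsum("ij,j->i", d, mui) / r**3
          return V

      # Placeholder QM multipoles (e.g. from a population analysis) and MM site positions, in bohr.
      r_qm = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
      q = np.array([-0.4, 0.4])
      mu = np.array([[0.0, 0.0, 0.1], [0.0, 0.0, -0.1]])
      r_mm = np.array([[5.0, 0.0, 0.0], [0.0, 6.0, 1.0]])

      print(multipole_potential(r_qm, q, mu, r_mm))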

  8. Mutations that Cause Human Disease: A Computational/Experimental Approach

    SciTech Connect

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein-coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single-locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to predict accurately the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date several attempts have been made toward predicting the effects of such mutations. The most successful of these is a computational approach called "Sorting Intolerant From Tolerant" (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which can be used to
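
    SIFT-style predictions rest on per-position conservation in an alignment of homologous sequences. The sketch below scores conservation with normalized Shannon entropy over alignment columns; it is a simplified stand-in for SIFT's actual scoring, and the alignment is made up for illustration.

      import math
      from collections import Counter

      def column_conservation(msa):
          """Per-column conservation scores (1 = fully conserved, 0 = maximally variable)
          for a list of equal-length aligned sequences, via normalized Shannon entropy."""
          scores = []
          for i in range(len(msa[0])):
              col = [seq[i] for seq in msa if seq[i] != "-"]
              counts = Counter(col)
              total = len(col)
              entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
              max_entropy = math.log2(min(20, total)) or 1.0
              scores.append(1.0 - entropy / max_entropy)
          return scores

      # Toy alignment: fully conserved columns score 1.0 (likely intolerant to substitution),
      # variable columns score lower (more likely tolerant).
      msa = ["MKVLA", "MKILS", "MKVLT", "MKLLA"]
      print([round(s, 2) for s in column_conservation(msa)])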

  9. What is Intrinsic Motivation? A Typology of Computational Approaches

    PubMed Central

    Oudeyer, Pierre-Yves; Kaplan, Frederic

    2007-01-01

    Intrinsic motivation, centrally involved in spontaneous exploration and curiosity, is a crucial concept in developmental psychology. It has been argued to be an essential mechanism for open-ended cognitive development in humans, and as such has attracted growing interest from developmental roboticists in recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches to intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and sometimes even inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics. PMID:18958277
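
    One family of computational formulations in such a typology rewards an agent for its own prediction errors (curiosity-driven exploration). Below is a minimal, self-contained sketch of that idea; the toy environment and linear forward model are assumptions for illustration, not a model from the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      def step(state, action):
          """Toy 1-D environment: next state depends nonlinearly on state and action."""
          return np.tanh(state + 0.5 * action) + 0.05 * rng.normal()

      # Linear forward model learned online; intrinsic reward = squared prediction error.
      w = np.zeros(2)          # model: s' ~ w[0]*s + w[1]*a
      lr, state = 0.1, 0.0
      for t in range(200):
          action = rng.uniform(-1, 1)
          predicted = w[0] * state + w[1] * action
          next_state = step(state, action)
          error = next_state - predicted
          intrinsic_reward = error ** 2                   # "curiosity" signal driving exploration
          w += lr * error * np.array([state, action])     # improve the forward model
          state = next_state

      print("final model weights:", w, "last intrinsic reward:", round(intrinsic_reward, 4))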

  10. Solving post-Newtonian accurate Kepler equation

    NASA Astrophysics Data System (ADS)

    Boetzel, Yannick; Susobhanan, Abhimanyu; Gopakumar, Achamveedu; Klein, Antoine; Jetzer, Philippe

    2017-08-01

    We provide an elegant way of solving analytically the third post-Newtonian (3PN) accurate Kepler equation, associated with the 3PN-accurate generalized quasi-Keplerian parametrization for compact binaries in eccentric orbits. An additional analytic solution is presented to check the correctness of our compact solution and we perform comparisons between our PN-accurate analytic solution and a very accurate numerical solution of the PN-accurate Kepler equation. We adapt our approach to compute crucial 3PN-accurate inputs that will be required to compute analytically both the time and frequency domain ready-to-use amplitude-corrected PN-accurate search templates for compact binaries in inspiralling eccentric orbits.
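
    For orientation, the Newtonian limit of such a Kepler equation, M = E - e sin E, is routinely solved numerically by Newton iteration; the sketch below shows that baseline solution only, since the 3PN-accurate equation treated in the paper contains additional terms not included here.

      import math

      def solve_kepler(mean_anomaly, e, tol=1e-14, max_iter=50):
          """Solve the classical Kepler equation M = E - e*sin(E) for the eccentric anomaly E
          by Newton-Raphson iteration (Newtonian limit of the PN-accurate equation)."""
          E = mean_anomaly if e < 0.8 else math.pi   # standard starting guess
          for _ in range(max_iter):
              f = E - e * math.sin(E) - mean_anomaly
              E -= f / (1.0 - e * math.cos(E))
              if abs(f) < tol:
                  break
          return E

      M, e = 1.2, 0.6
      E = solve_kepler(M, e)
      print(E, E - e * math.sin(E) - M)   # residual should be at machine-precision level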

  11. 4D laser camera for accurate patient positioning, collision avoidance, image fusion and adaptive approaches during diagnostic and therapeutic procedures.

    PubMed

    Brahme, Anders; Nyman, Peter; Skatt, Björn

    2008-05-01

    A four-dimensional (4D) laser camera (LC) has been developed for accurate patient imaging in diagnostic and therapeutic radiology. A complementary metal-oxide semiconductor camera images the intersection of a scanned fan-shaped laser beam with the surface of the patient and allows real-time recording of movements in a three-dimensional (3D) or four-dimensional (4D) format (3D + time). The LC system was first designed as an accurate patient setup tool during diagnostic and therapeutic applications but was found to be of much wider applicability as a general 4D photon "tag" for the surface of the patient in different clinical procedures. It is presently used as a 3D or 4D optical benchmark or tag for accurate delineation of the patient surface, as demonstrated for patient auto setup, breathing and heart motion detection. Furthermore, its future potential applications in gating, adaptive therapy, 3D or 4D image fusion between most imaging modalities, and image processing are discussed. It is shown that the LC system has a geometrical resolution of about 0.1 mm and that the rigid-body repositioning accuracy is about 0.5 mm below 20 mm displacements, 1 mm below 40 mm and better than 2 mm at 70 mm. This indicates a slight need for repeated repositioning when the initial error is larger than about 50 mm. The positioning accuracy with standard patient setup procedures for prostate cancer at Karolinska was found to be about 5-6 mm when independently measured using the LC system. The system was found valuable for positron emission tomography-computed tomography (PET-CT) in vivo tumor and dose delivery imaging, where it potentially may allow effective correction for breathing artifacts in 4D PET-CT and image fusion with lymph node atlases for accurate target volume definition in oncology. With an LC system in all imaging and radiation therapy rooms, auto setup during repeated diagnostic and therapeutic procedures may save around 5 min per session, increase accuracy and allow
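
    Rigid-body repositioning from such surface data amounts to aligning a measured point cloud to a reference surface. A minimal sketch of the standard SVD-based (Kabsch) alignment is shown below, with synthetic points standing in for laser-camera surface samples; the abstract does not describe the LC software at this level, so this is only an illustration of the underlying geometry.

      import numpy as np

      def rigid_align(P, Q):
          """Rotation R and translation t that best map point set P onto Q
          in the least-squares sense (Kabsch algorithm)."""
          Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
          H = (P - Pc).T @ (Q - Qc)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
          R = Vt.T @ D @ U.T
          t = Qc - R @ Pc
          return R, t

      # Synthetic "reference" surface samples and a displaced, slightly rotated "measured" set.
      rng = np.random.default_rng(2)
      P = rng.normal(size=(100, 3))
      angle = np.deg2rad(5.0)
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                         [np.sin(angle),  np.cos(angle), 0.0],
                         [0.0, 0.0, 1.0]])
      Q = P @ R_true.T + np.array([3.0, -1.0, 0.5])

      R, t = rigid_align(P, Q)
      print("recovered setup correction (translation):", np.round(t, 3))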

  12. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    SciTech Connect

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; Rossi, Simone

    2015-11-12

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  13. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach [A simple, stable, and accurate tetrahedral finite element for transient, nearly incompressible, linear and nonlinear elasticity: A dynamic variational multiscale approach

    DOE PAGES

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...

    2015-11-12

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  14. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
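
    The qualitative conclusion that high-degree nodes are more exposed can be reproduced with a simple SIS-type simulation on a scale-free graph. The sketch below is an illustrative stand-in, not the specific propagation model analyzed in the paper; the infection and cure rates are assumed values.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(3)
      G = nx.barabasi_albert_graph(1000, 3, seed=3)     # scale-free, Internet-like topology

      beta, gamma, steps = 0.05, 0.1, 100               # assumed infection and cure rates
      infected = {n: rng.random() < 0.01 for n in G}    # 1% initially infected (external seeds)

      for _ in range(steps):
          new_state = dict(infected)
          for n in G:
              if infected[n]:
                  if rng.random() < gamma:
                      new_state[n] = False               # cured
              else:
                  k_inf = sum(infected[m] for m in G[n]) # infected neighbours
                  if rng.random() < 1 - (1 - beta) ** k_inf:
                      new_state[n] = True                # newly infected
          infected = new_state

      # Infection prevalence as a function of node degree: higher degree -> higher risk.
      by_degree = {}
      for n, d in G.degree():
          by_degree.setdefault(d, []).append(infected[n])
      for d in sorted(by_degree)[:5]:
          print(f"degree {d}: {np.mean(by_degree[d]):.2f} infected fraction")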

  15. Critiquing: A Different Approach to Expert Computer Advice in Medicine

    PubMed Central

    Miller, Perry L.

    1984-01-01

    The traditional approach to computer-based advice in medicine has been to design systems which simulate a physician's decision process. This paper describes a different approach to computer advice in medicine: a critiquing approach. A critiquing system first asks how the physician is planning to manage his patient and then critiques that plan, discussing the advantages and disadvantages of the proposed approach, compared to other approaches which might be reasonable or preferred. Several critiquing systems are currently in different stages of implementation. The paper describes these systems and discusses the characteristics which make each domain suitable for critiquing. The critiquing approach may prove especially well-suited in domains where decisions involve a great deal of subjective judgement.

  16. Time-Accurate Computations of Free-Flight Aerodynamics of a Spinning Projectile With and Without Flow Control

    DTIC Science & Technology

    2006-09-01

    [Report excerpt; only fragments of the list of figures survive extraction: "Figure 5. Computed particle traces colored by velocity, jet on, M = 0.39"; "Figure 6. Computed velocity magnitudes at a given instant in time".]

  17. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S.; Shirley, Eric L.; Prendergast, David

    2017-03-01

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  18. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory

    DOE PAGES

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; ...

    2017-03-03

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  19. The role of accurate quantum mechanical computations in the assignment of vibrational spectra for unstable free radicals: H 2CN and F 2CN as test cases

    NASA Astrophysics Data System (ADS)

    Puzzarini, Cristina; Barone, Vincenzo

    2009-01-01

    The accuracy of anharmonic frequencies for semirigid free radicals obtained by a second-order perturbative treatment based on CCSD(T) force fields is investigated for the prototypical H2CN and F2CN radicals. B3LYP computations show that most of the DFT errors are related to the harmonic part of the force field, so that hybrid models, in which harmonic frequencies computed by coupled-cluster methods are combined with anharmonic contributions obtained from suitable density functionals, perform very well. This finding paves the way toward the computation of accurate vibrational frequencies for quite large unstable open-shell species of current biological and/or technological interest.
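
    The hybrid scheme amounts to adding the DFT anharmonic shift to the coupled-cluster harmonic frequencies. A tiny sketch with placeholder numbers (not values from the paper) is given below.

      # Hybrid anharmonic frequencies: CCSD(T) harmonic values corrected by the DFT
      # anharmonic shift, nu_hybrid = omega_CC + (nu_DFT - omega_DFT).
      # All numbers below are placeholders in cm^-1, not data from the study.
      omega_cc  = [2950.0, 1650.0, 1120.0]   # CCSD(T) harmonic frequencies
      omega_dft = [2935.0, 1640.0, 1115.0]   # B3LYP harmonic frequencies
      nu_dft    = [2805.0, 1605.0, 1098.0]   # B3LYP anharmonic (perturbative) frequencies

      nu_hybrid = [w_cc + (nu - w_dft) for w_cc, w_dft, nu in zip(omega_cc, omega_dft, nu_dft)]
      print(nu_hybrid)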

  20. A Modular Approach to Building Adult Computing Competencies: The Desktop Computer Series.

    ERIC Educational Resources Information Center

    Joseph, John J.

    The Fox Valley Technical Institute's approach to teaching adults about computers is based on three underlying premises: there is a widespread need for adult education related to desktop computers; the needs are not the same for everyone; and to be effective, a program that addresses these needs must be flexible, pertinent, and current. (Desktop is…

  1. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    PubMed

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking.
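
    The octree step assigns every reduced conformation point an octant identifier at a chosen depth and keeps the most populated octant. A minimal, MapReduce-flavoured sketch of that binning is shown below; the encoding of a full ligand pose into a single 3D point is assumed to happen upstream, and the bounding box, depth, and synthetic points are illustrative assumptions.

      from collections import Counter
      import numpy as np

      def octant_id(point, lo, hi, depth):
          """Encode a 3D point as an octree octant identifier string at the given depth."""
          key = ""
          lo, hi = np.array(lo, float), np.array(hi, float)
          for _ in range(depth):
              mid = (lo + hi) / 2.0
              bits = (point >= mid).astype(int)          # which half along x, y, z
              key += str(bits[0] * 4 + bits[1] * 2 + bits[2])
              lo = np.where(bits, mid, lo)
              hi = np.where(bits, hi, mid)
          return key

      # "Map": assign octant ids to reduced conformation points; "Reduce": count per octant.
      rng = np.random.default_rng(4)
      points = np.vstack([rng.normal(0.2, 0.05, (300, 3)),     # a dense cluster of poses
                          rng.uniform(-1, 1, (100, 3))])       # scattered poses
      ids = [octant_id(p, lo=(-1, -1, -1), hi=(1, 1, 1), depth=4) for p in points]
      densest, count = Counter(ids).most_common(1)[0]
      print("densest octant:", densest, "with", count, "conformations")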

  2. Highly Accurate Infrared Line Lists of SO2 Isotopologues Computed for Atmospheric Modeling on Venus and Exoplanets

    NASA Astrophysics Data System (ADS)

    Huang, X.; Schwenke, D.; Lee, T. J.

    2014-12-01

    Last year we reported a semi-empirical 32S16O2 spectroscopic line list (denoted Ames-296K) for atmospheric characterization of Venus and other exoplanetary environments. To facilitate the determination of sulfur isotopic ratios and sulfur chemistry models, we now present Ames-296K line lists for both the upgraded 626 isotopologue and 4 other symmetric isotopologues: 636, 646, 666 and 828. The line lists are computed on an ab initio potential energy surface refined with the most reliable high-resolution experimental data, using a high-quality CCSD(T)/aug-cc-pV(Q+d)Z dipole moment surface. The most valuable part of our approach is to provide "truly reliable" predictions (and alternatives) for unknown or hard-to-measure/analyze spectra. This strategy has guaranteed that the lists are the best available alternative for those wide spectral regions missing from spectroscopic databases such as HITRAN and GEISA, where only very limited data exist for 626/646 and no infrared data at all for 636/666 or other minor isotopologues. Our general line position accuracy up to 5000 cm-1 is 0.01 - 0.02 cm-1 or better. Most transition intensity deviations are less than 5%, compared to experimentally measured quantities. Note that we have solved a convergence issue and further improved the quality and completeness of the main isotopologue 626 list at 296K. We will compare the lists to available models in CDMS/JPL/HITRAN and discuss future mutually beneficial interactions between theoretical and experimental efforts.

  3. Realistic 3D computer model of the gerbil middle ear, featuring accurate morphology of bone and soft tissue structures.

    PubMed

    Buytaert, Jan A N; Salih, Wasil H M; Dierick, Manual; Jacobs, Patric; Dirckx, Joris J J

    2011-12-01

    In order to improve realism in middle ear (ME) finite-element modeling (FEM), comprehensive and precise morphological data are needed. To date, micro-scale X-ray computed tomography (μCT) recordings have been used as geometric input data for FEM models of the ME ossicles. Previously, attempts were made to obtain these data on ME soft tissue structures as well. However, due to the low X-ray absorption of soft tissue, the quality of these images is limited. Another popular approach is using histological sections as data for 3D models, delivering high in-plane resolution for the sections, but the technique is destructive in nature and registration of the sections is difficult. We combine data from high-resolution μCT recordings with data from high-resolution orthogonal-plane fluorescence optical-sectioning microscopy (OPFOS), both obtained on the same gerbil specimen. State-of-the-art μCT delivers high-resolution data on the 3D shape of ossicles and other ME bony structures, while the OPFOS setup generates data of unprecedented quality both on bone and soft tissue ME structures. Each of these techniques is tomographic and non-destructive and delivers sets of automatically aligned virtual sections. The datasets coming from the different techniques need to be registered with respect to each other. By combining both datasets, we obtain a complete high-resolution morphological model of all functional components in the gerbil ME. The resulting 3D model can be readily imported into FEM software and is made freely available to the research community. In this paper, we discuss the methods used, present the resulting merged model, and discuss the morphological properties of the soft tissue structures, such as muscles and ligaments.

  4. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-09-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups, and then it constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. Psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.
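
    A simplified sketch of the core computation is given below, using SciPy's Voronoi diagram of the pooled points and the fact that each Voronoi edge's normal lies along the vector between its two generating points. The reduction to eight qualitative direction classes and the toy point groups are illustrative assumptions, not the paper's exact procedure.

      import numpy as np
      from scipy.spatial import Voronoi

      def direction_relation(group_a, group_b):
          """Qualitative direction from group_a to group_b via normals of the Voronoi
          edges separating the two groups (each normal is the inter-generator vector)."""
          pts = np.vstack([group_a, group_b])
          n_a = len(group_a)
          vor = Voronoi(pts)
          normals = []
          for i, j in vor.ridge_points:                 # pairs of generating points
              if (i < n_a) != (j < n_a):                # ridge separates the two groups
                  a, b = (i, j) if i < n_a else (j, i)
                  v = pts[b] - pts[a]
                  normals.append(v / np.linalg.norm(v))
          mean_dir = np.mean(normals, axis=0)
          angle = np.degrees(np.arctan2(mean_dir[1], mean_dir[0])) % 360
          labels = ["east", "northeast", "north", "northwest",
                    "west", "southwest", "south", "southeast"]
          return labels[int((angle + 22.5) // 45) % 8]

      rng = np.random.default_rng(5)
      A = rng.normal([0, 0], 0.5, size=(20, 2))
      B = rng.normal([5, 5], 0.5, size=(20, 2))
      print(direction_relation(A, B))   # expected: "northeast"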

  5. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-06-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups, and then it constructs the Voronoi diagram between the two groups using the triangular network. After this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed. Finally, the quantitative direction relations are transformed into qualitative ones. Psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and the results are correct from the point of view of spatial cognition.

  6. A tale of three bio-inspired computational approaches

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2014-05-01

    I will provide a high-level walk-through of three computational approaches derived from Nature. First, evolutionary computation implements what we may call the "mother of all adaptive processes." Some variants on the basic algorithms will be sketched, and some lessons I have gleaned from three decades of working with EC will be covered. Next come neural networks, computational approaches that have long been studied as possible ways to make "thinking machines," an old dream of humankind, based upon the only known existing example of intelligence. I then give a brief overview of attempts to combine these two approaches, which some hope will allow us to evolve machines we could never hand-craft. Finally, I will touch on artificial immune systems, Nature's highly sophisticated defense mechanism, which has emerged in two major stages, the innate and the adaptive immune systems. This technology is finding applications in the cyber security world.

  7. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  8. Sensing and perception: Connectionist approaches to subcognitive computing

    NASA Technical Reports Server (NTRS)

    Skrrypek, J.

    1987-01-01

    New approaches to machine sensing and perception are presented. The motivation for cross-disciplinary studies of perception in terms of AI and the neurosciences is suggested. The question of computing architecture granularity as related to the global/local computation underlying perceptual function is considered, and examples of two environments are given. Finally, examples of using one of the environments, UCLA PUNNS, to study neural architectures for visual function are presented.

  9. Direct approach to Gaussian measurement based quantum computation

    NASA Astrophysics Data System (ADS)

    Ferrini, G.; Roslund, J.; Arzani, F.; Fabre, C.; Treps, N.

    2016-12-01

    In this work we introduce an original scheme for measurement-based quantum computation in continuous variables. Our approach does not necessarily rely on the use of ancillary cluster states to achieve its aim, but rather on the detection of a resource state in a suitable mode basis followed by digital post-processing, and it involves an optimization of the adjustable experimental parameters. After introducing the general method, we present some examples of its application to simple specific computations.

  10. An improved approach for accurate and efficient measurement of common carotid artery intima-media thickness in ultrasound images.

    PubMed

    Li, Qiang; Zhang, Wei; Guan, Xin; Bai, Yu; Jia, Jing

    2014-01-01

    The intima-media thickness (IMT) of the common carotid artery (CCA) can serve as an important indicator for the assessment of cardiovascular diseases (CVDs). In this paper an improved approach for automatic IMT measurement with low complexity and high accuracy is presented. 100 ultrasound images from 100 patients were tested with the proposed approach. The ground truth (GT) of the IMT was manually measured six times and averaged, while the automatically segmented (AS) IMT was computed by the algorithm proposed in this paper. The mean difference ± standard deviation between AS and GT IMT is 0.0231 ± 0.0348 mm, and the correlation coefficient between them is 0.9629. The computational time is 0.3223 s per image with MATLAB under Windows XP on an Intel Core 2 Duo CPU E7500 @ 2.93 GHz. The proposed algorithm has the potential to achieve real-time measurement under Visual Studio.
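
    The agreement figures quoted (mean difference ± standard deviation and the correlation coefficient) can be reproduced for any pair of AS/GT measurement vectors as sketched below; the values are made up, standing in for the study's 100 measurements.

      import numpy as np

      # Made-up IMT values (mm): automatic segmentation (AS) vs averaged manual ground truth (GT).
      gt  = np.array([0.62, 0.71, 0.55, 0.80, 0.66, 0.59, 0.74, 0.68])
      as_ = np.array([0.64, 0.69, 0.57, 0.83, 0.65, 0.61, 0.76, 0.70])

      diff = as_ - gt
      mean_diff = diff.mean()
      sd_diff = diff.std(ddof=1)
      r = np.corrcoef(as_, gt)[0, 1]

      print(f"mean difference = {mean_diff:.4f} mm, SD = {sd_diff:.4f} mm, r = {r:.4f}")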

  11. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing support for mission operations includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  12. Computational molecular biology approaches to ligand-target interactions

    PubMed Central

    Lupieri, Paola; Nguyen, Chuong Ha Hung; Bafghi, Zhaleh Ghaemi; Giorgetti, Alejandro; Carloni, Paolo

    2009-01-01

    Binding of small molecules to their targets triggers complex pathways. Computational approaches are key for predicting the molecular events involved in such cascades. Here we review current efforts at characterizing the molecular determinants in the largest membrane-bound receptor family, the G-protein-coupled receptors (GPCRs). We focus on odorant receptors, which constitute more than half of the GPCRs. The work presented in this review uncovers structural and energetic aspects of components of the cellular cascade. Finally, a computational approach in the context of radioactive boron-based antitumoral therapies is briefly described. PMID:20119480

  13. DisoMCS: Accurately Predicting Protein Intrinsically Disordered Regions Using a Multi-Class Conservative Score Approach

    PubMed Central

    Wang, Zhiheng; Yang, Qianqian; Li, Tonghua; Cong, Peisheng

    2015-01-01

    The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to furthering the understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which is a more accurate predictor of protein intrinsically disordered regions. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined on fragments located at either terminus of an ordered region adjoining a disordered region. The multi-class conservative score is then generated by sequence alignment against a known structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as features to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions, a residue is classified as ordered or disordered according to the optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS is very competitive in terms of prediction accuracy when compared with well-established publicly available disordered region predictors. They also indicated that our approach is more accurate when a query has higher homology with the knowledge database. Availability: DisoMCS is available at http://cal.tongji.edu.cn/disorder/. PMID:26090958

  14. Towards Accurate Microscopic Calculation of Solvation Entropies: Extending the Restraint Release Approach to Studies of Solvation Effects

    PubMed Central

    Singh, Nidhi; Warshel, Arieh

    2009-01-01

    effectively captures the physics of these entropic effects. The success of the current approach indicates that it should be applicable to the studies of the solvation entropies in the proteins and also, in examining hydrophobic effects. Thus, we believe that the RR approach provides a powerful tool for evaluating the corresponding contributions to the binding entropies and eventually, to the binding free energies. This holds promise for extending the information theory modeling to proteins and protein-ligand complexes in aqueous solutions and consequently, facilitating computer-aided drug design. PMID:19402609

  15. General Approach in Computing Sums of Products of Binary Sequences

    DTIC Science & Technology

    2011-12-08

    E. Kiliç (TOBB Economics and Technology University, Mathematics) and P. Stănică (pstanica@nps.edu), December 8, 2011. Abstract: In this paper we present a general approach for finding closed forms of sums of products of arbitrary sequences satisfying the same recurrence with different initial conditions. We successfully apply our technique to sums of products of such sequences with indices in

  16. Novel Approaches to Quantum Computation Using Solid State Qubits

    DTIC Science & Technology

    2007-12-31

    [Report documentation excerpt; only fragments survive extraction: a publication list including "... Han, A scheme for the teleportation of multiqubit quantum information via the control of many agents in a network, submitted to Phys. Lett. A"; "Phys. Rev. B 70, 094513 (2004)"; "C.-P. Yang, S.-I. Chu, and S. Han, Efficient many party controlled teleportation of multiqubit quantum ..."; and the reporting period June 1, 2001 - September 30, 2007.]

  17. Multireference correlation consistent composite approach [MR-ccCA]: toward accurate prediction of the energetics of excited and transition state chemistry.

    PubMed

    Oyedepo, Gbenga A; Wilson, Angela K

    2010-08-26

    The correlation consistent Composite Approach, ccCA [Deyonker, N. J.; Cundari, T. R.; Wilson, A. K. J. Chem. Phys. 2006, 124, 114104], has been demonstrated to predict accurate thermochemical properties of chemical species that can be described by a single configurational reference state, and at reduced computational cost as compared with ab initio methods such as CCSD(T) used in combination with large basis sets. We have developed three variants of a multireference equivalent of this successful theoretical model. The method, called the multireference correlation consistent composite approach (MR-ccCA), is designed to predict the thermochemical properties of reactive intermediates, excited state species, and transition states to within chemical accuracy (e.g., 1 kcal/mol for enthalpies of formation) of reliable experimental values. In this study, we have demonstrated the utility of MR-ccCA: (1) in the determination of the adiabatic singlet-triplet energy separations and enthalpies of formation for the ground states of a set of diradicals and unsaturated compounds, and (2) in the prediction of energetic barriers to internal rotation in ethylene and its heavier congener, disilene. Additionally, we have utilized MR-ccCA to predict the enthalpies of formation of the low-lying excited states of all the species considered. MR-ccCA is shown to give quantitative results without reliance upon empirically derived parameters, making it suitable for studying novel chemical systems with significant nondynamical correlation effects.

  18. Computational approach to the study of thermal spin crossover phenomena

    SciTech Connect

    Rudavskyi, Andrii; Broer, Ria; Sousa, Carmen

    2014-05-14

    The key parameters associated with the thermally induced spin crossover process have been calculated for a series of Fe(II) complexes with mono-, bi-, and tridentate ligands. Combining density functional theory calculations for the geometries and normal vibrational modes with highly correlated wave function methods for the energies allows us to accurately compute the entropy variation associated with the spin transition and the zero-point corrected energy difference between the low- and high-spin states. From these values, the transition temperature, T1/2, is estimated for different compounds.
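
    At the transition temperature the Gibbs free energies of the two spin states are equal, so T1/2 follows directly from the computed energy (enthalpy) difference and entropy variation. A tiny sketch with placeholder values, not results from the paper, is shown below.

      # Spin-crossover transition temperature from Delta_H = T * Delta_S at equilibrium.
      # Placeholder values, not data from the study.
      delta_H = 12.0e3   # high-spin minus low-spin enthalpy difference, J/mol
      delta_S = 60.0     # entropy variation (vibrational + spin degeneracy), J/(mol K)

      T_half = delta_H / delta_S
      print(f"T_1/2 = {T_half:.0f} K")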

  19. A multidisciplinary approach to solving computer related vision problems.

    PubMed

    Long, Jennifer; Helland, Magne

    2012-09-01

    This paper proposes a multidisciplinary approach to solving computer-related vision issues by including optometry as part of the problem-solving team. Computer workstation design is increasing in complexity. There are at least ten different professions that contribute to workstation design or provide advice to improve worker comfort, safety and efficiency. Optometrists have a role in identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, it is possible that advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer-related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration and examples of successful partnerships at a number of professional levels, including individual relationships between optometrists and ergonomists when they have mutual clients/patients, in undergraduate and postgraduate education, and in research. There is also scope for dialogue between optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve vision-related computer issues in a cohesive, rather than fragmented, way. Further exploration is required to understand the barriers to these professional relationships.

  20. A Global Approach to Accurate and Automatic Quantitative Analysis of NMR Spectra by Complex Least-Squares Curve Fitting

    NASA Astrophysics Data System (ADS)

    Martin, Y. L.

    The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species
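
    A minimal sketch of frequency-domain complex least-squares fitting of a single Lorentzian line is shown below, using SciPy; the integrated strategy described above handles many signals, baseline parameters, and phase simultaneously, which this toy example with synthetic data does not attempt.

      import numpy as np
      from scipy.optimize import least_squares

      def lorentzian(freq, f0, lam, amp, phase):
          """Complex Lorentzian line shape in the frequency domain."""
          return amp * np.exp(1j * phase) / (lam + 1j * 2 * np.pi * (freq - f0))

      # Synthetic spectrum: one line plus complex noise (placeholder parameters).
      rng = np.random.default_rng(6)
      freq = np.linspace(-50, 50, 2048)
      true = lorentzian(freq, f0=5.0, lam=3.0, amp=100.0, phase=0.3)
      spectrum = true + 0.2 * (rng.normal(size=freq.size) + 1j * rng.normal(size=freq.size))

      def residuals(p):
          r = spectrum - lorentzian(freq, *p)
          return np.concatenate([r.real, r.imag])      # fit real and imaginary parts jointly

      fit = least_squares(residuals, x0=[4.0, 2.0, 80.0, 0.0])
      print("f0, damping, amplitude, phase =", np.round(fit.x, 3))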

  1. New Theoretical Approaches for Human-Computer Interaction.

    ERIC Educational Resources Information Center

    Rogers, Yvonne

    2004-01-01

    Presents a critique of recent theoretical developments in the field of human-computer interaction (HCI) together with an overview of HCI practice. This chapter discusses why theoretically based approaches have had little impact on the practice of interaction design and suggests mechanisms to enable designers and researchers to better articulate…

  2. A Unified Computational Approach to Oxide Aging Processes

    SciTech Connect

    Bowman, D.J.; Fleetwood, D.M.; Hjalmarson, H.P.; Schultz, P.A.

    1999-01-27

    In this paper we describe a unified, hierarchical computational approach to aging and reliability problems caused by materials changes in the oxide layers of Si-based microelectronic devices. We apply this method to a particular low-dose-rate radiation effects problem

  3. Conformational dynamics of proanthocyanidins: physical and computational approaches

    Treesearch

    Fred L. Tobiason; Richard W. Hemingway; T. Hatano

    1998-01-01

    The interaction of plant polyphenols with proteins accounts for a good part of their commercial (e.g., leather manufacture) and biological (e.g., antimicrobial activity) significance. The interplay between observations of physical data such as crystal structure, NMR analyses, and time-resolved fluorescence with results of computational chemistry approaches has been...

  5. Accurate leg length measurement in total hip arthroplasty: a comparison of computer navigation and a simple manual measurement device.

    PubMed

    Ogawa, Kyoichi; Kabata, Tamon; Maeda, Toru; Kajino, Yoshitomo; Tsuchiya, Hiroyuki

    2014-06-01

    Several studies have shown that better placement of the acetabular cup and femoral stem can be achieved in total hip arthroplasty (THA) by using a computer navigation system rather than free-hand alignment methods. However, there have been no comparisons of the relevant clinical advantages of using computer navigation as opposed to manual intraoperative measurement devices. The purpose of this study is to determine whether the use of computer navigation can improve postoperative leg length discrepancy (LLD) compared to the use of a measurement device. We performed a retrospective study comparing 30 computer-assisted THAs with 40 THAs performed using a simple manual measurement device. The postoperative LLD was 3.0 mm (range, 0 to 8 mm) in the computer-assisted group and 2.9 mm (range, 0 to 10 mm) in the device group. No statistically significant difference was seen between the two groups. The results showed good equalization of leg lengths using both computed tomography-based navigation and the simple manual measurement device.

  6. Computing solvent-induced forces in the solvation approach called Semi Explicit Assembly

    NASA Astrophysics Data System (ADS)

    Brini, Emiliano; Hummel, Michelle H.; Coutsias, Evangelos A.; Fennell, Christopher J.; Dill, Ken A.

    2014-03-01

    Many biologically relevant processes (e.g., protein folding) are too big and slow to be simulated by computer methods that model atomically detailed water. Faster physical models of water are needed. We have developed an approach called Semi Explicit Assembly (SEA) [C.J. Fennell, C.W. Kehoe, K.A. Dill, PNAS, 108, 3234 (2011)]. It is physical because it uses pre-simulations of explicit-solvent models, and it is fast because at runtime we just combine the pre-simulated results in rapid computations. SEA has also now been proven physically accurate in two blind tests called SAMPL. Here, we describe the computation of solvation forces in SEA, so that this solvation procedure can be incorporated into standard molecular dynamics codes. We describe experimental tests.

  7. Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2006-01-01

    Application of the newly emerged space-time conservation element solution element (CESE) method to the compressible Navier-Stokes equations is studied. In contrast to Euler equation solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigation and validation. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the standpoint of acoustic wave propagation. Validations of the numerical solutions are performed by comparing with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.

  8. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    PubMed

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation for a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods on amino acids and with G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  9. Cloud Computing – A Unified Approach for Surveillance Issues

    NASA Astrophysics Data System (ADS)

    Rachana, C. R.; Banu, Reshma, Dr.; Ahammed, G. F. Ali, Dr.; Parameshachari, B. D., Dr.

    2017-08-01

    Cloud computing describes highly scalable resources provided as an external service via the Internet on a pay-per-use basis. From the economic point of view, the main attraction of cloud computing is that users only use what they need, and only pay for what they actually use. Resources are available for access from the cloud at any time, and from any location, through networks. Cloud computing is gradually replacing the traditional Information Technology Infrastructure. Securing data is one of the leading concerns and biggest issues for cloud computing. Privacy of information is always a crucial point, especially when an individual's personal or sensitive information is being stored in the organization. It is indeed true that today, cloud authorization systems are not robust enough. This paper presents a unified approach for analyzing the various security issues and techniques to overcome the challenges in the cloud environment.

  10. FILMPAR: A parallel algorithm designed for the efficient and accurate computation of thin film flow on functional surfaces containing micro-structure

    NASA Astrophysics Data System (ADS)

    Lee, Y. C.; Thompson, H. M.; Gaskell, P. H.

    2009-12-01

    FILMPAR is a highly efficient and portable parallel multigrid algorithm for solving a discretised form of the lubrication approximation to three-dimensional, gravity-driven, continuous thin film free-surface flow over substrates containing micro-scale topography. While generally applicable to problems involving heterogeneous and distributed features, for illustrative purposes the algorithm is benchmarked on a distributed memory IBM BlueGene/P computing platform for the case of flow over a single trench topography, enabling direct comparison with complementary experimental data and existing serial multigrid solutions. Parallel performance is assessed as a function of the number of processors employed and shown to lead to super-linear behaviour for the production of mesh-independent solutions. In addition, the approach is used to solve for the case of flow over a complex inter-connected topographical feature, and a description is provided of how FILMPAR could be adapted relatively simply to solve a wider class of related thin film flow problems.

    Program summary: Program title: FILMPAR. Catalogue identifier: AEEL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 530 421. No. of bytes in distributed program, including test data, etc.: 1 960 313. Distribution format: tar.gz. Programming language: C++ and MPI. Computer: Desktop, server. Operating system: Unix/Linux, Mac OS X. Has the code been vectorised or parallelised?: Yes, tested with up to 128 processors. RAM: 512 MBytes. Classification: 12. External routines: GNU C/C++, MPI. Nature of problem: Thin film flows over functional substrates containing well-defined single and complex topographical features are of enormous significance, having a wide variety of engineering

  11. iPE-MMR: An integrated approach to accurately assign monoisotopic precursor masses to tandem mass spectrometric data

    PubMed Central

    Jung, Hee-Jung; Purvine, Samuel O.; Kim, Hokeun; Petyuk, Vladislav A.; Hyung, Seok-Won; Monroe, Matthew E.; Mun, Dong-Gi; Kim, Kyong-Chul; Park, Jong-Moon; Kim, Su-Jin; Tolic, Nikola; Slysz, Gordon W.; Moore, Ronald J.; Zhao, Rui; Adkins, Joshua N.; Anderson, Gordon A.; Lee, Hookeun; Camp, David G.; Yu, Myeong-Hee; Smith, Richard D.; Lee, Sang-Won

    2010-01-01

    Accurate assignment of monoisotopic precursor masses to tandem mass spectrometric (MS/MS) data is a fundamental and critically important step for successful peptide identifications in mass spectrometry based proteomics. Here we describe an integrated approach that combines three previously reported methods of treating MS/MS data for precursor mass refinement. This combined method, “integrated Post-Experiment Monoisotopic Mass Refinement” (iPE-MMR), integrates three steps: 1) generation of refined MS/MS data by DeconMSn; 2) additional refinement of the resultant MS/MS data by a modified version of PE-MMR; and 3) elimination of systematic errors in precursor masses using DtaRefinery. iPE-MMR is the first method that utilizes all MS information from multiple MS scans of a precursor ion, including multiple charge states in an MS scan, to determine precursor mass. By combining these methods, iPE-MMR increases sensitivity in peptide identification and provides increased accuracy when applied to complex high-throughput proteomics data. PMID:20863060

  12. Cloud computing approaches to accelerate drug discovery value chain.

    PubMed

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in technology have helped high-throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e., target identification, target validation, lead identification and lead validation) can generate data on the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze these data to identify informational tags. This need in turn poses challenges to computer scientists to offer matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SaaS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, integration of cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good to have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, Discovery-Cloud would be best suited to manage drug discovery and clinical development data generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  13. Computational intelligence approaches for pattern discovery in biological systems.

    PubMed

    Fogel, Gary B

    2008-07-01

    Biology, chemistry and medicine are faced by tremendous challenges caused by an overwhelming amount of data and the need for rapid interpretation. Computational intelligence (CI) approaches such as artificial neural networks, fuzzy systems and evolutionary computation are being used with increasing frequency to contend with this problem, in light of noise, non-linearity and temporal dynamics in the data. Such methods can be used to develop robust models of processes either on their own or in combination with standard statistical approaches. This is especially true for database mining, where modeling is a key component of scientific understanding. This review provides an introduction to current CI methods, their application to biological problems, and concludes with a commentary about the anticipated impact of these approaches in bioinformatics.

  14. Magnetic resonance imaging: an accurate, radiation-free, alternative to computed tomography for the primary imaging and three-dimensional reconstruction of the bony orbit.

    PubMed

    Schmutz, Beat; Rahmel, Benjamin; McNamara, Zeb; Coulthard, Alan; Schuetz, Michael; Lynham, Anthony

    2014-03-01

    To determine the extent to which the accuracy of magnetic resonance imaging (MRI) based virtual 3-dimensional (3D) models of the intact orbit can approach that of the gold standard, computed tomography (CT) based models. The goal was to determine whether MRI is a viable alternative to CT scans in patients with isolated orbital fractures and penetrating eye injuries, pediatric patients, and patients requiring multiple scans in whom radiation exposure is ideally limited. Patients who presented with unilateral orbital fractures to the Royal Brisbane and Women's Hospital from March 2011 to March 2012 were recruited to participate in this cross-sectional study. The primary predictor variable was the imaging technique (MRI vs CT). The outcome measurements were orbital volume (primary outcome) and geometric intraorbital surface deviations (secondary outcome) between the MRI- and CT-based 3D models. Eleven subjects (9 male) were enrolled. The patients' mean age was 30 years. On average, the MRI models underestimated the orbital volume of the CT models by 0.50 ± 0.19 cm(3). The average intraorbital surface deviation between the MRI and CT models was 0.34 ± 0.32 mm, with 78 ± 2.7% of the surface within a tolerance of ±0.5 mm. The volumetric differences of the MRI models are comparable to reported results from CT models. The intraorbital MRI surface deviations are smaller than the accepted tolerance for orbital surgical reconstructions. Therefore, the authors believe that MRI is an accurate radiation-free alternative to CT for the primary imaging and 3D reconstruction of the bony orbit. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
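
    The comparison reported above rests on two simple summary statistics: the paired MRI-CT volume difference and the fraction of surface points within a deviation tolerance. The sketch below computes both for hypothetical paired measurements; all numbers are made-up placeholders, not the study data.

```python
# Illustrative summary statistics for paired MRI/CT orbit models; the arrays
# below are hypothetical placeholders, not the values reported in the study.
import numpy as np

mri_volumes = np.array([24.1, 26.3, 23.8])   # cm^3, hypothetical MRI-based volumes
ct_volumes = np.array([24.6, 26.8, 24.3])    # cm^3, hypothetical CT-based volumes

diff = mri_volumes - ct_volumes
print(f"mean volume difference: {diff.mean():.2f} ± {diff.std(ddof=1):.2f} cm^3")

# Surface deviations between registered MRI and CT meshes (mm), synthetic values
deviations = np.abs(np.random.default_rng(0).normal(0.0, 0.4, size=10000))
tolerance = 0.5  # mm
within = (deviations <= tolerance).mean() * 100
print(f"{within:.1f}% of the surface lies within ±{tolerance} mm")
```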

  15. Accurate computations of the structures and binding energies of the imidazole⋯benzene and pyrrole⋯benzene complexes

    NASA Astrophysics Data System (ADS)

    Ahnen, Sandra; Hehn, Anna-Sophia; Vogiatzis, Konstantinos D.; Trachsel, Maria A.; Leutwyler, Samuel; Klopper, Wim

    2014-09-01

    Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller-Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation.
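
    For readers unfamiliar with the quantities being benchmarked, the interaction energy of a complex AB and its counterpoise correction for basis-set superposition error take the standard textbook forms below; the composite triples correction shown is the generic form commonly used in such benchmark studies, not a formula quoted verbatim from this paper.

```latex
% Interaction energy of a complex AB and its counterpoise (CP) correction,
% with the monomers evaluated in the full dimer basis to remove BSSE.
\begin{align}
  \Delta E_{\mathrm{int}} &= E_{AB} - E_{A} - E_{B},\\
  \Delta E_{\mathrm{int}}^{\mathrm{CP}} &= E_{AB}(AB) - E_{A}(AB) - E_{B}(AB).
\end{align}
% Composite estimate of the kind used in benchmark work: an explicitly
% correlated CCSD-F12 energy plus a triples correction from a smaller basis.
\begin{equation}
  E_{\mathrm{CCSD(T)}} \approx E_{\text{CCSD-F12}}
    + \bigl(E_{\mathrm{CCSD(T)}} - E_{\mathrm{CCSD}}\bigr)_{\text{small basis}}
\end{equation}
```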

  16. Computing 3-D steady supersonic flow via a new Lagrangian approach

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Liou, M.-S.

    1993-01-01

    The new Lagrangian method introduced by Loh and Hui (1990) is extended for 3-D steady supersonic flow computation. Details of the conservation form, the implementation of the local Riemann solver, and the Godunov and the high resolution TVD schemes are presented. The new approach is robust yet accurate, capable of handling complicated geometry and interactions between discontinuous waves. It keeps all the advantages claimed in the 2-D method of Loh and Hui, e.g., crisp resolution for a slip surface (contact discontinuity) and automatic grid generation along the stream.

  17. Numerical ray-tracing approach with laser intensity distribution for LIDAR signal power function computation

    NASA Astrophysics Data System (ADS)

    Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing

    2016-10-01

    We have developed a new numerical ray-tracing approach for LIDAR signal power function computation, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is relatively more accurate and flexible than previous methods. We emphatically discuss the relationship between the inclined angle and the dynamic range of detector output signal in biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been successfully proved by comparison with real measurements.
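
    The signal power function referred to above is conventionally written as the single-scattering lidar equation; the generic form below uses standard textbook notation rather than the authors' exact parameterization, and it is the overlap factor O(R) that a ray-tracing treatment with a measured laser intensity distribution refines.

```latex
% Single-scattering lidar equation for the received power from range R:
% P0 is the emitted power, K groups system constants, O(R) is the geometric
% overlap (crossover) function evaluated by ray tracing, beta(R) is the
% backscatter coefficient, and alpha(r) is the extinction coefficient.
\begin{equation}
  P(R) = P_0 \, K \, \frac{O(R)}{R^{2}} \, \beta(R)\,
         \exp\!\left(-2 \int_0^{R} \alpha(r)\, \mathrm{d}r\right)
\end{equation}
```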

  19. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  20. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  1. Parallel computing alters approaches, raises integration challenges in reservoir modeling

    SciTech Connect

    Shiralkar, G.S.; Volz, R.F.; Stephenson, R.E.; Valle, M.J.; Hird, K.B.

    1996-05-20

    Parallel computing is emerging as an important force in reservoir characterization, with the potential of altering the way one approaches reservoir modeling. In just hours, it is possible to routinely simulate the fluid flow in reservoir models 10 times larger than the largest studies conducted previously within Amoco. Although parallel computing provides solutions to reservoir characterization problems not possible in the past, such a state-of-the-art technology also raises several new problems, including the need to handle large amounts of data and data integration. This paper presents a reservoir study recently conducted by Amoco providing a showcase for these emerging technologies.

  2. The DYNAMO Simulation Language--An Alternate Approach to Computer Science Education.

    ERIC Educational Resources Information Center

    Bronson, Richard

    1986-01-01

    Suggests the use of computer simulation of continuous systems as a problem solving approach to computer languages. Outlines the procedures that the system dynamics approach employs in computer simulations. Explains the advantages of the special purpose language, DYNAMO. (ML)

  3. Proteome analysis of Desulfovibrio desulfuricans G20 mutants using the accurate mass and time (AMT) tag approach.

    PubMed

    Luo, Qingwei; Hixson, Kim K; Callister, Steven J; Lipton, Mary S; Morris, Brandon E L; Krumholz, Lee R

    2007-08-01

    Abundance values obtained from direct LC-MS analyses were used to compare the proteomes of six transposon-insertion mutants of Desulfovibrio desulfuricans G20, the lab strain (G20lab) and a sediment-adapted strain (G20sediment). Three mutations were in signal transduction histidine kinases, and three mutations were in other regulatory proteins. The high-throughput accurate mass and time (AMT) tag proteomic approach was utilized to analyze the proteomes. A total of 1318 proteins was identified with high confidence, approximately 35% of all predicted proteins in the D. desulfuricans G20 genome. Proteins from all functional categories were identified. Significant differences in the abundance of 30 proteins were detected between the G20lab strain and the G20sediment strain. Abundances of proteins for energy metabolism, ribosomal synthesis, membrane biosynthesis, transport, and flagellar synthesis were affected in the mutants. Specific examples of proteins down-regulated in mutants include a putative tungstate transport system substrate-binding protein and several proteins related to energy production, for example, 2-oxoacid:acceptor oxidoreductase, cytochrome c-553, and formate acetyltransferase. In addition, several signal transduction mechanism proteins were regulated in one mutant, and the abundances of ferritin and hybrid cluster protein were reduced in another mutant. However, the similar abundance of universal stress proteins, heat shock proteins, and chemotaxis proteins in the mutants revealed that regulation of chemotactic behavior and stress regulation might not be observed under our growth conditions. This study provides the first proteomic overview of several sediment fitness mutants of G20, and evidence for the difference between lab strains and sediment-adapted strains at the protein level.

  4. On the accurate direct computation of the isothermal compressibility for normal quantum simple fluids: application to quantum hard spheres.

    PubMed

    Sesé, Luis M

    2012-06-28

    A systematic study of the direct computation of the isothermal compressibility of normal quantum fluids is presented by analyzing the solving of the Ornstein-Zernike integral (OZ2) equation for the pair correlations between the path-integral necklace centroids. A number of issues related to the accuracy that can be achieved via this sort of procedure have been addressed, paying particular attention to the finite-N effects and to the definition of significant error bars for the estimates of isothermal compressibilities. Extensive path-integral Monte Carlo computations for the quantum hard-sphere fluid (QHS) have been performed in the (N, V, T) ensemble under temperature and density conditions for which dispersion effects dominate the quantum behavior. These computations have served to obtain the centroid correlations, which have been processed further via the numerical solving of the OZ2 equation. To do so, Baxter-Dixon-Hutchinson's variational procedure, complemented with Baumketner-Hiwatari's grand-canonical corrections, has been used. The virial equation of state has also been obtained and several comparisons between different versions of the QHS equation of state have been made. The results show the reliability of the procedure based on isothermal compressibilities discussed herein, which can then be regarded as a useful and quick means of obtaining the equation of state for fluids under quantum conditions involving strong repulsive interactions.
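
    The quantity extracted from the centroid pair correlations is the isothermal compressibility via the standard compressibility equation; the generic relation is reproduced below in textbook form, not as the paper's specific OZ2 closure or finite-N correction scheme.

```latex
% Compressibility equation linking the pair correlation function g(r)
% (here, between path-integral necklace centroids) to the isothermal
% compressibility kappa_T at number density rho and temperature T:
\begin{equation}
  \rho\, k_{B} T\, \kappa_{T}
    \;=\; 1 + \rho \int \bigl[g(r) - 1\bigr]\, \mathrm{d}\mathbf{r}
    \;=\; S(k \to 0)
\end{equation}
```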

  5. On the accurate direct computation of the isothermal compressibility for normal quantum simple fluids: Application to quantum hard spheres

    NASA Astrophysics Data System (ADS)

    Sesé, Luis M.

    2012-06-01

    A systematic study of the direct computation of the isothermal compressibility of normal quantum fluids is presented by analyzing the solving of the Ornstein-Zernike integral (OZ2) equation for the pair correlations between the path-integral necklace centroids. A number of issues related to the accuracy that can be achieved via this sort of procedure have been addressed, paying particular attention to the finite-N effects and to the definition of significant error bars for the estimates of isothermal compressibilities. Extensive path-integral Monte Carlo computations for the quantum hard-sphere fluid (QHS) have been performed in the (N, V, T) ensemble under temperature and density conditions for which dispersion effects dominate the quantum behavior. These computations have served to obtain the centroid correlations, which have been processed further via the numerical solving of the OZ2 equation. To do so, Baxter-Dixon-Hutchinson's variational procedure, complemented with Baumketner-Hiwatari's grand-canonical corrections, has been used. The virial equation of state has also been obtained and several comparisons between different versions of the QHS equation of state have been made. The results show the reliability of the procedure based on isothermal compressibilities discussed herein, which can then be regarded as a useful and quick means of obtaining the equation of state for fluids under quantum conditions involving strong repulsive interactions.

  6. Protein Engineering by Combined Computational and In Vitro Evolution Approaches.

    PubMed

    Rosenfeld, Lior; Heyne, Michael; Shifman, Julia M; Papo, Niv

    2016-05-01

    Two alternative strategies are commonly used to study protein-protein interactions (PPIs) and to engineer protein-based inhibitors. In one approach, binders are selected experimentally from combinatorial libraries of protein mutants that are displayed on a cell surface. In the other approach, computational modeling is used to explore an astronomically large number of protein sequences to select a small number of sequences for experimental testing. While both approaches have some limitations, their combination produces superior results in various protein engineering applications. Such applications include the design of novel binders and inhibitors, the enhancement of affinity and specificity, and the mapping of binding epitopes. The combination of these approaches also aids in the understanding of the specificity profiles of various PPIs.

  7. Computational modeling approaches to the dynamics of oncolytic viruses

    PubMed Central

    Wodarz, Dominik

    2016-01-01

    Replicating oncolytic viruses represent a promising treatment approach against cancer, specifically targeting the tumor cells. Significant progress has been made through experimental and clinical studies. Besides these approaches, however, mathematical models can be useful when analyzing the dynamics of virus spread through tumors, because the interactions between a growing tumor and a replicating virus are complex and nonlinear, making them difficult to understand by experimentation alone. Mathematical models have provided significant biological insight into the field of virus dynamics, and similar approaches can be adopted to study oncolytic viruses. The review discusses this approach and highlights some of the challenges that need to be overcome in order to build mathematical and computation models that are clinically predictive. PMID:27001049
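
    As a hedged illustration of the kind of mathematical model the review discusses, the sketch below integrates a generic two-population ODE system for uninfected and infected tumor cells; the parameter values and model form are arbitrary placeholders, not a specific model endorsed by the review.

```python
# Minimal, generic ODE model of oncolytic virus dynamics: x = uninfected tumor
# cells, y = infected tumor cells. Parameters are arbitrary placeholders.
from scipy.integrate import solve_ivp

r, K = 0.5, 1.0e9      # tumor growth rate and carrying capacity
beta = 1.5e-9          # infection rate
delta = 0.4            # death rate of infected cells

def rhs(t, state):
    x, y = state
    dx = r * x * (1 - (x + y) / K) - beta * x * y
    dy = beta * x * y - delta * y
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 100.0), [1.0e8, 1.0e4])
x_end, y_end = sol.y[:, -1]
print(f"uninfected cells at t=100: {x_end:.3e}, infected: {y_end:.3e}")
```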

  8. A novel cost-effective computer-assisted imaging technology for accurate placement of thoracic pedicle screws.

    PubMed

    Abe, Yuichiro; Ito, Manabu; Abumi, Kuniyoshi; Kotani, Yoshihisa; Sudo, Hideki; Minami, Akio

    2011-11-01

    Use of computer-assisted spine surgery (CASS) technologies, such as navigation systems, to improve the accuracy of pedicle screw (PS) placement is increasingly popular. Despite their benefits, previous CASS systems are too expensive to be ubiquitously employed, and more affordable and portable systems are desirable. The aim of this study was to introduce a novel and affordable computer-assisted technique that 3-dimensionally visualizes anatomical features of the pedicles and assists in PS insertion. The authors have termed this the 3D-visual guidance technique for inserting pedicle screws (3D-VG TIPS). The 3D-VG technique for placing PSs requires only a consumer-class computer with an inexpensive 3D DICOM viewer; other special equipment is unnecessary. Preoperative CT data of the spine were collected for each patient using the 3D-VG TIPS. In this technique, the anatomical axis of each pedicle can be analyzed by volume-rendered 3D models, as with existing navigation systems, and both the ideal entry point and the trajectory of each PS can be visualized on the surface of 3D-rendered images. Intraoperative guidance slides are made from these images and displayed on a TV monitor in the operating room. The surgeon can insert PSs according to these guidance slides. The authors enrolled 30 patients with adolescent idiopathic scoliosis (AIS) who underwent posterior fusion with segmental screw fixation for validation of this technique. The novel technique allowed surgeons, from office or home, to evaluate the precise anatomy of each pedicle and the risks of screw misplacement, and to perform 3D preoperative planning for screw placement on their own computer. Looking at both 3D guidance images on a TV monitor and the bony structures of the posterior elements in each patient in the operating theater, surgeons were able to determine the best entry point for each PS with ease and confidence. Using the current technique, the screw malposition rate was 4.5% in the thoracic

  9. Beyond the Melnikov method: A computer assisted approach

    NASA Astrophysics Data System (ADS)

    Capiński, Maciej J.; Zgliczyński, Piotr

    2017-01-01

    We present a Melnikov type approach for establishing transversal intersections of stable/unstable manifolds of perturbed normally hyperbolic invariant manifolds (NHIMs). The method is based on a new geometric proof of the normally hyperbolic invariant manifold theorem, which establishes the existence of a NHIM, together with its associated invariant manifolds and bounds on their first and second derivatives. We do not need to know the explicit formulas for the homoclinic orbits prior to the perturbation. We also do not need to compute any integrals along such homoclinics. All needed bounds are established using rigorous computer assisted numerics. Lastly, and most importantly, the method establishes intersections for an explicit range of parameters, and not only for perturbations that are 'small enough', as is the case in the classical Melnikov approach.

  10. Computer Programs. A Systems Approach to Placement and Follow-Up: A Computer Model.

    ERIC Educational Resources Information Center

    Jones, Charles B.

    The computer programs utilized in a systems approach to job placement and followup for students in the public high schools of the Bryan Independent School District of Bryan, Texas, are presented in this document. The programs are for record update, delete, change format, followup summary, followup detail, roster, mailer, and nonvocational followup…

  11. The Visualization Management System Approach To Visualization In Scientific Computing

    NASA Astrophysics Data System (ADS)

    Butler, David M.; Pendley, Michael H.

    1989-09-01

    We introduce the visualization management system (ViMS), a new approach to the development of software for visualization in scientific computing (ViSC). The conceptual foundation for a ViMS is an abstract visualization model which specifies a class of geometric objects, the graphic representations of the objects and the operations on both. A ViMS provides a modular implementation of its visualization model. We describe ViMS requirements and a model-independent ViMS architecture. We briefly describe the vector bundle visualization model and the visualization taxonomy it generates. We conclude by summarizing the benefits of the ViMS approach.

  12. Style: A Computational and Conceptual Blending-Based Approach

    NASA Astrophysics Data System (ADS)

    Goguen, Joseph A.; Harrell, D. Fox

    This chapter proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, including interactive multimedia poetry, although the approach generalizes to other media. The central idea is to generate multimedia content and analyze style in terms of blending principles, based on our finding that different principles from those of common sense blending are often needed for some contemporary poetic metaphors.

  13. A computer-aided approach to nonlinear control synthesis

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Anthony, Tobin

    1988-01-01

    The major objective of this project is to develop a computer-aided approach to nonlinear stability analysis and nonlinear control system design. This goal is to be achieved by refining the describing function method as a synthesis tool for nonlinear control design. The interim report outlines the approach taken in this study to meet these goals, including an introduction to the INteractive Controls Analysis (INCA) program, which was instrumental in meeting these study objectives. A single-input describing function (SIDF) design methodology was developed in this study; coupled with the software constructed in this study, the results of this project provide a comprehensive tool for the design and integration of nonlinear control systems.

  14. Computer-based Approaches for Training Interactive Digital Map Displays

    DTIC Science & Technology

    2005-09-01

    Five computer-based training approaches for learning digital skills were examined, including guided discovery, guided exploratory training, and exploratory learning, through to the other extreme of letting Soldiers learn a digital interface on their own. The research reported here examined these two conditions and three other

  15. A computational approach for the health care market.

    PubMed

    Montefiori, Marcello; Resta, Marina

    2009-12-01

    In this work we analyze the market for health care through a computational approach that relies on Kohonen's Self-Organizing Maps, and we observe the competition dynamics of health care providers versus those of patients. As a result, we offer a new tool addressing the issue of hospital behaviour and demand mechanism modelling, which combines a robust theoretical implementation with a tool of strong graphical impact.

  16. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    NASA Astrophysics Data System (ADS)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge, but also because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long term preservation of software presents some challenges. Software often requires a specific technology stack to operate. This can include software, operating systems and hardware dependencies. One past approach to preserve computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach to preserve computational capabilities is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This forward-looking dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.

  17. WSRC approach to validation of criticality safety computer codes

    SciTech Connect

    Finch, D.R.; Mincey, J.F.

    1991-12-31

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (K_eff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope ²³⁶U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.

  18. WSRC approach to validation of criticality safety computer codes

    SciTech Connect

    Finch, D.R.; Mincey, J.F.

    1991-01-01

    Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (K_eff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope ²³⁶U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.

  19. Accurate heat of formation for fully hydrided LaNi5 via the all-electron FLAPW approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yu-Jun; Freeman, A. J.

    2003-03-01

    It is known that the heat of formation of La2Ni10H14, ΔH_f, is overestimated by 50% or more when a pseudopotential approach is employed (Tatsumi et al., PRB 64, 184105 (2001)). Does this signify a failure of first-principles total energy calculations? Here, we employ the full-potential linearized augmented plane wave (FLAPW) method (Wimmer, Krakauer, Weinert, and Freeman, PRB 24, 864 (1981)) within both the generalized gradient approximation (GGA) and the local density approximation (LDA), with a highly precise treatment of the total energy of the H2 molecule owing to its critical role in the calculation of ΔH_f. The calculated ΔH_f (-31.1 kJ/mol-H2) and geometric structure within the GGA are in excellent agreement with experiment (~ -32 kJ/mol-H2). While LDA calculations underestimate the volume of LaNi5 by 10.4%, the final value of ΔH_f (-31.2 kJ/mol-H2) is also in excellent agreement with experiment. These results show the success rather than failure of first-principles calculations. The electronic properties indicate that charge transfer from the interstitial region to the H atoms stabilizes the fully hydrided LaNi5.
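
    The quantity quoted in kJ/mol-H2 follows from the total energies of the hydride, the host alloy, and molecular hydrogen; the generic definition below uses the 7 H2 per La2Ni10H14 formula unit implied by the stoichiometry and neglects zero-point and thermal contributions. It is a standard textbook relation, not an equation copied from the paper.

```latex
% Heat of formation of the fully hydrided phase per mole of absorbed H2
% (La2Ni10H14 = 2 LaNi5 + 7 H2), neglecting zero-point and thermal terms:
\begin{equation}
  \Delta H_{f} \;\approx\; \frac{1}{7}\Bigl[\, E(\mathrm{La_2Ni_{10}H_{14}})
      \;-\; E(\mathrm{La_2Ni_{10}}) \;-\; 7\, E(\mathrm{H_2}) \Bigr]
\end{equation}
```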

  20. Computation of Accurate Activation Barriers for Methyl-Transfer Reactions of Sulfonium and Ammonium Salts in Aqueous Solution.

    PubMed

    Gunaydin, Hakan; Acevedo, Orlando; Jorgensen, William L; Houk, K N

    2007-05-01

    The energetics of methyl-transfer reactions from dimethylammonium, tetramethylammonium, and trimethylsulfonium to dimethylamine were computed with density functional theory, MP2, CBS-QB3, and quantum mechanics/molecular mechanics (QM/MM) Monte Carlo methods. At the CBS-QB3 level, the gas-phase activation enthalpies are computed to be 9.9, 15.3, and 7.9 kcal/mol, respectively. MP2/6-31+G(d,p) activation enthalpies are in best agreement with the CBS-QB3 results. The effects of aqueous solvation on these reactions were studied with polarizable continuum model, generalized Born/surface area (GB/SA), and QM/MM Monte Carlo simulations utilizing free-energy perturbation theory in which the PDDG/PM3 semiempirical Hamiltonian for the QM and explicit TIP4P water molecules in the MM region were used. In the aqueous phase, all of these reactions proceed more slowly when compared to the gas phase, since the charged reactants are stabilized more than the transition structure geometries with delocalized positive charges. In order to obtain the aqueous-phase activation free energies, the gas-phase activation free energies were corrected with the solvation free energies obtained from single-point conductor-like polarizable continuum model and GB/SA calculations for the stationary points along the reaction coordinate.
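
    The final step described, correcting gas-phase activation free energies with single-point solvation free energies from continuum or GB/SA calculations, amounts to the usual thermodynamic cycle; the generic form is shown below and is not quoted verbatim from the paper.

```latex
% Thermodynamic-cycle estimate of the aqueous activation free energy from the
% gas-phase barrier and solvation free energies of the transition structure
% (TS) and the separated reactants (R):
\begin{equation}
  \Delta G^{\ddagger}_{\mathrm{aq}} \;\approx\;
  \Delta G^{\ddagger}_{\mathrm{gas}}
  \;+\; \Delta G_{\mathrm{solv}}(\mathrm{TS})
  \;-\; \Delta G_{\mathrm{solv}}(\mathrm{R})
\end{equation}
```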

  1. CAFE: A Computer Tool for Accurate Simulation of the Regulatory Pool Fire Environment for Type B Packages

    SciTech Connect

    Gritzo, L.A.; Koski, J.A.; Suo-Anttila, A.J.

    1999-03-16

    The Container Analysis Fire Environment computer code (CAFE) is intended to provide Type B package designers with an enhanced engulfing fire boundary condition when combined with the PATRAN/P-Thermal commercial code. Historically an engulfing fire boundary condition has been modeled as σT⁴, where σ is the Stefan-Boltzmann constant and T is the fire temperature. The CAFE code includes the necessary chemistry, thermal radiation, and fluid mechanics to model an engulfing fire. Effects included are the local cooling of gases that form a protective boundary layer that reduces the incoming radiant heat flux to values lower than expected from a simple σT⁴ model. In addition, the effect of object shape on mixing that may increase the local fire temperature is included. Both high and low temperature regions that depend upon the local availability of oxygen are also calculated. Thus the competing effects that can both increase and decrease the local values of radiant heat flux are included in a manner that is not predictable a priori. The CAFE package consists of a group of computer subroutines that can be linked to workstation-based thermal analysis codes in order to predict package performance during regulatory and other accident fire scenarios.
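
    For context, the "simple σT⁴ model" that CAFE improves upon is just the blackbody estimate of the radiant heat flux incident on a package engulfed by a fire at a uniform temperature; the generic expression is given below.

```latex
% Blackbody estimate of the radiant heat flux incident on a package engulfed
% by a fire at temperature T_fire; sigma is the Stefan-Boltzmann constant.
% CAFE replaces this uniform boundary condition with a coupled fire model.
\begin{equation}
  q''_{\mathrm{rad}} \;=\; \sigma\, T_{\mathrm{fire}}^{4},
  \qquad \sigma = 5.670\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}
\end{equation}
```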

  2. Transition to computed radiography: can emergency medicine doctors accurately predict the need of film printing to facilitate optimal patient care?

    PubMed Central

    Yang, Siu Ming; Lo, Chor Man

    2011-01-01

    BACKGROUND: This study aimed to evaluate emergency medicine doctors’ accuracy in predicting the need of film printing in a simulated setting of computed radiography and assess whether this can facilitate optimal patient care. METHODS: Cross sectional study was conducted from 20 March 2009 to 3 April 2009 in 1334 patients. After clinical assessment of those patients who needed X-ray examination, doctors in the emergency department would indicate whether film printing was necessary for subsequent patient care in a simulated computed radiography setting. The final discharge plan was then retrieved from each patient record. Accuracy of doctors’ prediction was calculated by comparing the initial request for radiographic film printing and the final need of film. Doctors with different level of emergency medicine experience would also be analyzed and compared. RESULTS: The sensitivity of predicting film printing was 84.5% and the specificity of predicting no film printing was 91.2%. Positive predictive value was 88.4% while negative predictive value was 88.2%. The overall accuracy was 88.2%. The accuracy of doctors stratified into groups of fellows, higher trainees and basic trainees were 85.4%, 90.5% and 88.5% respectively (P=0.073). CONCLUSIONS: Our study showed that doctors can reliably predict whether film printing is needed after clinical assessment of patients, before actual image viewing. Advanced indication for film printing at the time of imaging request for selected patients can save time for all parties with minimal wastage. PMID:25214980
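
    The reported sensitivity, specificity, and predictive values follow directly from a 2x2 confusion matrix; the helper below shows the standard definitions. The example counts are hypothetical placeholders, not the study's raw data.

```python
# Standard 2x2 diagnostic statistics; the tp/fp/tn/fn counts in the example
# call are hypothetical placeholders, not the raw counts from the study.
def diagnostic_stats(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),            # predicted printing when film was needed
        "specificity": tn / (tn + fp),            # predicted no printing when not needed
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

for name, value in diagnostic_stats(tp=600, fp=80, tn=570, fn=110).items():
    print(f"{name}: {value:.3f}")
```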

  3. A new computer vision-based approach to aid the diagnosis of Parkinson's disease.

    PubMed

    Pereira, Clayton R; Pereira, Danilo R; Silva, Francisco A; Masieiro, João P; Weber, Silke A T; Hook, Christian; Papa, João P

    2016-11-01

    Even today, pointing out an exam that can diagnose a patient with Parkinson's disease (PD) accurately enough is not an easy task. Although a number of techniques have been used in the search for a more precise method, detecting such an illness and measuring its level of severity early enough to postpone its side effects are not straightforward. In this work, after reviewing a considerable number of works, we conclude that only a few techniques address the problem of PD recognition by means of micrography using computer vision techniques. Therefore, we consider the problem of aiding automatic PD diagnosis by means of spirals and meanders filled out in forms, which are then compared with the template for feature extraction. In our work, both the template and the drawings are identified and separated automatically using image processing techniques, thus needing no user intervention. Since we have no registered images, the idea is to obtain a suitable representation of both template and drawings using the very same approach for all images in a fast and accurate manner. The results have shown that we can obtain very reasonable recognition rates (around 67%), with the most accurate class being the one represented by the patients, which outnumbered the control individuals in the proposed dataset. The proposed approach seemed to be suitable for aiding in automatic PD diagnosis by means of computer vision and machine learning techniques. Also, meander images play an important role, leading to higher accuracies than spiral images. We also observed that the main problem in detecting PD is with patients in the early stages, who can draw near-perfect objects, which are very similar to the ones made by control patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Simple and highly accurate formulas for the computation of Transverse Mercator coordinates from longitude and isometric latitude

    NASA Astrophysics Data System (ADS)

    Bermejo-Solera, Mercedes; Otero, Jesús

    2009-03-01

    The International GNSS Service (IGS) has been producing the total troposphere zenith path delay (ZPD) product that is based on combined ZPD contributions from several IGS Analysis Centers (AC) since GPS week 890 in 1997. A new approach to the production of the IGS ZPD has been proposed that replaces the direct combination of diverse ZPD products with point positioning estimates using the IGS Combined Final orbit and clock products. The new product was formally adopted in 2007 after several years of concurrent production with the legacy product. We describe here the advantages of the new approach for the IGS ZPD product, which enhance the value of the new ZPD product for climate studies. We also address the impact the IGS adoption in November 2006 of new GPS antenna phase center standards has had on the new ZPD product. Finally we describe plans to further enhance the ZPD products.

  5. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
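
    As a minimal, generic illustration of the MCMC sampling step described above, the sketch below runs plain random-walk Metropolis (not the DRAM algorithm) on a one-parameter toy problem, with a Gaussian likelihood standing in for the surrogate strain model; all data and tuning values are synthetic placeholders.

```python
# Toy random-walk Metropolis sampler for a single damage-like parameter theta.
# The Gaussian "measurement model" stands in for the surrogate strain model;
# DRAM adds delayed rejection and proposal adaptation on top of this idea.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 0.1, size=20)          # synthetic strain-like observations

def log_posterior(theta):
    if not 0.0 < theta < 10.0:                # flat prior on (0, 10)
        return -np.inf
    return -0.5 * np.sum((data - theta) ** 2) / 0.1**2

samples, theta = [], 5.0
log_p = log_posterior(theta)
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05)      # random-walk proposal
    log_p_prop = log_posterior(prop)
    if np.log(rng.uniform()) < log_p_prop - log_p:
        theta, log_p = prop, log_p_prop       # accept
    samples.append(theta)

burned = np.array(samples[5000:])             # discard burn-in
print(f"posterior mean {burned.mean():.3f}, std {burned.std():.3f}")
```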

  6. A simple grand canonical approach to compute the vapor pressure of bulk and finite size systems

    SciTech Connect

    Factorovich, Matías H.; Scherlis, Damián A.

    2014-02-14

    In this article we introduce a simple grand canonical screening (GCS) approach to accurately compute vapor pressures from molecular dynamics or Monte Carlo simulations. This procedure entails a screening of chemical potentials using a conventional grand canonical scheme, and therefore it is straightforward to implement for any kind of interface. The scheme is validated against data obtained from Gibbs ensemble simulations for water and argon. Then, it is applied to obtain the vapor pressure of the coarse-grained mW water model, and it is shown that the computed value is in excellent accord with the one formally deduced using statistical thermodynamics arguments. Finally, this methodology is used to calculate the vapor pressure of a water nanodroplet of 94 molecules. Interestingly, the result is in perfect agreement with the one predicted by the Kelvin equation for a homogeneous droplet of that size.
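
    The droplet result mentioned above is checked against the Kelvin equation, whose standard form is reproduced below in generic notation; this is the textbook relation, not the paper's derivation.

```latex
% Kelvin equation for the vapor pressure p of a spherical droplet of radius r
% relative to the flat-interface value p_flat; gamma is the surface tension
% and v_l the molecular volume of the liquid.
\begin{equation}
  \ln \frac{p(r)}{p_{\mathrm{flat}}} \;=\; \frac{2\,\gamma\, v_{l}}{r\, k_{B} T}
\end{equation}
```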

  7. A simple grand canonical approach to compute the vapor pressure of bulk and finite size systems.

    PubMed

    Factorovich, Matías H; Molinero, Valeria; Scherlis, Damián A

    2014-02-14

    In this article we introduce a simple grand canonical screening (GCS) approach to accurately compute vapor pressures from molecular dynamics or Monte Carlo simulations. This procedure entails a screening of chemical potentials using a conventional grand canonical scheme, and therefore it is straightforward to implement for any kind of interface. The scheme is validated against data obtained from Gibbs ensemble simulations for water and argon. Then, it is applied to obtain the vapor pressure of the coarse-grained mW water model, and it is shown that the computed value is in excellent accord with the one formally deduced using statistical thermodynamics arguments. Finally, this methodology is used to calculate the vapor pressure of a water nanodroplet of 94 molecules. Interestingly, the result is in perfect agreement with the one predicted by the Kelvin equation for a homogeneous droplet of that size.

  8. DUET: a server for predicting effects of mutations on protein stability using an integrated computational approach.

    PubMed

    Pires, Douglas E V; Ascher, David B; Blundell, Tom L

    2014-07-01

    Cancer genome and other sequencing initiatives are generating extensive data on non-synonymous single nucleotide polymorphisms (nsSNPs) in human and other genomes. In order to understand the impacts of nsSNPs on the structure and function of the proteome, as well as to guide protein engineering, accurate in silico methodologies are required to study and predict their effects on protein stability. Despite the diversity of available computational methods in the literature, none has proven accurate and dependable on its own under all scenarios where mutation analysis is required. Here we present DUET, a web server for an integrated computational approach to study missense mutations in proteins. DUET consolidates two complementary approaches (mCSM and SDM) in a consensus prediction, obtained by combining the results of the separate methods in an optimized predictor using Support Vector Machines (SVM). We demonstrate that the proposed method improves overall accuracy of the predictions in comparison with either method individually and performs as well as or better than similar methods. The DUET web server is freely and openly available at http://structure.bioc.cam.ac.uk/duet.
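
    As a hedged sketch of how two complementary per-mutation stability scores can be consolidated into a consensus with a support-vector regressor, consider the snippet below; the feature names, synthetic training data, and kernel settings are illustrative assumptions, not the DUET training pipeline.

```python
# Illustrative consensus predictor: combine two per-mutation stability scores
# (stand-ins for mCSM- and SDM-style predictions) into a single ddG estimate
# with support vector regression. All numbers are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
score_a = rng.normal(0.0, 1.0, n)                 # predictor A scores (synthetic)
score_b = score_a + rng.normal(0.0, 0.5, n)       # predictor B, partly correlated
ddg_true = 0.6 * score_a + 0.4 * score_b + rng.normal(0.0, 0.3, n)

X = np.column_stack([score_a, score_b])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, ddg_true)

new_mutation = np.array([[0.8, 1.1]])             # hypothetical new scores
print(f"consensus ddG prediction: {model.predict(new_mutation)[0]:.2f} kcal/mol")
```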

  9. Toward a verifiable approach to the design of concurrent computations

    SciTech Connect

    Chisholm, G.H.

    1993-01-01

    Distributed programs are dependent on explicit message passing between disjoint components of the computation. This paper is concerned with investigating an approach for proving correctness of distributed programs under an assumed data-exchange capability. Stated informally, the data exchange assumption is that every message is passed correctly, i.e., neither lost nor corrupted. One approach for constructing a proof under this assumption would be to embed an abstract model of the data communications mechanism into the program specification. The Message Passing Interface (MPI) standard provides a basis for such a model. In support of our investigations, we have developed a high-level specification using the ASLAN specification language. Our specification is based on a generalized communications model from which the MPI model may be derived. We describe the specification of this model and an approach to the specification of distributed programs with explicit message passing based on a verifiable data exchange model.

  10. Toward a verifiable approach to the design of concurrent computations

    SciTech Connect

    Chisholm, G.H.

    1993-05-01

    Distributed programs are dependent on explicit message passing between disjoint components of the computation. This paper is concerned with investigating an approach for proving correctness of distributed programs under an assumed data-exchange capability. Stated informally, the data exchange assumption is that every message is passed correctly, i.e., neither lost nor corrupted. One approach for constructing a proof under this assumption would be to embed an abstract model of the data communications mechanism into the program specification. The Message Passing Interface (MPI) standard provides a basis for such a model. In support of our investigations, we have developed a high-level specification using the ASLAN specification language. Our specification is based on a generalized communications model from which the MPI model may be derived. We describe the specification of this model and an approach to the specification of distributed programs with explicit message passing based on a verifiable data exchange model.

  11. Design Optimization for Accurate Flow Simulations in 3D Printed Vascular Phantoms Derived from Computed Tomography Angiography.

    PubMed

    Sommer, Kelsey; Izzo, Richard L; Shepard, Lauren; Podgorsak, Alexander R; Rudin, Stephen; Siddiqui, Adnan H; Wilson, Michael F; Angel, Erin; Said, Zaid; Springer, Michael; Ionita, Ciprian N

    2017-02-11

    3D printing has been used to create complex arterial phantoms to advance device testing and physiological condition evaluation. Stereolithographic (STL) files of patient-specific cardiovascular anatomy are acquired to build cardiac vasculature through advanced mesh-manipulation techniques. Management of distal branches in the arterial tree is important to make such phantoms practicable. We investigated methods to manage the distal arterial flow resistance and pressure, thus creating physiologically and geometrically accurate phantoms that can be used for simulations of image-guided interventional procedures with new devices. Patient-specific CT data were imported into a Vital Imaging workstation, segmented, and exported as STL files. Using a mesh-manipulation program (Meshmixer) we created flow models of the coronary tree. Distal arteries were connected to a compliance chamber. The phantom was then printed using a Stratasys Connex3 multimaterial printer: the vessel in TangoPlus and the fluid flow simulation chamber in Vero. The model was connected to a programmable pump and pressure sensors measured flow characteristics through the phantoms. Physiological flow simulations for patient-specific vasculature were done for six cardiac models (three different vasculatures comparing two new designs). For the coronary phantom we obtained physiologically relevant waves which oscillated between 80 and 120 mmHg and a flow rate of ~125 ml/min, within the literature reported values. The pressure waves were similar to those acquired in human patients. Thus we demonstrated that 3D printed phantoms can be used not only to reproduce the correct patient anatomy for device testing in image-guided interventions, but also for physiological simulations. This has great potential to advance treatment assessment and diagnosis.

  12. Design optimization for accurate flow simulations in 3D printed vascular phantoms derived from computed tomography angiography

    NASA Astrophysics Data System (ADS)

    Sommer, Kelsey; Izzo, Rick L.; Shepard, Lauren; Podgorsak, Alexander R.; Rudin, Stephen; Siddiqui, Adnan H.; Wilson, Michael F.; Angel, Erin; Said, Zaid; Springer, Michael; Ionita, Ciprian N.

    2017-03-01

    3D printing has been used to create complex arterial phantoms to advance device testing and physiological condition evaluation. Stereolithographic (STL) files of patient-specific cardiovascular anatomy are acquired to build cardiac vasculature through advanced mesh-manipulation techniques. Management of distal branches in the arterial tree is important to make such phantoms practicable. We investigated methods to manage the distal arterial flow resistance and pressure, thus creating physiologically and geometrically accurate phantoms that can be used for simulations of image-guided interventional procedures with new devices. Patient-specific CT data were imported into a Vital Imaging workstation, segmented, and exported as STL files. Using a mesh-manipulation program (Meshmixer) we created flow models of the coronary tree. Distal arteries were connected to a compliance chamber. The phantom was then printed using a Stratasys Connex3 multimaterial printer: the vessel in TangoPlus and the fluid flow simulation chamber in Vero. The model was connected to a programmable pump and pressure sensors measured flow characteristics through the phantoms. Physiological flow simulations for patient-specific vasculature were done for six cardiac models (three different vasculatures comparing two new designs). For the coronary phantom we obtained physiologically relevant waves which oscillated between 80 and 120 mmHg and a flow rate of 125 ml/min, within the literature reported values. The pressure waves were similar to those acquired in human patients. Thus we demonstrated that 3D printed phantoms can be used not only to reproduce the correct patient anatomy for device testing in image-guided interventions, but also for physiological simulations. This has great potential to advance treatment assessment and diagnosis.

  13. A Computer Vision Approach to Identify Einstein Rings and Arcs

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Hsiu

    2017-03-01

    Einstein rings are rare gems of strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at every position angle, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or visual inspections of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in the images. We propose a two-tier approach by first pre-selecting massive galaxies associated with multiple blue objects as possible lenses, then using the Hough transform to identify circular patterns. As a proof-of-concept, we apply our approach to SDSS, with a high completeness, albeit with low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating the versatility of our approach.
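
    The workhorse named above, the circle Hough transform, is available in standard image-processing libraries; the sketch below detects a ring-like pattern in a synthetic cutout with scikit-image. The radii range, edge-detection sigma, and peak settings are placeholder assumptions, not the survey pipeline configuration.

```python
# Minimal circle-Hough detection on a synthetic ring image using scikit-image;
# radii, sigma and peak settings are placeholders, not survey pipeline values.
import numpy as np
from skimage.draw import circle_perimeter
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

image = np.zeros((120, 120), dtype=float)
rr, cc = circle_perimeter(60, 60, 25)          # synthetic Einstein-ring-like circle
image[rr, cc] = 1.0

edges = canny(image, sigma=1.0)                # edge map fed to the Hough transform
radii = np.arange(15, 40)
accumulator = hough_circle(edges, radii)
_, cx, cy, found_radii = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
print(f"detected ring: centre=({cx[0]}, {cy[0]}), radius={found_radii[0]}")
```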

  14. Computational neuroscience approach to biomarkers and treatments for mental disorders.

    PubMed

    Yahata, Noriaki; Kasai, Kiyoto; Kawato, Mitsuo

    2017-04-01

    Psychiatry research has long experienced a stagnation stemming from a lack of understanding of the neurobiological underpinnings of phenomenologically defined mental disorders. Recently, the application of computational neuroscience to psychiatry research has shown great promise in establishing a link between phenomenological and pathophysiological aspects of mental disorders, thereby recasting current nosology in more biologically meaningful dimensions. In this review, we highlight recent investigations into computational neuroscience that have undertaken either theory- or data-driven approaches to quantitatively delineate the mechanisms of mental disorders. The theory-driven approach, including reinforcement learning models, plays an integrative role in this process by enabling correspondence between behavior and disorder-specific alterations at multiple levels of brain organization, ranging from molecules to cells to circuits. Previous studies have explicated a plethora of defining symptoms of mental disorders, including anhedonia, inattention, and poor executive function. The data-driven approach, on the other hand, is an emerging field in computational neuroscience seeking to identify disorder-specific features among high-dimensional big data. Remarkably, various machine-learning techniques have been applied to neuroimaging data, and the extracted disorder-specific features have been used for automatic case-control classification. For many disorders, the reported accuracies have reached 90% or more. However, we note that rigorous tests on independent cohorts are critically required to translate this research into clinical applications. Finally, we discuss the utility of the disorder-specific features found by the data-driven approach to psychiatric therapies, including neurofeedback. Such developments will allow simultaneous diagnosis and treatment of mental disorders using neuroimaging, thereby establishing 'theranostics' for the first time in clinical

  15. A low-computational approach on gaze estimation with eye touch system.

    PubMed

    Topal, Cihan; Gunal, Serkan; Koçdeviren, Onur; Doğan, Atakan; Gerek, Ömer Nezih

    2014-02-01

    Among various approaches to eye tracking systems, light-reflection based systems with non-imaging sensors, e.g., photodiodes or phototransistors, are known to have relatively low complexity; yet, they provide moderately accurate estimation of the point of gaze. In this paper, a low-computational approach on gaze estimation is proposed using the Eye Touch system, which is a light-reflection based eye tracking system, previously introduced by the authors. Based on the physical implementation of Eye Touch, the sensor measurements are now utilized in low-computational least-squares algorithms to estimate arbitrary gaze directions, unlike the existing light reflection-based systems, including the initial Eye Touch implementation, where only limited predefined regions were distinguished. The system also utilizes an effective pattern classification algorithm to be able to perform left, right, and double clicks based on respective eye winks with significantly high accuracy. In order to avoid accuracy problems for sensitive sensor biasing hardware, a robust custom microcontroller-based data acquisition system is developed. Consequently, the physical size and cost of the overall Eye Touch system are considerably reduced while the power efficiency is improved. The results of the experimental analysis over numerous subjects clearly indicate that the proposed eye tracking system can classify eye winks with 98% accuracy, and attain an accurate gaze direction with an average angular error of about 0.93 °. Due to its lightweight structure, competitive accuracy and low-computational requirements relative to video-based eye tracking systems, the proposed system is a promising human-computer interface for both stationary and mobile eye tracking applications.
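
    The low-computational gaze estimate described above amounts to fitting a least-squares mapping from the photosensor readings to gaze coordinates; the sketch below fits such a mapping from calibration samples. The sensor count and all data are hypothetical placeholders, not the Eye Touch hardware output or calibration procedure.

```python
# Least-squares mapping from non-imaging sensor readings to gaze coordinates,
# fitted from calibration samples; sensor data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_sensors = 50, 4
readings = rng.normal(size=(n_samples, n_sensors))            # calibration sensor values
true_map = rng.normal(size=(n_sensors + 1, 2))                # unknown linear map (+ bias)
A = np.hstack([readings, np.ones((n_samples, 1))])            # design matrix with bias column
gaze = A @ true_map + rng.normal(0.0, 0.01, size=(n_samples, 2))

coeffs, *_ = np.linalg.lstsq(A, gaze, rcond=None)             # fit the mapping

new_reading = np.hstack([rng.normal(size=n_sensors), 1.0])
print("estimated gaze (x, y):", new_reading @ coeffs)
```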

  16. A grid computing-based approach for the acceleration of simulations in cardiology.

    PubMed

    Alonso, José M; Ferrero, José M; Hernández, Vicente; Moltó, Germán; Saiz, Javier; Trénor, Beatriz

    2008-03-01

    This paper combines high-performance computing and grid computing technologies to accelerate multiple executions of a biomedical application that simulates the action potential propagation on cardiac tissues. First, a parallelization strategy was employed to accelerate the execution of simulations on a cluster of personal computers (PCs). Then, grid computing was employed to concurrently perform the multiple simulations that compose the cardiac case studies on the resources of a grid deployment, by means of a service-oriented approach. This way, biomedical experts are provided with a gateway to easily access a grid infrastructure for the execution of these research studies. Emphasis is placed on the methodology employed. In order to assess the benefits of the grid, a cardiac case study, which analyzes the effects of premature stimulation on reentry generation during myocardial ischemia, has been carried out. The collaborative usage of a distributed computing infrastructure has reduced the time required for the execution of cardiac case studies, which makes it possible, for example, to take more accurate decisions when evaluating the effects of new antiarrhythmic drugs on the electrical activity of the heart.

  17. Noncontrast computed tomography can predict the outcome of shockwave lithotripsy via accurate stone measurement and abdominal fat distribution determination.

    PubMed

    Geng, Jiun-Hung; Tu, Hung-Pin; Shih, Paul Ming-Chen; Shen, Jung-Tsung; Jang, Mei-Yu; Wu, Wen-Jen; Li, Ching-Chia; Chou, Yii-Her; Juan, Yung-Shun

    2015-01-01

    Urolithiasis is a common disease of the urinary system. Extracorporeal shockwave lithotripsy (SWL) has become one of the standard treatments for renal and ureteral stones; however, the success rates range widely and failure of stone disintegration may cause additional outlay, alternative procedures, and even complications. We used the data available from noncontrast abdominal computed tomography (NCCT) to evaluate the impact of stone parameters and abdominal fat distribution on calculus-free rates following SWL. We retrospectively reviewed 328 patients who had urinary stones and had undergone SWL from August 2012 to August 2013. All of them received pre-SWL NCCT; 1 month after SWL, radiography was arranged to evaluate the condition of the fragments. These patients were classified into stone-free group and residual stone group. Unenhanced computed tomography variables, including stone attenuation, abdominal fat area, and skin-to-stone distance (SSD) were analyzed. In all, 197 (60%) were classified as stone-free and 132 (40%) as having residual stone. The mean ages were 49.35 ± 13.22 years and 55.32 ± 13.52 years, respectively. On univariate analysis, age, stone size, stone surface area, stone attenuation, SSD, total fat area (TFA), abdominal circumference, serum creatinine, and the severity of hydronephrosis revealed statistical significance between these two groups. From multivariate logistic regression analysis, the independent parameters impacting SWL outcomes were stone size, stone attenuation, TFA, and serum creatinine. [Adjusted odds ratios and (95% confidence intervals): 9.49 (3.72-24.20), 2.25 (1.22-4.14), 2.20 (1.10-4.40), and 2.89 (1.35-6.21) respectively, all p < 0.05]. In the present study, stone size, stone attenuation, TFA and serum creatinine were four independent predictors for stone-free rates after SWL. These findings suggest that pretreatment NCCT may predict the outcomes after SWL. Consequently, we can use these predictors for selecting
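
    As an illustration of the kind of multivariate logistic regression reported above (not the authors' actual analysis or data), the following sketch fits a model on synthetic values of the four independent predictors and reads off odds ratios:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        n = 328  # cohort size reported above

        # Hypothetical predictors mirroring the four significant variables:
        # stone size (mm), stone attenuation (HU), total fat area (cm^2), creatinine (mg/dL).
        X = np.column_stack([
            rng.normal(8, 3, n),
            rng.normal(800, 250, n),
            rng.normal(250, 80, n),
            rng.normal(1.0, 0.3, n),
        ])
        y = rng.integers(0, 2, n)  # 1 = residual stone, 0 = stone-free (synthetic labels)

        X_std = StandardScaler().fit_transform(X)
        model = LogisticRegression(max_iter=1000).fit(X_std, y)

        # exp(coefficient) gives the odds ratio per one standard deviation of each predictor.
        print("odds ratios (synthetic data):", np.exp(model.coef_).ravel())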

  18. Understanding Plant Nitrogen Metabolism through Metabolomics and Computational Approaches

    PubMed Central

    Beatty, Perrin H.; Klein, Matthias S.; Fischer, Jeffrey J.; Lewis, Ian A.; Muench, Douglas G.; Good, Allen G.

    2016-01-01

    A comprehensive understanding of plant metabolism could provide a direct mechanism for improving nitrogen use efficiency (NUE) in crops. One of the major barriers to achieving this outcome is our poor understanding of the complex metabolic networks, physiological factors, and signaling mechanisms that affect NUE in agricultural settings. However, an exciting collection of computational and experimental approaches has begun to elucidate whole-plant nitrogen usage and provides an avenue for connecting nitrogen-related phenotypes to genes. Herein, we describe how metabolomics, computational models of metabolism, and flux balance analysis have been harnessed to advance our understanding of plant nitrogen metabolism. We introduce a model describing the complex flow of nitrogen through crops in a real-world agricultural setting and describe how experimental metabolomics data, such as isotope labeling rates and analyses of nutrient uptake, can be used to refine these models. In summary, the metabolomics/computational approach offers an exciting mechanism for understanding NUE that may ultimately lead to more effective crop management and engineered plants with higher yields. PMID:27735856
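
    Flux balance analysis of the kind referred to above reduces to a linear program: maximize a target flux subject to steady-state mass balance S·v = 0 and flux bounds. A minimal sketch with a toy, hypothetical stoichiometric matrix (not a real nitrogen-metabolism network):

        import numpy as np
        from scipy.optimize import linprog

        # Toy stoichiometric matrix S (rows: metabolites, columns: reactions) for a
        # hypothetical three-reaction pathway; values are illustrative only.
        S = np.array([
            [ 1, -1,  0],   # metabolite A: produced by r1, consumed by r2
            [ 0,  1, -1],   # metabolite B: produced by r2, consumed by r3
        ])
        bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds for r1..r3
        c = np.array([0, 0, -1])               # maximize flux through r3 (linprog minimizes)

        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
        print("optimal flux distribution:", res.x)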

  19. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy in the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
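
    The ranking step described above, arranging fitted models in a hierarchy by Akaike indices, can be sketched as follows; the log-likelihood values and parameter counts are invented for illustration, not taken from the LCMV study.

        import numpy as np

        def aic(log_likelihood, n_params):
            """Akaike information criterion: lower values indicate better support."""
            return 2 * n_params - 2 * log_likelihood

        # Hypothetical maximized log-likelihoods for three candidate models
        # with increasing numbers of fitted parameters.
        candidates = {"model_A": (-120.4, 4), "model_B": (-115.9, 6), "model_C": (-115.2, 9)}

        scores = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
        best = min(scores, key=scores.get)
        # Delta-AIC relative to the best model is a common way to arrange the hierarchy.
        for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
            print(f"{name}: AIC={score:.1f}, dAIC={score - scores[best]:.1f}")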

  20. F18-fluorodeoxyglucose-positron emission tomography and computed tomography is not accurate in preoperative staging of gastric cancer

    PubMed Central

    Ha, Tae Kyung; Choi, Yun Young; Song, Soon Young

    2011-01-01

    Purpose To investigate the clinical benefits of F18-fluorodeoxyglucose-positron emission tomography and computed tomography (18F-FDG-PET/CT) over multi-detector row CT (MDCT) in preoperative staging of gastric cancer. Methods FDG-PET/CT and MDCT were performed on 78 patients with gastric cancer pathologically diagnosed by endoscopy. The accuracy of radiologic staging was retrospectively compared with the pathologic results after curative resection. Results Primary tumors were detected in 51 (65.4%) patients with 18F-FDG-PET/CT, and 47 (60.3%) patients with MDCT. Regarding detection of lymph node metastasis, the sensitivity of FDG-PET/CT was 51.5% with an accuracy of 71.8%, whereas those of MDCT were 69.7% and 69.2%, respectively. The sensitivity of 18F-FDG-PET/CT for a primary tumor with signet ring cell carcinoma was lower than that of 18F-FDG-PET/CT for a primary tumor with non-signet ring cell carcinoma (35.3% vs. 73.8%, P < 0.01). Conclusion Due to its low sensitivity, 18F-FDG-PET/CT alone shows no definite clinical benefit for prediction of lymph node metastasis in preoperative staging of gastric cancer. PMID:22066108

  1. Computational Study of the Reactions of Methanol with the Hydroperoxyl and Methyl Radicals. Part I: Accurate Thermochemistry and Barrier Heights

    SciTech Connect

    Alecu, I. M.; Truhlar, D. G.

    2011-04-07

    The reactions of CH3OH with the HO2 and CH3 radicals are important in the combustion of methanol and are prototypes for reactions of heavier alcohols in biofuels. The reaction energies and barrier heights for these reaction systems are computed with CCSD(T) theory extrapolated to the complete basis set limit using correlation-consistent basis sets, both augmented and unaugmented, and further refined by including a fully coupled treatment of the connected triple excitations, a second-order perturbative treatment of quadruple excitations (by CCSDT(2)Q), core–valence corrections, and scalar relativistic effects. It is shown that the M08-HX and M08-SO hybrid meta-GGA density functionals can achieve sub-kcal mol-1 agreement with the high-level ab initio results, identifying these functionals as important potential candidates for direct dynamics studies on the rates of these and homologous reaction systems.

  2. Accurate predictions of iron redox state in silicate glasses: A multivariate approach using X-ray absorption spectroscopy

    SciTech Connect

    Dyar, M. Darby; McCanta, Molly; Breves, Elly; Carey, C. J.; Lanzirotti, Antonio

    2016-03-01

    Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
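
    A minimal sketch of the multivariate strategy described above, fitting a lasso regression from full spectra to %Fe3+; the training data here are synthetic stand-ins, not the calibrated glass spectra used in the study.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)

        # Synthetic stand-in for a training set: each row is a full XAS spectrum
        # (pre-edge through EXAFS) and the target is %Fe3+ from wet chemistry.
        n_glasses, n_energies = 60, 500
        spectra = rng.normal(size=(n_glasses, n_energies))
        fe3_fraction = rng.uniform(0, 100, n_glasses)

        # The lasso keeps only the spectral channels that carry redox information.
        model = Lasso(alpha=0.5, max_iter=5000).fit(spectra, fe3_fraction)
        print("channels retained:", np.count_nonzero(model.coef_))
        print("predicted %Fe3+ of first sample:", model.predict(spectra[:1])[0])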

  4. A GPU-computing Approach to Solar Stokes Profile Inversion

    NASA Astrophysics Data System (ADS)

    Harker, Brian J.; Mighell, Kenneth J.

    2012-09-01

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS, employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disk maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel GA with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disk vector magnetograms derived by this method are shown using SOLIS/VSM data observed on 2008 March 28 at 15:45 UT.

  5. Computing electronic structures: A new multiconfiguration approach for excited states

    NASA Astrophysics Data System (ADS)

    Cancès, Éric; Galicher, Hervé; Lewin, Mathieu

    2006-02-01

    We present a new method for the computation of electronic excited states of molecular systems. This method is based upon a recent theoretical definition of multiconfiguration excited states [due to one of us, see M. Lewin, Solutions of the multiconfiguration equations in quantum chemistry, Arch. Rat. Mech. Anal. 171 (2004) 83-114]. Our algorithm, dedicated to the computation of the first excited state, always converges to a stationary state of the multiconfiguration model, which can be interpreted as an approximate excited state of the molecule. The definition of this approximate excited state is variational. An interesting feature is that it satisfies a non-linear Hylleraas-Undheim-MacDonald type principle: the energy of the approximate excited state is an upper bound to the true excited state energy of the N-body Hamiltonian. To compute the first excited state, one has to deform paths on a manifold, as is usually done in the search for transition states between reactants and products on potential energy surfaces. We propose here a general method for the deformation of paths which could also be useful in other settings. We also compare our method to other approaches used in Quantum Chemistry and give some explanation of the unsatisfactory behaviours which are sometimes observed when using the latter. Numerical results for the special case of two-electron systems are provided: we compute the first singlet excited state potential energy surface of the H2 molecule.

  6. Computational approaches in the design of synthetic receptors - A review.

    PubMed

    Cowen, Todd; Karim, Kal; Piletsky, Sergey

    2016-09-14

    The rational design of molecularly imprinted polymers (MIPs) has been a major contributor to their reputation as "plastic antibodies" - high affinity robust synthetic receptors which can be optimally designed and produced at a much lower cost than their biological equivalents. Computational design has become a routine procedure in the production of MIPs, and has led to major advances in functional monomer screening, selection of cross-linker and solvent, optimisation of monomer(s)-template ratio and selectivity analysis. In this review the various computational methods will be discussed with reference to all the published relevant literature since the end of 2013, with each article described by the target molecule, the computational approach applied (whether molecular mechanics/molecular dynamics, semi-empirical quantum mechanics, ab initio quantum mechanics (Hartree-Fock, Møller-Plesset, etc.) or DFT) and the purpose for which they were used. Detailed analysis is given to novel techniques including analysis of polymer binding sites, the use of novel screening programs and simulations of the MIP polymerisation reaction. The further advances in molecular modelling and computational design of synthetic receptors in particular will have a serious impact on the future of nanotechnology and biotechnology, permitting the further translation of MIPs into the realms of analytics and medical technology.

  7. Computing electronic structures: A new multiconfiguration approach for excited states

    SciTech Connect

    Cances, Eric . E-mail: cances@cermics.enpc.fr; Galicher, Herve . E-mail: galicher@cermics.enpc.fr; Lewin, Mathieu . E-mail: lewin@cermic.enpc.fr

    2006-02-10

    We present a new method for the computation of electronic excited states of molecular systems. This method is based upon a recent theoretical definition of multiconfiguration excited states [due to one of us, see M. Lewin, Solutions of the multiconfiguration equations in quantum chemistry, Arch. Rat. Mech. Anal. 171 (2004) 83-114]. Our algorithm, dedicated to the computation of the first excited state, always converges to a stationary state of the multiconfiguration model, which can be interpreted as an approximate excited state of the molecule. The definition of this approximate excited state is variational. An interesting feature is that it satisfies a non-linear Hylleraas-Undheim-MacDonald type principle: the energy of the approximate excited state is an upper bound to the true excited state energy of the N-body Hamiltonian. To compute the first excited state, one has to deform paths on a manifold, as is usually done in the search for transition states between reactants and products on potential energy surfaces. We propose here a general method for the deformation of paths which could also be useful in other settings. We also compare our method to other approaches used in Quantum Chemistry and give some explanation of the unsatisfactory behaviours which are sometimes observed when using the latter. Numerical results for the special case of two-electron systems are provided: we compute the first singlet excited state potential energy surface of the H2 molecule.

  8. A GPU-COMPUTING APPROACH TO SOLAR STOKES PROFILE INVERSION

    SciTech Connect

    Harker, Brian J.; Mighell, Kenneth J. E-mail: mighell@noao.edu

    2012-09-20

    We present a new computational approach to the inversion of solar photospheric Stokes polarization profiles, under the Milne-Eddington model, for vector magnetography. Our code, named GENESIS, employs multi-threaded parallel-processing techniques to harness the computing power of graphics processing units (GPUs), along with algorithms designed to exploit the inherent parallelism of the Stokes inversion problem. Using a genetic algorithm (GA) engineered specifically for use with a GPU, we produce full-disk maps of the photospheric vector magnetic field from polarized spectral line observations recorded by the Synoptic Optical Long-term Investigations of the Sun (SOLIS) Vector Spectromagnetograph (VSM) instrument. We show the advantages of pairing a population-parallel GA with data-parallel GPU-computing techniques, and present an overview of the Stokes inversion problem, including a description of our adaptation to the GPU-computing paradigm. Full-disk vector magnetograms derived by this method are shown using SOLIS/VSM data observed on 2008 March 28 at 15:45 UT.

  9. The Cambridge Face Tracker: Accurate, Low Cost Measurement of Head Posture Using Computer Vision and Face Recognition Software.

    PubMed

    Thomas, Peter B M; Baltrušaitis, Tadas; Robinson, Peter; Vivian, Anthony J

    2016-09-01

    We validate a video-based method of head posture measurement. The Cambridge Face Tracker uses neural networks (constrained local neural fields) to recognize facial features in video. The relative position of these facial features is used to calculate head posture. First, we assess the accuracy of this approach against videos in three research databases where each frame is tagged with a precisely measured head posture. Second, we compare our method to a commercially available mechanical device, the Cervical Range of Motion device: four subjects each adopted 43 distinct head postures that were measured using both methods. The Cambridge Face Tracker achieved confident facial recognition in 92% of the approximately 38,000 frames of video from the three databases. The respective mean error in absolute head posture was 3.34°, 3.86°, and 2.81°, with a median error of 1.97°, 2.16°, and 1.96°. The accuracy decreased with more extreme head posture. Comparing The Cambridge Face Tracker to the Cervical Range of Motion Device gave correlation coefficients of 0.99 (P < 0.0001), 0.96 (P < 0.0001), and 0.99 (P < 0.0001) for yaw, pitch, and roll, respectively. The Cambridge Face Tracker performs well under real-world conditions and within the range of normally-encountered head posture. It allows useful quantification of head posture in real time or from precaptured video. Its performance is similar to that of a clinically validated mechanical device. It has significant advantages over other approaches in that subjects do not need to wear any apparatus, and it requires only low cost, easy-to-setup consumer electronics. Noncontact assessment of head posture allows more complete clinical assessment of patients, and could benefit surgical planning in future.
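
    The Cambridge Face Tracker's own pipeline (constrained local neural fields) is not reproduced here, but the general step of turning detected 2D facial landmarks into head-posture angles can be sketched with OpenCV's solvePnP; the 3D model points, landmark pixel coordinates, and camera intrinsics below are hypothetical placeholders.

        import numpy as np
        import cv2

        # Rough 3D positions (mm) of a few facial landmarks on a generic head model.
        model_points = np.array([
            (0.0, 0.0, 0.0),           # nose tip
            (0.0, -63.6, -12.5),       # chin
            (-43.3, 32.7, -26.0),      # left eye outer corner
            (43.3, 32.7, -26.0),       # right eye outer corner
            (-28.9, -28.9, -24.1),     # left mouth corner
            (28.9, -28.9, -24.1),      # right mouth corner
        ], dtype=np.float64)

        # Hypothetical 2D landmark detections (pixels) from one video frame.
        image_points = np.array([
            (359, 391), (399, 561), (337, 297), (513, 301), (345, 465), (453, 469)
        ], dtype=np.float64)

        focal, center = 640.0, (320.0, 240.0)       # assumed pinhole camera intrinsics
        camera_matrix = np.array([[focal, 0, center[0]],
                                  [0, focal, center[1]],
                                  [0, 0, 1]], dtype=np.float64)

        ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera_matrix, None)
        rotation, _ = cv2.Rodrigues(rvec)
        angles, *_ = cv2.RQDecomp3x3(rotation)      # Euler angles in degrees
        print("estimated (pitch, yaw, roll):", angles)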

  10. The Cambridge Face Tracker: Accurate, Low Cost Measurement of Head Posture Using Computer Vision and Face Recognition Software

    PubMed Central

    Thomas, Peter B. M.; Baltrušaitis, Tadas; Robinson, Peter; Vivian, Anthony J.

    2016-01-01

    Purpose We validate a video-based method of head posture measurement. Methods The Cambridge Face Tracker uses neural networks (constrained local neural fields) to recognize facial features in video. The relative position of these facial features is used to calculate head posture. First, we assess the accuracy of this approach against videos in three research databases where each frame is tagged with a precisely measured head posture. Second, we compare our method to a commercially available mechanical device, the Cervical Range of Motion device: four subjects each adopted 43 distinct head postures that were measured using both methods. Results The Cambridge Face Tracker achieved confident facial recognition in 92% of the approximately 38,000 frames of video from the three databases. The respective mean error in absolute head posture was 3.34°, 3.86°, and 2.81°, with a median error of 1.97°, 2.16°, and 1.96°. The accuracy decreased with more extreme head posture. Comparing The Cambridge Face Tracker to the Cervical Range of Motion Device gave correlation coefficients of 0.99 (P < 0.0001), 0.96 (P < 0.0001), and 0.99 (P < 0.0001) for yaw, pitch, and roll, respectively. Conclusions The Cambridge Face Tracker performs well under real-world conditions and within the range of normally-encountered head posture. It allows useful quantification of head posture in real time or from precaptured video. Its performance is similar to that of a clinically validated mechanical device. It has significant advantages over other approaches in that subjects do not need to wear any apparatus, and it requires only low cost, easy-to-setup consumer electronics. Translational Relevance Noncontact assessment of head posture allows more complete clinical assessment of patients, and could benefit surgical planning in future. PMID:27730008

  11. An alternative approach for computing seismic response with accidental eccentricity

    NASA Astrophysics Data System (ADS)

    Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu

    2014-09-01

    Accidental eccentricity is a non-standard assumption for seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which requires either time consuming computation of natural vibration of eccentric structures or finding a static displacement solution by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called the Rayleigh Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) a RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, i.e., the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA, and is much better than ETM-MRSA, but is also more economical. Thus, RRP-MRSA can be in place of current accidental eccentricity computations in seismic design.
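
    The Rayleigh Ritz projection at the heart of RRP-MRSA can be illustrated in a few lines: project the stiffness and mass matrices onto a small basis and solve the reduced eigenproblem instead of the full one. The matrices and basis choice below are synthetic placeholders, not the paper's building models.

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(3)

        # Synthetic symmetric positive definite stand-ins for stiffness K and mass M.
        n = 200
        A = rng.normal(size=(n, n)); K = A @ A.T + n * np.eye(n)
        B = rng.normal(size=(n, n)); M = B @ B.T + n * np.eye(n)

        # Ritz basis V: here the ten lowest modes of the nominal structure; the idea
        # is to reuse such a basis to approximate each eccentric case cheaply.
        _, modes = eigh(K, M)
        V = modes[:, :10]

        # Project onto the basis and solve the small reduced eigenproblem.
        K_r, M_r = V.T @ K @ V, V.T @ M @ V
        w_red, q = eigh(K_r, M_r)
        approx_modes = V @ q                        # approximate full-order mode shapes
        print("lowest approximate natural frequency:", np.sqrt(w_red[0]))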

  12. Computational approaches for rational design of proteins with novel functionalities

    PubMed Central

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643

  13. Computational approaches for fragment-based and de novo design.

    PubMed

    Loving, Kathryn; Alberts, Ian; Sherman, Woody

    2010-01-01

    Fragment-based and de novo design strategies have been used in drug discovery for years. The methodologies for these strategies are typically discussed separately, yet the applications of these techniques overlap substantially. We present a review of various fragment-based discovery and de novo design protocols with an emphasis on successful applications in real-world drug discovery projects. Furthermore, we illustrate the strengths and weaknesses of the various approaches and discuss how one method can be used to complement another. We also discuss how the incorporation of experimental data as constraints in computational models can produce novel compounds that occupy unique areas in intellectual property (IP) space yet are biased toward the desired chemical property space. Finally, we present recent research results suggesting that computational tools applied to fragment-based discovery and de novo design can have a greater impact on the discovery process when coupled with the right experiments.

  14. Slide Star: An Approach to Videodisc/Computer Aided Instruction

    PubMed Central

    McEnery, Kevin W.

    1984-01-01

    One of medical education's primary goals is for the student to be proficient in the gross and microscopic identification of disease. The videodisc, with its storage capacity of up to 54,000 photomicrographs is ideally suited to assist in this educational process. “Slide Star” is a method of interactive instruction which is designed for use in any subject where it is essential to identify visual material. The instructional approach utilizes a computer controlled videodisc to display photomicrographs. In the demonstration program, these are slides of normal blood cells. The program is unique in that the instruction is created by the student's commands manipulating the photomicrograph data base. A prime feature is the use of computer generated multiple choice questions to reinforce the learning process.

  15. [Computer work and De Quervain's tenosynovitis: an evidence based approach].

    PubMed

    Gigante, M R; Martinotti, I; Cirla, P E

    2012-01-01

    The debate around the role of work at a personal computer as a cause of De Quervain's tenosynovitis has developed only partially, without considering the available multidisciplinary data. A systematic review of the literature, using an evidence-based approach, was performed. Among the disorders associated with the use of video display units (VDUs), those of the upper limbs must be distinguished, and among them those related to overload. Experimental studies on the occurrence of De Quervain's tenosynovitis are quite limited, and clinically it is quite difficult to prove a professional etiology, considering the interference of other activities of daily living and of biological susceptibility (i.e., anatomical variability, sex, age, exercise). At present there is no evidence of any connection between De Quervain syndrome and time spent using a personal computer or keyboard; limited evidence of a correlation is found with time spent using a mouse. No data are available regarding exclusive or predominant use of laptops or mobile "smart phones".

  16. A Computational Approach for Probabilistic Analysis of Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2009-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those addressed in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
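
    A toy version of the surrogate-modelling idea described above: fit a quadratic response surface to a handful of expensive simulation results so new conditions can be interpolated cheaply. The input variables, ranges, and response here are made up, not LS-DYNA output.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(4)

        # Hypothetical results from a handful of expensive runs: inputs are impact
        # velocity (m/s) and pitch angle (deg); output is a peak acceleration metric.
        X = rng.uniform([5, -10], [15, 10], size=(20, 2))
        y = 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.5, 20)

        # Quadratic response surface fitted once, then used as a cheap interpolator.
        surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
        print("predicted response at (10 m/s, 0 deg):", surrogate.predict([[10.0, 0.0]])[0])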

  17. Staging of osteonecrosis of the jaw requires computed tomography for accurate definition of the extent of bony disease.

    PubMed

    Bedogni, Alberto; Fedele, Stefano; Bedogni, Giorgio; Scoletta, Matteo; Favia, Gianfranco; Colella, Giuseppe; Agrillo, Alessandro; Bettini, Giordana; Di Fede, Olga; Oteri, Giacomo; Fusco, Vittorio; Gabriele, Mario; Ottolenghi, Livia; Valsecchi, Stefano; Porter, Stephen; Petruzzi, Massimo; Arduino, Paolo; D'Amato, Salvatore; Ungari, Claudio; Fung Polly, Pok-Lam; Saia, Giorgia; Campisi, Giuseppina

    2014-09-01

    Management of osteonecrosis of the jaw associated with antiresorptive agents is challenging, and outcomes are unpredictable. The severity of disease is the main guide to management, and can help to predict prognosis. Most available staging systems for osteonecrosis, including the widely-used American Association of Oral and Maxillofacial Surgeons (AAOMS) system, classify severity on the basis of clinical and radiographic findings. However, clinical inspection and radiography are limited in their ability to identify the extent of necrotic bone disease compared with computed tomography (CT). We have organised a large multicentre retrospective study (known as MISSION) to investigate the agreement between the AAOMS staging system and the extent of osteonecrosis of the jaw (focal compared with diffuse involvement of bone) as detected on CT. We studied 799 patients with detailed clinical phenotyping who had CT images taken. Features of diffuse bone disease were identified on CT within all AAOMS stages (20%, 8%, 48%, and 24% of patients in stages 0, 1, 2, and 3, respectively). Of the patients classified as stage 0, 110/192 (57%) had diffuse disease on CT, and about 1 in 3 with CT evidence of diffuse bone disease was misclassified by the AAOMS system as having stages 0 and 1 osteonecrosis. In addition, more than a third of patients with AAOMS stage 2 (142/405, 35%) had focal bone disease on CT. We conclude that the AAOMS staging system does not correctly identify the extent of bony disease in patients with osteonecrosis of the jaw. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  18. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  19. Approaches to Computer Modeling of Phosphate Hide-Out.

    DTIC Science & Technology

    1984-06-28

    Phosphate acts as a buffer to keep pH at a value above which acid corrosion occurs and below which caustic corrosion becomes significant. Difficulties arise from the ionization equilibria of dihydrogen phosphate: H2PO4- = H+ + HPO4(2-), K (B-7); H+ + OH- = H2O, 1/Kw (B-8); H2PO4- + OH- = HPO4(2-) + H2O, K/Kw (B-9). (From NRL Memorandum Report 5361, K. A. S. Hardy and J. C. ...)

  20. Physiologically based computational approach to camouflage and masking patterns

    NASA Astrophysics Data System (ADS)

    Irvin, Gregg E.; Dowler, Michael G.

    1992-09-01

    A computational system was developed to integrate both Fourier image processing techniques and biologically based image processing techniques. The Fourier techniques allow the spatially global manipulation of phase and amplitude spectra. The biologically based techniques allow for spatially localized manipulation of phase, amplitude and orientation independently on multiple spatial frequency scales. These techniques combined with a large variety of basic image processing functions allow for a versatile and systematic approach to be taken toward the development of specialized patterning and visual textures. Current applications involve research for the development of 2-dimensional spatial patterning that can function as effective camouflage patterns and masking patterns for the human visual system.
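
    One of the spatially global Fourier manipulations mentioned above, keeping an image's amplitude spectrum while scrambling its phase, is easy to sketch with NumPy; the input image here is random noise standing in for a source texture.

        import numpy as np

        rng = np.random.default_rng(5)
        image = rng.random((128, 128))          # stand-in for a source texture

        # Keep the amplitude spectrum, randomize the phase spectrum, and invert to
        # obtain a new texture with similar second-order statistics but scrambled structure.
        F = np.fft.fft2(image)
        amplitude = np.abs(F)
        random_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, F.shape))
        scrambled = np.real(np.fft.ifft2(amplitude * random_phase))

        print("original vs. scrambled std:", image.std(), scrambled.std())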

  1. A pencil beam approach to proton computed tomography

    SciTech Connect

    Rescigno, Regina; Bopp, Cécile; Rousseau, Marc; Brasse, David

    2015-11-15

    Purpose: A new approach to proton computed tomography (pCT) is presented. In this approach, protons are not tracked one-by-one but a beam of particles is considered instead. The elements of the pCT reconstruction problem (residual energy and path) are redefined on the basis of this new approach. An analytical image reconstruction algorithm applicable to this scenario is also proposed. Methods: The pencil beam (PB) and its propagation in matter were modeled by making use of the generalization of the Fermi–Eyges theory to account for multiple Coulomb scattering (MCS). This model was integrated into the pCT reconstruction problem, allowing the definition of a mean beam path concept, similar to the most likely path (MLP) used in the single-particle approach. A numerical validation of the model was performed. The algorithm of filtered backprojection along MLPs was adapted to the beam-by-beam approach. The acquisition of a perfect proton scan was simulated and the data were used to reconstruct images of the relative stopping power of the phantom with the single-proton and beam-by-beam approaches. The resulting images were compared in a qualitative way. Results: The parameters of the modeled PB (mean and spread) were compared to Monte Carlo results in order to validate the model. For a water target, good agreement was found for the mean value of the distributions. As far as the spread is concerned, depth-dependent discrepancies as large as 2%–3% were found. For a heterogeneous phantom, discrepancies in the distribution spread ranged from 6% to 8%. The image reconstructed with the beam-by-beam approach showed a high level of noise compared to the one reconstructed with the classical approach. Conclusions: The PB approach to proton imaging may allow technical challenges imposed by the current proton-by-proton method to be overcome. In this framework, an analytical algorithm is proposed. Further work will involve a detailed study of the performances and limitations of this approach.

  2. Exploiting Self-organization in Bioengineered Systems: A Computational Approach.

    PubMed

    Davis, Delin; Doloman, Anna; Podgorski, Gregory J; Vargis, Elizabeth; Flann, Nicholas S

    2017-01-01

    The productivity of bioengineered cell factories is limited by inefficiencies in nutrient delivery and waste and product removal. Current solution approaches explore changes in the physical configurations of the bioreactors. This work investigates the possibilities of exploiting self-organizing vascular networks to support producer cells within the factory. A computational model simulates de novo vascular development of endothelial-like cells and the resultant network functioning to deliver nutrients and extract product and waste from the cell culture. Microbial factories with vascular networks are evaluated for their scalability, robustness, and productivity compared to the cell factories without a vascular network. Initial studies demonstrate that at least an order of magnitude increase in production is possible, the system can be scaled up, and the self-organization of an efficient vascular network is robust. The work suggests that bioengineered multicellularity may offer efficiency improvements difficult to achieve with physical engineering approaches.

  3. Identification of Protein–Excipient Interaction Hotspots Using Computational Approaches

    PubMed Central

    Barata, Teresa S.; Zhang, Cheng; Dalby, Paul A.; Brocchini, Steve; Zloh, Mire

    2016-01-01

    Protein formulation development relies on the selection of excipients that inhibit protein–protein interactions preventing aggregation. Empirical strategies involve screening many excipient and buffer combinations using force degradation studies. Such methods do not readily provide information on intermolecular interactions responsible for the protective effects of excipients. This study describes a molecular docking approach to screen and rank interactions allowing for the identification of protein–excipient hotspots to aid in the selection of excipients to be experimentally screened. Previously published work with Drosophila Su(dx) was used to develop and validate the computational methodology, which was then used to determine the formulation hotspots for Fab A33. Commonly used excipients were examined and compared to the regions in Fab A33 prone to protein–protein interactions that could lead to aggregation. This approach could provide information on a molecular level about the protective interactions of excipients in protein formulations to aid the more rational development of future formulations. PMID:27258262

  4. A Computational Approach for Identifying Synergistic Drug Combinations

    PubMed Central

    Gayvert, Kaitlyn M.; Aly, Omar; Bosenberg, Marcus W.; Stern, David F.; Elemento, Olivier

    2017-01-01

    A promising alternative to address the problem of acquired drug resistance is to rely on combination therapies. Identification of the right combinations is often accomplished through trial and error, a labor and resource intensive process whose scale quickly escalates as more drugs can be combined. To address this problem, we present a broad computational approach for predicting synergistic combinations using easily obtainable single drug efficacy, no detailed mechanistic understanding of drug function, and limited drug combination testing. When applied to mutant BRAF melanoma, we found that our approach exhibited significant predictive power. Additionally, we validated previously untested synergy predictions involving anticancer molecules. As additional large combinatorial screens become available, this methodology could prove to be impactful for identification of drug synergy in context of other types of cancers. PMID:28085880

  5. Exploiting Self-organization in Bioengineered Systems: A Computational Approach

    PubMed Central

    Davis, Delin; Doloman, Anna; Podgorski, Gregory J.; Vargis, Elizabeth; Flann, Nicholas S.

    2017-01-01

    The productivity of bioengineered cell factories is limited by inefficiencies in nutrient delivery and waste and product removal. Current solution approaches explore changes in the physical configurations of the bioreactors. This work investigates the possibilities of exploiting self-organizing vascular networks to support producer cells within the factory. A computational model simulates de novo vascular development of endothelial-like cells and the resultant network functioning to deliver nutrients and extract product and waste from the cell culture. Microbial factories with vascular networks are evaluated for their scalability, robustness, and productivity compared to the cell factories without a vascular network. Initial studies demonstrate that at least an order of magnitude increase in production is possible, the system can be scaled up, and the self-organization of an efficient vascular network is robust. The work suggests that bioengineered multicellularity may offer efficiency improvements difficult to achieve with physical engineering approaches. PMID:28503548

  6. Computational inference of gene regulatory networks: Approaches, limitations and opportunities.

    PubMed

    Banf, Michael; Rhee, Seung Y

    2017-01-01

    Gene regulatory networks lie at the core of cell function control. In E. coli and S. cerevisiae, the study of gene regulatory networks has led to the discovery of regulatory mechanisms responsible for the control of cell growth, differentiation and responses to environmental stimuli. In plants, computational rendering of gene regulatory networks is gaining momentum, thanks to the recent availability of high-quality genomes and transcriptomes and development of computational network inference approaches. Here, we review current techniques, challenges and trends in gene regulatory network inference and highlight challenges and opportunities for plant science. We provide plant-specific application examples to guide researchers in selecting methodologies that suit their particular research questions. Given the interdisciplinary nature of gene regulatory network inference, we tried to cater to both biologists and computer scientists to help them engage in a dialogue about concepts and caveats in network inference. Specifically, we discuss problems and opportunities in heterogeneous data integration for eukaryotic organisms and common caveats to be considered during network model evaluation. This article is part of a Special Issue entitled: Plant Gene Regulatory Mechanisms and Networks, edited by Dr. Erich Grotewold and Dr. Nathan Springer.

  7. Learning about modes of speciation by computational approaches.

    PubMed

    Becquet, Céline; Przeworski, Molly

    2009-10-01

    How often do the early stages of speciation occur in the presence of gene flow? To address this enduring question, a number of recent papers have used computational approaches, estimating parameters of simple divergence models from multilocus polymorphism data collected in closely related species. Applications to a variety of species have yielded extensive evidence for migration, with the results interpreted as supporting the widespread occurrence of parapatric speciation. Here, we conduct a simulation study to assess the reliability of such inferences, using a program that we recently developed, MIMAR (MCMC estimation of the isolation-migration model allowing for recombination), as well as the isolation-migration (IM) program of Hey and Nielsen (2004). We find that when one of many assumptions of the isolation-migration model is violated, the methods tend to yield biased estimates of the parameters, potentially lending spurious support for allopatric or parapatric divergence. More generally, our results highlight the difficulty in drawing inferences about modes of speciation from the existing computational approaches alone.

  8. Computational Approach to Dendritic Spine Taxonomy and Shape Transition Analysis

    PubMed Central

    Bokota, Grzegorz; Magnowska, Marta; Kuśmierczyk, Tomasz; Łukasik, Michał; Roszkowska, Matylda; Plewczynski, Dariusz

    2016-01-01

    The common approach in morphological analysis of dendritic spines of mammalian neuronal cells is to categorize spines into subpopulations based on whether they are stubby, mushroom, thin, or filopodia shaped. The corresponding cellular models of synaptic plasticity, long-term potentiation, and long-term depression associate the synaptic strength with either spine enlargement or spine shrinkage. Although a variety of automatic spine segmentation and feature extraction methods were developed recently, no approaches allowing for an automatic and unbiased distinction between dendritic spine subpopulations and detailed computational models of spine behavior exist. We propose an automatic and statistically based method for the unsupervised construction of spine shape taxonomy based on arbitrary features. The taxonomy is then utilized in the newly introduced computational model of behavior, which relies on transitions between shapes. Models of different populations are compared using supplied bootstrap-based statistical tests. We compared two populations of spines at two time points. The first population was stimulated with long-term potentiation, and the other in the resting state was used as a control. The comparison of shape transition characteristics allowed us to identify the differences between population behaviors. Although some extreme changes were observed in the stimulated population, statistically significant differences were found only when whole models were compared. The source code of our software is freely available for non-commercial use. Contact: d.plewczynski@cent.uw.edu.pl. PMID:28066226

  9. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    NASA Astrophysics Data System (ADS)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has been mostly considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on Principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work within an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

  10. Computational Approaches for Translational Oncology: Concepts and Patents.

    PubMed

    Scianna, Marco; Munaron, Luca

    2016-01-01

    Cancer is a heterogeneous disease, which is based on an intricate network of processes at different spatiotemporal scales, from the genome to the tissue level. Hence the necessity for biomedical and pharmaceutical research to work in a multiscale fashion. In this respect, significant help derives from collaboration with the theoretical sciences. Mathematical models can in fact provide insights into tumor-related processes and support clinical oncologists in the design of treatment regimen, dosage, schedule and toxicity. The main objective of this article is to review the recent computational-based patents which tackle some relevant aspects of tumor treatment. We first analyze a series of patents concerning the purposing or repurposing of anti-tumor compounds. These approaches rely on pharmacokinetics and pharmacodynamics modules that incorporate data obtained in the different phases of clinical trials. Similar methods are also at the basis of other patents included in this paper, which deal with treatment optimization, in terms of maximizing therapy efficacy while minimizing side effects on the host. A group of patents predicting drug response and tumor evolution by the use of kinetics graphs is commented on as well. We finally focus on patents that implement informatics tools to map and screen biological, medical, and pharmaceutical knowledge. Despite promising aspects (and a growing body of related literature), we found few computational-based patents: significant effort is still needed to allow modelling approaches to become an integral component of pharmaceutical research.

  11. Computational approaches to understand cardiac electrophysiology and arrhythmias

    PubMed Central

    Roberts, Byron N.; Yang, Pei-Chi; Behrens, Steven B.; Moreno, Jonathan D.

    2012-01-01

    Cardiac rhythms arise from electrical activity generated by precisely timed opening and closing of ion channels in individual cardiac myocytes. These impulses spread throughout the cardiac muscle to manifest as electrical waves in the whole heart. Regularity of electrical waves is critically important since they signal the heart muscle to contract, driving the primary function of the heart to act as a pump and deliver blood to the brain and vital organs. When electrical activity goes awry during a cardiac arrhythmia, the pump does not function, the brain does not receive oxygenated blood, and death ensues. For more than 50 years, mathematically based models of cardiac electrical activity have been used to improve understanding of basic mechanisms of normal and abnormal cardiac electrical function. Computer-based modeling approaches to understand cardiac activity are uniquely helpful because they allow for distillation of complex emergent behaviors into the key contributing components underlying them. Here we review the latest advances and novel concepts in the field as they relate to understanding the complex interplay between electrical, mechanical, structural, and genetic mechanisms during arrhythmia development at the level of ion channels, cells, and tissues. We also discuss the latest computational approaches to guiding arrhythmia therapy. PMID:22886409
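
    Mathematically based models of cardiac electrical activity of the kind reviewed above are systems of coupled ODEs for membrane potential and gating or recovery variables. As a minimal stand-in (not a physiological ion-channel model), the two-variable FitzHugh-Nagumo caricature of an excitable cell can be integrated as follows:

        import numpy as np
        from scipy.integrate import solve_ivp

        # FitzHugh-Nagumo: v mimics membrane potential, w is a slow recovery variable.
        def fhn(t, y, a=0.7, b=0.8, tau=12.5, I=0.5):
            v, w = y
            return [v - v**3 / 3 - w + I, (v + a - b * w) / tau]

        sol = solve_ivp(fhn, (0, 200), [-1.0, 1.0], max_step=0.1)
        print("membrane-potential-like variable at end of run:", sol.y[0, -1])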

  12. Scalable, massively parallel approaches to upstream drainage area computation

    NASA Astrophysics Data System (ADS)

    Richardson, A.; Hill, C. N.; Perron, T.

    2011-12-01

    Accumulated drainage area maps of large regions are required for several applications. Among these are assessments of regional patterns of flow and sediment routing, high-resolution landscape evolution models in which drainage basin geometry evolves with time, and surveys of the characteristics of river basins that drain to continental margins. The computation of accumulated drainage areas is accomplished by inferring the vector field of drainage flow directions from a two-dimensional digital elevation map, and then computing the integrated upstream area that drains to each tile of the map. Generally this last step is done with a recursive algorithm that accumulates upstream areas sequentially. The inherently serial nature of this restricts the number of tiles that can be included, thereby limiting the resolution of continental-size domains, because the memory requirement rises proportionally to the number of tiles, N, and the computing time is O(N^2). The fundamentally sequential character of this approach also prohibits effective use of large-scale parallelism. An alternative method of calculating accumulated drainage area from drainage direction data can be arrived at by reformulating the problem as the solution of a system of simultaneous linear equations. The equations express the relation that the total upslope area of a particular tile is the sum of the upslope areas of all immediately adjacent tiles that drain to it, plus the tile's own area. Solving these equations amounts to finding the solution of a sparse, nine-diagonal matrix equation whose right-hand side is simply the vector of individual tile areas and whose diagonals are determined by the landscape geometry. We show how an iterative method, Bi-CGSTAB, can be used to solve this problem in a scalable, massively parallel manner. However, this introduces
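
    A small sketch of the linear-system formulation described above: the accumulated area a satisfies a = tile_area + D·a, where D encodes which tile drains into which, so (I - D)·a = tile_area can be handed to an iterative solver such as Bi-CGSTAB. The five-tile chain below is a toy example, not a real digital elevation model.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import bicgstab

        # Toy drainage geometry: a 1-D chain of five tiles, each draining to its right neighbor.
        n = 5
        tile_area = np.ones(n)

        # D[i, j] = 1 when tile j drains directly into tile i.
        rows, cols = np.arange(1, n), np.arange(0, n - 1)
        D = sp.csr_matrix((np.ones(n - 1), (rows, cols)), shape=(n, n))

        # Accumulated area a satisfies a = tile_area + D a, i.e. (I - D) a = tile_area.
        A = sp.identity(n, format="csr") - D
        a, info = bicgstab(A, tile_area)
        print("accumulated drainage areas:", a)   # expected: [1, 2, 3, 4, 5]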

  13. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
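
    The risk measure mentioned above can be estimated directly from the Monte Carlo output of any candidate parametric strategy; a minimal sketch with synthetic cost samples (the distribution is arbitrary, not a calibrated market model):

        import numpy as np

        rng = np.random.default_rng(6)

        # Hypothetical Monte Carlo output: total execution costs of one parametric
        # strategy over many simulated price paths.
        costs = rng.lognormal(mean=0.0, sigma=0.4, size=100_000)

        alpha = 0.95
        var = np.quantile(costs, alpha)                 # Value-at-Risk at level alpha
        cvar = costs[costs >= var].mean()               # CVaR: mean cost beyond the VaR

        print(f"expected cost: {costs.mean():.3f}, VaR_95: {var:.3f}, CVaR_95: {cvar:.3f}")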

  14. Computer-Assisted Drug Formulation Design: Novel Approach in Drug Delivery.

    PubMed

    Metwally, Abdelkader A; Hathout, Rania M

    2015-08-03

    We hypothesize that, by using several chemo/bio informatics tools and statistical computational methods, we can study and then predict the behavior of several drugs in model nanoparticulate lipid and polymeric systems. Accordingly, two different matrices comprising tripalmitin, a core component of solid lipid nanoparticles (SLN), and PLGA were first modeled using molecular dynamics simulation, and then the interaction of drugs with these systems was studied by means of computing the free energy of binding using the molecular docking technique. These binding energies were hence correlated with the loadings of these drugs in the nanoparticles obtained experimentally from the available literature. The obtained relations were verified experimentally in our laboratory using curcumin as a model drug. Artificial neural networks were then used to establish the effect of the drugs' molecular descriptors on the binding energies and hence on the drug loading. The results showed that these soft computing methods can provide an accurate means for in silico prediction of drug loading in tripalmitin-based and PLGA nanoparticulate systems. These results have the prospect of being applied to other nano drug-carrier systems, and this integrated statistical and chemo/bio informatics approach offers a new toolbox to formulation science by proposing what we present as computer-assisted drug formulation design (CADFD).

  15. A Resampling Based Approach to Optimal Experimental Design for Computer Analysis of a Complex System

    SciTech Connect

    Rutherford, Brian

    1999-08-04

    The investigation of a complex system is often performed using computer-generated response data supplemented by system and component test results where possible. Analysts rely on an efficient use of limited experimental resources to test the physical system, evaluate the models, and assure (to the extent possible) that the models accurately simulate the system under investigation. The general problem considered here is one where only a restricted number of system simulations (or physical tests) can be performed to provide additional data necessary to accomplish the project objectives. The levels of variables used for defining input scenarios, for setting system parameters and for initializing other experimental options must be selected in an efficient way. The use of computer algorithms to support experimental design in complex problems has been a topic of recent research in the areas of statistics and engineering. This paper describes a resampling-based approach to formulating this design. An example is provided illustrating in two dimensions how the algorithm works and indicating its potential on larger problems. The results show that the proposed approach has the characteristics desirable of an algorithmic approach on these simple examples. Further experimentation is needed to evaluate its performance on larger problems.

  16. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    PubMed

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also the applied filtering process itself can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question of whether a numerically stable subgrid-scale model can be ideally accurate.

  17. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  18. A New Approach on Computing Free Core Nutation

    NASA Astrophysics Data System (ADS)

    Zhang, Mian; Huang, Chengling

    2015-04-01

    Free core nutation (FCN) is a rotational mode of the Earth related to the non-alignment of the rotation axes of the core and of the mantle. The FCN period computed by traditional theoretical methods with PREM is near 460 days, while precise observations (VLBI + SG tides) indicate it should be near 430 days. To close this large gap, astronomers and geophysicists have proposed various assumptions, e.g., increasing the core-mantle-boundary (CMB) flattening by about 5%, a strong coupling between nutation and the geomagnetic field near the CMB, viscous coupling, or topographic coupling. Do we really need these unproven assumptions, or does the problem lie with the traditional theoretical methods themselves? Earth models (e.g., PREM) provide accurate and robust profiles of physical parameters, such as density and the Lamé parameters, but their radial derivatives, which all traditional methods also use to calculate normal modes (e.g., FCN), nutation and tides of the non-rigid Earth theoretically, are not as trustworthy as the parameters themselves. A new stratified Galerkin method is proposed and applied to the computation of rotational modes to avoid these problems. This new method can handle not only a first-order ellipsoidal model but also irregular, asymmetric 3D Earth models. Our primary result for the FCN period is 435 sidereal days.

  19. Understanding auditory distance estimation by humpback whales: a computational approach.

    PubMed

    Mercado, E; Green, S R; Schneider, J N

    2008-02-01

    Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
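    The sketch below mimics the study's comparison of a multi-layer versus a single-layer perceptron on distance classification from frequency content, using synthetic spectra with an assumed frequency-dependent attenuation law in place of the recorded propagation data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def synth_spectrum(distance_km, n_bands=32):
    """Toy received spectrum: higher frequencies attenuate faster with range
    (a stand-in for the measured propagation data used in the study)."""
    freqs = np.linspace(0.1, 4.0, n_bands)            # kHz
    attenuation = np.exp(-0.15 * freqs * distance_km) # frequency-dependent loss (assumed)
    return attenuation + rng.normal(0, 0.02, n_bands)

distances = rng.choice([1, 2, 4, 8], size=400)        # distance classes in km
X = np.array([synth_spectrum(d) for d in distances])
y = distances

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
single_layer = Perceptron(max_iter=2000, random_state=0)
print("multi-layer :", cross_val_score(mlp, X, y, cv=5).mean())
print("single-layer:", cross_val_score(single_layer, X, y, cv=5).mean())
```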

  20. A computational approach to studying ageing at the individual level

    PubMed Central

    Mourão, Márcio A.; Schnell, Santiago; Pletcher, Scott D.

    2016-01-01

    The ageing process is actively regulated throughout an organism's life, but studying the rate of ageing in individuals is difficult with conventional methods. Consequently, ageing studies typically make biological inference based on population mortality rates, which often do not accurately reflect the probabilities of death at the individual level. To study the relationship between individual and population mortality rates, we integrated in vivo switch experiments with in silico stochastic simulations to elucidate how carefully designed experiments allow key aspects of individual ageing to be deduced from group mortality measurements. As our case study, we used the recent report demonstrating that pheromones of the opposite sex decrease lifespan in Drosophila melanogaster by reversibly increasing population mortality rates. We showed that the population mortality reversal following pheromone removal was almost surely occurring in individuals, albeit more slowly than suggested by population measures. Furthermore, heterogeneity among individuals due to the inherent stochasticity of behavioural interactions skewed population mortality rates in middle-age away from the individual-level trajectories of which they are comprised. This article exemplifies how computational models function as important predictive tools for designing wet-laboratory experiments to use population mortality rates to understand how genetic and environmental manipulations affect ageing in the individual. PMID:26865300

  1. a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

    In this work, the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues is examined. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and, then, categorize their severity. The described approach targets minimal human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches in crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo-matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and the use of the modified census transformation for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.
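    For the stereo-matching ingredient, a plain (unmodified) census transform can be sketched as follows; the window size and the simple per-pixel Hamming cost are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def census_transform(img, window=5):
    """Plain census transform: encode each pixel by comparing it with the other
    pixels in its window; set bits mark neighbours darker than the centre."""
    r = window // 2
    h, w = img.shape
    padded = np.pad(img, r, mode="edge")
    desc = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            desc = (desc << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return desc

def hamming_cost(desc_left, desc_right):
    """Per-pixel Hamming distance between census descriptors (the matching cost)."""
    x = desc_left ^ desc_right
    return np.array([bin(int(v)).count("1") for v in x.ravel()]).reshape(x.shape)

# Toy usage: a random image matched against a one-pixel-shifted copy of itself.
rng = np.random.default_rng(0)
left = rng.random((32, 32))
right = np.roll(left, 1, axis=1)
print(hamming_cost(census_transform(left), census_transform(right)).mean())
```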

  2. Toward an accurate quantification in atom probe tomography reconstruction by correlative electron tomography approach on nanoporous materials.

    PubMed

    Mouton, Isabelle; Printemps, Tony; Grenier, Adeline; Gambacorti, Narciso; Pinna, Elisa; Tiddia, Mariavitalia; Vacca, Annalisa; Mula, Guido

    2017-11-01

    In this contribution, we propose a protocol for the analysis and accurate reconstruction of nanoporous materials by atom probe tomography (APT). The presence of numerous holes in porous materials makes both direct APT analysis and reconstruction almost inaccessible. In the past, a solution was proposed in which the pores are filled by electron beam-induced deposition. Here, we present an alternative solution using an electrochemical method that allows even small and dense pores to be filled, making APT analysis possible. Concerning the 3D reconstruction, the microstructural features observed by electron tomography are used to finely calibrate the APT reconstruction parameters. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Computational Approaches for Microalgal Biofuel Optimization: A Review

    PubMed Central

    Chaiboonchoe, Amphun

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms and primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  4. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms and primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  5. Suggested Approaches to the Measurement of Computer Anxiety.

    ERIC Educational Resources Information Center

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  6. A technique for evaluating bone ingrowth into 3D printed, porous Ti6Al4V implants accurately using X-ray micro-computed tomography and histomorphometry.

    PubMed

    Palmquist, Anders; Shah, Furqan A; Emanuelsson, Lena; Omar, Omar; Suska, Felicia

    2017-03-01

    This paper investigates the application of X-ray micro-computed tomography (micro-CT) to accurately evaluate bone formation within 3D printed, porous Ti6Al4V implants manufactured using Electron Beam Melting (EBM), retrieved after six months of healing in sheep femur and tibia. All samples were scanned twice (i.e., before and after resin embedding), using fast, low-resolution scans (Skyscan 1172; Bruker micro-CT, Kontich, Belgium), and were analysed by 2D and 3D morphometry. The main questions posed were: (i) Can low resolution, fast scans provide morphometric data of bone formed inside (and around) metal implants with a complex, open-pore architecture? (ii) Can micro-CT be used to accurately quantify both the bone area (BA) and bone-implant contact (BIC)? (iii) What degree of error is introduced in the quantitative data by varying the threshold values? and (iv) Does resin embedding influence the accuracy of the analysis? To validate the accuracy of micro-CT measurements, each data set was correlated with a corresponding centrally cut histological section. The results show that quantitative histomorphometry corresponds strongly with 3D measurements made by micro-CT, where a high correlation exists between the two techniques for bone area/volume measurements around and inside the porous network. In contrast, the direct bone-implant contact is challenging to estimate accurately or reproducibly. Large errors may be introduced in micro-CT measurements when segmentation is performed without calibrating the data set against a corresponding histological section. Generally, the bone area measurement is strongly influenced by the lower threshold limit, while the upper threshold limit has little or no effect. Resin embedding does not compromise the accuracy of micro-CT measurements, although there is a change in the contrast distributions and optimisation of the threshold ranges is required.
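    The sensitivity of the bone area (BA) measurement to the lower segmentation threshold can be illustrated with a toy grey-value window sweep on a synthetic slice (a real analysis would load the reconstructed scan and calibrate against histology, as described above):

```python
import numpy as np

def bone_area_fraction(ct_slice, lower, upper):
    """Fraction of pixels classified as bone for a given grey-value window."""
    bone = (ct_slice >= lower) & (ct_slice <= upper)
    return bone.mean()

# Hypothetical 8-bit micro-CT slice; in practice this comes from the scan data.
rng = np.random.default_rng(0)
ct_slice = rng.integers(0, 256, size=(512, 512))

upper = 255
for lower in (90, 100, 110, 120):       # vary the lower threshold only
    print(f"lower={lower:3d}  BA fraction={bone_area_fraction(ct_slice, lower, upper):.3f}")
```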

  7. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    PubMed

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor.
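    A schematic of the machine-learning surrogate idea, with synthetic geometric features and a crude stand-in for the physics-based FFR targets; the actual model, feature set, and training database are far richer than this sketch.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for anatomical features along a coronary centerline.
rng = np.random.default_rng(0)
n = 5000
stenosis = rng.uniform(0.0, 0.9, n)        # fractional area reduction
length = rng.uniform(2.0, 30.0, n)         # lesion length [mm]
radius = rng.uniform(1.0, 2.5, n)          # reference lumen radius [mm]
distance = rng.uniform(10.0, 120.0, n)     # distance from the ostium [mm]
X = np.column_stack([stenosis, length, radius, distance])

# Crude illustrative "physics" target: FFR drops with severity and lesion length.
ffr = np.clip(1.0 - 0.6 * stenosis**2 - 0.002 * length / radius
              + rng.normal(0, 0.01, n), 0.3, 1.0)

X_train, X_test, y_train, y_test = train_test_split(X, ffr, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print("R^2 =", model.score(X_test, y_test))
print("sensitivity of FFR<=0.80 call:",
      ((pred <= 0.8) & (y_test <= 0.8)).sum() / max((y_test <= 0.8).sum(), 1))
```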

  8. Creation of an idealized nasopharynx geometry for accurate computational fluid dynamics simulations of nasal airflow in patient-specific models lacking the nasopharynx anatomy.

    PubMed

    A T Borojeni, Azadeh; Frank-Ito, Dennis O; Kimbell, Julia S; Rhee, John S; Garcia, Guilherme J M

    2016-08-15

    Virtual surgery planning based on computational fluid dynamics (CFD) simulations has the potential to improve surgical outcomes for nasal airway obstruction patients, but the benefits of virtual surgery planning must outweigh the risks of radiation exposure. Cone beam computed tomography (CT) scans represent an attractive imaging modality for virtual surgery planning due to lower costs and lower radiation exposures compared with conventional CT scans. However, to minimize the radiation exposure, the cone beam CT sinusitis protocol sometimes images only the nasal cavity, excluding the nasopharynx. The goal of this study was to develop an idealized nasopharynx geometry for accurate representation of outlet boundary conditions when the nasopharynx geometry is unavailable. Anatomically accurate models of the nasopharynx created from 30 CT scans were intersected with planes rotated at different angles to obtain an average geometry. Cross sections of the idealized nasopharynx were approximated as ellipses with cross-sectional areas and aspect ratios equal to the average in the actual patient-specific models. CFD simulations were performed to investigate whether nasal airflow patterns were affected when the CT-based nasopharynx was replaced by the idealized nasopharynx in 10 nasal airway obstruction patients. Despite the simple form of the idealized geometry, all biophysical variables (nasal resistance, airflow rate, and heat fluxes) were very similar in the idealized vs patient-specific models. The results confirmed the expectation that the nasopharynx geometry has a minimal effect in the nasal airflow patterns during inspiration. The idealized nasopharynx geometry will be useful in future CFD studies of nasal airflow based on medical images that exclude the nasopharynx.
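    The elliptical cross sections of the idealized geometry follow directly from the averaged cross-sectional area and aspect ratio; a short helper (with hypothetical numbers) shows the arithmetic:

```python
import numpy as np

def ellipse_axes(area, aspect_ratio):
    """Semi-axes (a, b) of an ellipse with the given area and aspect ratio a/b.

    area = pi * a * b and aspect_ratio = a / b, so
    a = sqrt(area * aspect_ratio / pi) and b = a / aspect_ratio.
    """
    a = np.sqrt(area * aspect_ratio / np.pi)
    return a, a / aspect_ratio

# Hypothetical averaged cross section: 230 mm^2 area with aspect ratio 1.6.
a, b = ellipse_axes(230.0, 1.6)
print(f"semi-axes: a = {a:.1f} mm, b = {b:.1f} mm")
```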

  9. Computational Diagnostic: A Novel Approach to View Medical Data.

    SciTech Connect

    Mane, K. K.; Börner, K.

    2007-01-01

    A transition from traditional paper-based medical records to electronic health records is largely underway. The use of electronic records offers tremendous potential to personalize patient diagnosis and treatment. In this paper, we discuss a computational diagnostic tool that uses digital medical records to help doctors gain better insight into a patient's medical condition. The paper details different interactive features of the tool which offer the potential to practice evidence-based medicine and advance patient diagnosis practices. The healthcare industry is a constantly evolving domain. Research from this domain is often translated into a better understanding of different medical conditions. This new knowledge often contributes towards improved diagnosis and treatment solutions for patients. But the healthcare industry lags behind in seizing the immediate benefits of this new knowledge, as it still adheres to the traditional paper-based approach to keeping track of medical records. Recently, however, there has been a drive promoting a transition towards the electronic health record (EHR). An EHR stores patient medical records in digital format and offers the potential to replace paper health records. Earlier EHR attempts replicated the paper layout on the screen, represented the medical history of a patient in a graphical time-series format, or provided interactive visualization with 2D/3D images generated from an imaging device. But an EHR can be much more than just an 'electronic view' of the paper record or a collection of images from an imaging device. In this paper, we present an EHR called 'Computational Diagnostic Tool', which provides a novel computational approach to looking at patient medical data. The developed EHR system is knowledge driven and acts as a clinical decision support tool. The EHR tool provides two visual views of the medical data. Dynamic interaction with data is supported to help doctors practice evidence-based decisions and make judicious choices about patient

  10. Computational approaches to substrate-based cell motility

    NASA Astrophysics Data System (ADS)

    Ziebert, Falko; Aranson, Igor S.

    2016-07-01

    Substrate-based crawling motility of eukaryotic cells is essential for many biological functions, both in developing and mature organisms. Motility dysfunctions are involved in several life-threatening pathologies such as cancer and metastasis. Motile cells are also a natural realisation of active, self-propelled 'particles', a popular research topic in nonequilibrium physics. Finally, from the materials perspective, assemblies of motile cells and evolving tissues constitute a class of adaptive self-healing materials that respond to the topography, elasticity and surface chemistry of the environment and react to external stimuli. Although a comprehensive understanding of substrate-based cell motility remains elusive, progress has been achieved recently in its modelling on the whole-cell level. Here we survey the most recent advances in computational approaches to cell movement and demonstrate how these models improve our understanding of complex self-organised systems such as living cells.

  11. Computer Modeling of Violent Intent: A Content Analysis Approach

    SciTech Connect

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  12. Computational approaches to substrate-based cell motility

    DOE PAGES

    Ziebert, Falko; Aranson, Igor S.

    2016-07-15

    Substrate-based crawling motility of eukaryotic cells is essential for many biological functions, both in developing and mature organisms. Motility dysfunctions are involved in several life-threatening pathologies such as cancer and metastasis. Motile cells are also a natural realization of active, self-propelled ‘particles’, a popular research topic in nonequilibrium physics. Finally, from the materials perspective, assemblies of motile cells and evolving tissues constitute a class of adaptive self-healing materials that respond to the topography, elasticity, and surface chemistry of the environment and react to external stimuli. Although a comprehensive understanding of substrate-based cell motility remains elusive, progress has been achieved recently in its modeling on the whole-cell level. Here we survey the most recent advances in computational approaches to cell movement and demonstrate how these models improve our understanding of complex self-organized systems such as living cells.

  13. Computational approaches to substrate-based cell motility

    SciTech Connect

    Ziebert, Falko; Aranson, Igor S.

    2016-07-15

    Substrate-based crawling motility of eukaryotic cells is essential for many biological functions, both in developing and mature organisms. Motility dysfunctions are involved in several life-threatening pathologies such as cancer and metastasis. Motile cells are also a natural realization of active, self-propelled ‘particles’, a popular research topic in nonequilibrium physics. Finally, from the materials perspective, assemblies of motile cells and evolving tissues constitute a class of adaptive self-healing materials that respond to the topography, elasticity, and surface chemistry of the environment and react to external stimuli. Although a comprehensive understanding of substrate-based cell motility remains elusive, progress has been achieved recently in its modeling on the whole-cell level. Here we survey the most recent advances in computational approaches to cell movement and demonstrate how these models improve our understanding of complex self-organized systems such as living cells.

  14. Advancing risk assessment of engineered nanomaterials: application of computational approaches.

    PubMed

    Gajewicz, Agnieszka; Rasulev, Bakhtiyor; Dinadayalane, Tandabany C; Urbaszek, Piotr; Puzyn, Tomasz; Leszczynska, Danuta; Leszczynski, Jerzy

    2012-12-01

    Nanotechnology, which develops novel materials at sizes of 100 nm or less, has become one of the most promising areas of human endeavor. Because of their intrinsic properties, nanoparticles are commonly employed in electronics, photovoltaics, catalysis, environmental and space engineering, the cosmetic industry and - finally - in medicine and pharmacy. In that sense, nanotechnology creates great opportunities for the progress of modern medicine. However, recent studies have shown evident toxicity of some nanoparticles to living organisms (toxicity) and their potentially negative impact on environmental ecosystems (ecotoxicity). A lack of available data and the low adequacy of experimental protocols prevent comprehensive risk assessment. The purpose of this review is to present the current state of knowledge related to the risks of engineered nanoparticles and to assess the potential for efficient expansion and development of new approaches offered by the application of theoretical and computational methods applicable for the evaluation of nanomaterials. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Systems approaches to computational modeling of the oral microbiome

    PubMed Central

    Dimitrov, Dimiter V.

    2013-01-01

    Current microbiome research has generated tremendous amounts of data providing snapshots of molecular activity in a variety of organisms, environments, and cell types. However, turning this knowledge into whole-system-level understanding of pathways and processes has proven to be a challenging task. In this review we highlight the applicability of bioinformatics and visualization techniques to large collections of data in order to better understand the information they contain on diet-oral microbiome-host mucosal transcriptome interactions. In particular, we focus on systems biology of Porphyromonas gingivalis in the context of high-throughput computational methods tightly integrated with translational systems medicine. These approaches have applications for both basic research, where we can direct specific laboratory experiments in model organisms and cell cultures, and human disease, where we can validate new mechanisms and biomarkers for prevention and treatment of chronic disorders. PMID:23847548

  16. A General Computational Approach for Repeat Protein Design

    PubMed Central

    Parmeggiani, Fabio; Huang, Po-Ssu; Vorobiev, Sergey; Xiao, Rong; Park, Keunwan; Caprari, Silvia; Su, Min; Jayaraman, Seetharaman; Mao, Lei; Janjua, Haleema; Montelione, Gaetano T.; Hunt, John; Baker, David

    2014-01-01

    Repeat proteins have considerable potential for use as modular binding reagents or biomaterials in biomedical and nanotechnology applications. Here we describe a general computational method for building idealized repeats that integrates available family sequences and structural information with Rosetta de novo protein design calculations. Idealized designs from six different repeat families were generated and experimentally characterized; 80% of the proteins were expressed and soluble and more than 40% were folded and monomeric with high thermal stability. Crystal structures determined for members of three families are within 1 Å root-mean-square deviation to the design models. The method provides a general approach for fast and reliable generation of stable modular repeat protein scaffolds. PMID:25451037

  17. Local-basis-function approach to computed tomography

    NASA Astrophysics Data System (ADS)

    Hanson, K. M.; Wecksung, G. W.

    1985-12-01

    In the local basis-function approach, a reconstruction is represented as a linear expansion of basis functions, which are arranged on a rectangular grid and possess a local region of support. The basis functions considered here are positive and may overlap. It is found that basis functions based on cubic B-splines offer significant improvements in the calculational accuracy that can be achieved with iterative tomographic reconstruction algorithms. By employing repetitive basis functions, the computational effort involved in these algorithms can be minimized through the use of tabulated values for the line or strip integrals over a single-basis function. The local nature of the basis functions reduces the difficulties associated with applying local constraints on reconstruction values, such as upper and lower limits. Since a reconstruction is specified everywhere by a set of coefficients, display of a coarsely represented image does not require an arbitrary choice of an interpolation function.

  18. Computer aided diagnosis of prostate cancer: A texton based approach

    PubMed Central

    Rampun, Andrik; Tiddeman, Bernie; Zwiggelaar, Reyer; Malcolm, Paul

    2016-01-01

    Purpose: In this paper, the authors propose a texton-based prostate computer-aided diagnosis approach which bypasses the typical feature extraction steps, such as filtering and convolution, which can be computationally expensive. The study focuses on the peripheral zone because 75% of prostate cancers start within this region and the majority of prostate cancers arising within this region are more aggressive than those arising in the transitional zone. Methods: For the model development, square patches were extracted at random locations from malignant and benign regions. Subsequently, extracted patches were aggregated and clustered using k-means clustering to generate textons that represent both regions. All textons together form a texton dictionary, which was used to construct a texton map for every peripheral zone in the training images. Based on the texton map, histogram models for each malignant and benign tissue sample were constructed and used as feature vectors to train the classifiers. In the testing phase, four machine learning algorithms were employed to classify each unknown tissue sample based on its corresponding feature vector. Results: The proposed method was tested on 418 T2-W MR images taken from 45 patients. Evaluation results show that the best three classifiers were Bayesian network (Az = 92.8% ± 5.9%), random forest (89.5% ± 7.1%), and k-NN (86.9% ± 7.5%). These results are comparable to the state-of-the-art in the literature. Conclusions: The authors have developed a prostate computer-aided diagnosis method based on textons using a single modality of T2-W MRI, without the need for the typical feature extraction methods such as filtering and convolution. The proposed method could form a solid basis for multimodality magnetic resonance imaging based systems. PMID:27782724
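    A minimal sketch of the texton pipeline described in the Methods, on synthetic patches: random patches are pooled and clustered with k-means to form a texton dictionary, and each region is then described by its texton histogram (the classifier step and all parameter values are illustrative, not those of the paper).

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image, patch_size=5, n_patches=500, rng=None):
    """Sample square patches at random locations and flatten them to vectors."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape
    ys = rng.integers(0, h - patch_size, n_patches)
    xs = rng.integers(0, w - patch_size, n_patches)
    return np.array([image[y:y + patch_size, x:x + patch_size].ravel()
                     for y, x in zip(ys, xs)])

def texton_histogram(image, kmeans, patch_size=5):
    """Map every sampled patch to its nearest texton and build the normalized
    histogram used as the feature vector for classification."""
    patches = extract_patches(image, patch_size, n_patches=2000)
    labels = kmeans.predict(patches)
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Hypothetical training: pool patches from malignant and benign regions, cluster
# them into a texton dictionary, then describe each region by its texton histogram.
rng = np.random.default_rng(0)
malignant_roi = rng.normal(0.6, 0.15, (64, 64))
benign_roi = rng.normal(0.4, 0.10, (64, 64))
all_patches = np.vstack([extract_patches(malignant_roi, rng=rng),
                         extract_patches(benign_roi, rng=rng)])
dictionary = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_patches)
print(texton_histogram(malignant_roi, dictionary))
```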

  19. A computational approach for deciphering the organization of glycosaminoglycans.

    PubMed

    Spencer, Jean L; Bernanke, Joel A; Buczek-Thomas, Jo Ann; Nugent, Matthew A

    2010-02-23

    Increasing evidence has revealed important roles for complex glycans as mediators of normal and pathological processes. Glycosaminoglycans are a class of glycans that bind and regulate the function of a wide array of proteins at the cell-extracellular matrix interface. The specific sequence and chemical organization of these polymers likely define function; however, identification of the structure-function relationships of glycosaminoglycans has been met with challenges associated with the unique level of complexity and the nontemplate-driven biosynthesis of these biopolymers. To address these challenges, we have devised a computational approach to predict fine structure and patterns of domain organization of the specific glycosaminoglycan, heparan sulfate (HS). Using chemical composition data obtained after complete and partial digestion of mixtures of HS chains with specific degradative enzymes, the computational analysis produces populations of theoretical HS chains with structures that meet both biosynthesis and enzyme degradation rules. The model performs these operations through a modular format consisting of input/output sections and three routines called chainmaker, chainbreaker, and chainsorter. We applied this methodology to analyze HS preparations isolated from pulmonary fibroblasts and epithelial cells. Significant differences in the general organization of these two HS preparations were observed, with HS from epithelial cells having a greater frequency of highly sulfated domains. Epithelial HS also showed a higher density of specific HS domains that have been associated with inhibition of neutrophil elastase. Experimental analysis of elastase inhibition was consistent with the model predictions and demonstrated that HS from epithelial cells had greater inhibitory activity than HS from fibroblasts. This model establishes the conceptual framework for a new class of computational tools to use to assess patterns of domain organization within

  20. A computational approach for identifying pathogenicity islands in prokaryotic genomes

    PubMed Central

    Yoon, Sung Ho; Hur, Cheol-Goo; Kang, Ho-Young; Kim, Yeoun Hee; Oh, Tae Kwang; Kim, Jihyun F

    2005-01-01

    Background Pathogenicity islands (PAIs), distinct genomic segments of pathogens encoding virulence factors, represent a subgroup of genomic islands (GIs) that have been acquired by horizontal gene transfer events. Up to now, computational approaches for identifying PAIs have focused on the detection of genomic regions which only differ from the rest of the genome in their base composition and codon usage. These approaches often lead to the identification of genomic islands rather than PAIs. Results We present a computational method for detecting potential PAIs in complete prokaryotic genomes by combining sequence similarities and abnormalities in genomic composition. We first collected 207 GenBank accessions containing either part or all of the reported PAI loci. In sequenced genomes, strips of PAI-homologs were defined based on the proximity of the homologs of genes in the same PAI accession. An algorithm reminiscent of a sequence-assembly procedure was then devised to merge overlapping or adjacent genomic strips into a large genomic region. Among the defined genomic regions, PAI-like regions were identified by the presence of homolog(s) of virulence genes. Also, GIs were postulated by calculating G+C content anomalies and codon usage bias. Of 148 prokaryotic genomes examined, 23 pathogenic and 6 non-pathogenic bacteria contained 77 candidate PAIs that partly or entirely overlap GIs. Conclusion Supporting the validity of our method, the list of candidate PAIs included thirty-four PAIs previously identified in genome sequencing papers. Furthermore, in some instances, our method was able to detect entire PAIs for which only partial sequences are available. Our method proved efficient for demarcating the potential PAIs in our study. Also, the function(s) and origin(s) of a candidate PAI can be inferred by investigating the PAI queries comprising it. Identification and analysis of potential PAIs in prokaryotic genomes will broaden our
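    One of the compositional signals used here, G+C content anomaly, can be sketched as a sliding-window z-score test; the window size, step, and cutoff below are illustrative assumptions rather than the paper's settings.

```python
import random
from statistics import mean, stdev

def gc_content(seq):
    """Fraction of G and C bases in a sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def gc_anomalous_windows(genome, window=5000, step=1000, z_cutoff=2.0):
    """Flag windows whose G+C content deviates from the genome-wide mean by more
    than z_cutoff standard deviations; in the paper such compositional anomalies
    are combined with virulence-gene homology to postulate GIs and PAIs."""
    starts = range(0, max(len(genome) - window, 1), step)
    values = [gc_content(genome[s:s + window]) for s in starts]
    mu = mean(values)
    sigma = stdev(values) if len(values) > 1 else 1.0
    return [(s, v) for s, v in zip(starts, values) if abs(v - mu) > z_cutoff * sigma]

# Toy genome with an implanted GC-rich segment to show how anomalies are flagged.
random.seed(0)
toy = "".join(random.choice("ACGT") for _ in range(50000))
toy = toy[:20000] + "GC" * 2500 + toy[25000:]
print(gc_anomalous_windows(toy)[:3])
```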

  1. Cloud computing approaches for prediction of ligand binding poses and pathways

    PubMed Central

    Lawrenz, Morgan; Shukla, Diwakar; Pande, Vijay S.

    2015-01-01

    We describe an innovative protocol for ab initio prediction of ligand crystallographic binding poses and highly effective analysis of large datasets generated for protein-ligand dynamics. We include a procedure for setup and performance of distributed molecular dynamics simulations on cloud computing architectures, a model for efficient analysis of simulation data, and a metric for evaluation of model convergence. We give accurate binding pose predictions for five ligands ranging in affinity from 7 nM to > 200 μM for the immunophilin protein FKBP12, for expedited results in cases where experimental structures are difficult to produce. Our approach goes beyond single, low energy ligand poses to give quantitative kinetic information that can inform protein engineering and ligand design. PMID:25608737

  2. Examples of computational approaches for elliptic, possibly multiscale PDEs with random inputs

    NASA Astrophysics Data System (ADS)

    Le Bris, Claude; Legoll, Frédéric

    2017-01-01

    We overview a series of recent works addressing numerical simulations of partial differential equations in the presence of some elements of randomness. The specific equations manipulated are linear elliptic, and arise in the context of multiscale problems, but the purpose is more general. On a set of prototypical situations, we investigate two critical issues present in many settings: variance reduction techniques to obtain sufficiently accurate results at a limited computational cost when solving PDEs with random coefficients, and finite element techniques that are sufficiently flexible to carry over to geometries with random fluctuations. Some elements of theoretical analysis and numerical analysis are briefly mentioned. Numerical experiments, although simple, provide convincing evidence of the efficiency of the approaches.

  3. Multi-parametric (ADC/PWI/T2-w) image fusion approach for accurate semi-automatic segmentation of tumorous regions in glioblastoma multiforme.

    PubMed

    Fathi Kazerooni, Anahita; Mohseni, Meysam; Rezaei, Sahar; Bakhshandehpour, Gholamreza; Saligheh Rad, Hamidreza

    2015-02-01

    Glioblastoma multiforme (GBM) brain tumor is heterogeneous in nature, so its quantification depends on how accurately the different parts of the tumor, i.e. viable tumor, edema and necrosis, are segmented. This procedure becomes more effective when metabolic and functional information, provided by physiological magnetic resonance (MR) imaging modalities such as diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI), is incorporated with the anatomical magnetic resonance imaging (MRI). In this preliminary tumor quantification work, the idea is to characterize different regions of GBM tumors in an MRI-based semi-automatic multi-parametric approach to achieve more accurate delineation of pathogenic regions. For this purpose, three MR sequences, namely T2-weighted imaging (anatomical MR imaging), PWI and DWI of thirteen GBM patients, were acquired. To enhance the delineation of the boundaries of each pathogenic region (peri-tumoral edema, viable tumor and necrosis), the spatial fuzzy C-means algorithm is combined with the region growing method. The results show that exploiting the multi-parametric approach along with the proposed semi-automatic segmentation method can differentiate various tumorous regions with over 80% sensitivity, specificity and dice score. The proposed MRI-based multi-parametric segmentation approach has the potential to accurately segment tumorous regions, leading to an efficient design of the pre-surgical treatment planning.
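    A bare-bones fuzzy C-means step on hypothetical per-voxel multi-parametric feature vectors is sketched below; in the paper this is combined with region growing and applied to real ADC/PWI/T2-w data rather than synthetic clusters.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means on feature vectors X (n_samples, n_features).

    Returns cluster centres and the fuzzy membership matrix U (n_samples, n_clusters).
    """
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))       # random memberships
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U

# Hypothetical multi-parametric feature vectors per voxel: [T2 intensity, ADC, rCBV].
rng = np.random.default_rng(1)
voxels = np.vstack([rng.normal([0.8, 0.6, 1.5], 0.05, (200, 3)),   # viable tumour
                    rng.normal([0.9, 1.2, 0.7], 0.05, (200, 3)),   # oedema
                    rng.normal([0.3, 1.5, 0.2], 0.05, (200, 3))])  # necrosis
centres, U = fuzzy_cmeans(voxels, n_clusters=3)
labels = U.argmax(axis=1)   # hard labels; seeds for a subsequent region-growing step
print(np.bincount(labels))
```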

  4. A computational approach for prediction of donor splice sites with improved accuracy.

    PubMed

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R; Wahi, S D

    2016-09-07

    Identification of splice sites is important due to their key role in predicting the exon-intron structure of protein coding genes. Though several approaches have been developed for the prediction of splice sites, further improvement in the prediction accuracy will help predict gene structure more accurately. This paper presents a computational approach for prediction of donor splice sites with higher accuracy. In this approach, true and false splice sites were first encoded into numeric vectors and then used as input in artificial neural network (ANN), support vector machine (SVM) and random forest (RF) for prediction. ANN and SVM were found to perform equally well and better than RF when tested on the HS3D and NN269 datasets. Further, the performance of ANN, SVM and RF was analyzed using an independent test set of 50 genes, and the prediction accuracy of ANN was found to be higher than that of SVM and RF. All the predictors achieved higher accuracy when compared with existing methods such as NNsplice, MEM, MDD, WMM, MM1, FSPLICE, GeneID and ASSP on the independent test set. We have also developed an online prediction server (PreDOSS), available at http://cabgrid.res.in:8080/predoss, for prediction of donor splice sites using the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
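    A toy version of the encode-then-classify scheme: candidate donor-site windows are one-hot encoded and passed to ANN, SVM, and RF classifiers. Synthetic sequences with a canonical GT dinucleotide stand in for the HS3D/NN269 data, and the window length and classifier settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

BASES = "ACGT"

def one_hot(seq):
    """Encode a nucleotide window around a candidate donor site as a numeric vector."""
    vec = np.zeros(len(seq) * 4)
    for i, b in enumerate(seq.upper()):
        if b in BASES:
            vec[i * 4 + BASES.index(b)] = 1.0
    return vec

# Hypothetical training windows: true donor sites carry the canonical GT at the
# exon/intron boundary (positions 4-5 here); false sites are random sequence.
rng = np.random.default_rng(0)
def random_seq(n): return "".join(rng.choice(list(BASES), n))
true_sites = ["CAG" + "GT" + random_seq(4) for _ in range(300)]
false_sites = [random_seq(9) for _ in range(300)]
X = np.array([one_hot(s) for s in true_sites + false_sites])
y = np.array([1] * 300 + [0] * 300)

for name, clf in [("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
                  ("SVM", SVC(kernel="rbf", random_state=0)),
                  ("RF", RandomForestClassifier(random_state=0))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```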

  5. Fast and Accurate Data Extraction for Near Real-Time Registration of 3-D Ultrasound and Computed Tomography in Orthopedic Surgery.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2015-12-01

    Automatic, accurate and real-time registration is an important step in providing effective guidance and successful anatomic restoration in ultrasound (US)-based computer assisted orthopedic surgery. We propose a method in which local phase-based bone surfaces, extracted from intra-operative US data, are registered to pre-operatively segmented computed tomography data. Extracted bone surfaces are downsampled and reinforced with high curvature features. A novel hierarchical simplification algorithm is used to further optimize the point clouds. The final point clouds are represented as Gaussian mixture models and iteratively matched by minimizing the dissimilarity between them using an L2 metric. For 44 clinical data sets from 25 pelvic fracture patients and 49 phantom data sets, we report mean surface registration accuracies of 0.31 and 0.77 mm, respectively, with an average registration time of 1.41 s. Our results suggest the viability and potential of the chosen method for real-time intra-operative registration in orthopedic surgery.
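    The core idea of representing point clouds as Gaussian mixtures and minimizing an L2 dissimilarity can be sketched in 2D with isotropic kernels and a rigid transform; the actual system uses phase-based 3D bone surfaces, hierarchical simplification, and a more elaborate optimizer, so everything below is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

def gauss_overlap(A, B, sigma):
    """Sum of pairwise Gaussian overlaps between two isotropic-kernel GMMs whose
    components are centred on the points of A and B."""
    d = A.shape[1]
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return (4 * np.pi * sigma ** 2) ** (-d / 2) * np.exp(-sq / (4 * sigma ** 2)).sum()

def l2_distance(A, B, sigma):
    """L2 distance between the two GMMs (self terms included)."""
    return gauss_overlap(A, A, sigma) - 2 * gauss_overlap(A, B, sigma) + gauss_overlap(B, B, sigma)

def register_rigid_2d(moving, fixed, sigma=1.0):
    """Find the 2D rotation + translation minimising the GMM L2 distance."""
    def transform(params, pts):
        theta, tx, ty = params
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        return pts @ R.T + np.array([tx, ty])
    res = minimize(lambda p: l2_distance(transform(p, moving), fixed, sigma),
                   x0=np.zeros(3), method="Nelder-Mead")
    return res.x, transform(res.x, moving)

# Toy example: a point cloud (standing in for a US bone surface) registered to a
# rotated, shifted copy of itself (standing in for the CT surface).
rng = np.random.default_rng(0)
fixed = rng.uniform(0, 10, (60, 2))
angle = 0.2
R = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
moving = (fixed - fixed.mean(0)) @ R.T + fixed.mean(0) + [0.5, -0.3]
params, aligned = register_rigid_2d(moving, fixed)
print("recovered (theta, tx, ty):", np.round(params, 3))
```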

  6. Computing minimal entropy production trajectories: an approach to model reduction in chemical kinetics.

    PubMed

    Lebiedz, D

    2004-04-15

    Advanced experimental techniques in chemistry and physics provide increasing access to detailed deterministic mass action models for chemical reaction kinetics. Especially in complex technical or biochemical systems the huge amount of species and reaction pathways involved in a detailed modeling approach call for efficient methods of model reduction. These should be automatic and based on a firm mathematical analysis of the ordinary differential equations underlying the chemical kinetics in deterministic models. A main purpose of model reduction is to enable accurate numerical simulations of even high dimensional and spatially extended reaction systems. The latter include physical transport mechanisms and are modeled by partial differential equations. Their numerical solution for hundreds or thousands of species within a reasonable time will exceed computer capacities available now and in a foreseeable future. The central idea of model reduction is to replace the high dimensional dynamics by a low dimensional approximation with an appropriate degree of accuracy. Here I present a global approach to model reduction based on the concept of minimal entropy production and its numerical implementation. For given values of a single species concentration in a chemical system all other species concentrations are computed under the assumption that the system is as close as possible to its attractor, the thermodynamic equilibrium, in the sense that all modes of thermodynamic forces are maximally relaxed except the one, which drives the remaining system dynamics. This relaxation is expressed in terms of minimal entropy production for single reaction steps along phase space trajectories. (c) 2004 American Institute of Physics.

  7. Computing minimal entropy production trajectories: An approach to model reduction in chemical kinetics

    NASA Astrophysics Data System (ADS)

    Lebiedz, D.

    2004-04-01

    Advanced experimental techniques in chemistry and physics provide increasing access to detailed deterministic mass action models for chemical reaction kinetics. Especially in complex technical or biochemical systems the huge amount of species and reaction pathways involved in a detailed modeling approach call for efficient methods of model reduction. These should be automatic and based on a firm mathematical analysis of the ordinary differential equations underlying the chemical kinetics in deterministic models. A main purpose of model reduction is to enable accurate numerical simulations of even high dimensional and spatially extended reaction systems. The latter include physical transport mechanisms and are modeled by partial differential equations. Their numerical solution for hundreds or thousands of species within a reasonable time will exceed computer capacities available now and in a foreseeable future. The central idea of model reduction is to replace the high dimensional dynamics by a low dimensional approximation with an appropriate degree of accuracy. Here I present a global approach to model reduction based on the concept of minimal entropy production and its numerical implementation. For given values of a single species concentration in a chemical system all other species concentrations are computed under the assumption that the system is as close as possible to its attractor, the thermodynamic equilibrium, in the sense that all modes of thermodynamic forces are maximally relaxed except the one, which drives the remaining system dynamics. This relaxation is expressed in terms of minimal entropy production for single reaction steps along phase space trajectories.

  8. A Computational Approach to Studying Protein Folding Problems Considering the Crucial Role of the Intracellular Environment.

    PubMed

    González-Pérez, Pedro P; Orta, Daniel J; Peña, Irving; Flores, Eduardo C; Ramírez, José U; Beltrán, Hiram I; Alas, Salomón J

    2017-10-01

    Intracellular protein folding (PF) is performed in a highly inhomogeneous, crowded, and correlated environment. Due to this inherent complexity, the study and understanding of PF phenomena is a fundamental issue in the field of computational systems biology. In particular, it is important to use a modeled medium that accurately reflects PF in natural systems. In the current study, we present a simulation wherein PF is carried out within an inhomogeneous modeled medium. Simulation resources included a two-dimensional hydrophobic-polar (HP) model, evolutionary algorithms, and the dual site-bond model. The dual site-bond model was used to develop an environment where HP beads could be folded. Our modeled medium included correlation lengths and fractal-like behavior, which were selected according to HP sequence lengths to induce folding in a crowded environment. Analysis of three benchmark HP sequences showed that the modeled inhomogeneous space played an important role in reaching deeper-energy folds and yielded better performance and convergence compared with homogeneous environments. Our computational approach also demonstrated that our correlated network provided a better space for PF. Thus, our approach represents a major advancement in PF simulations, not only for folding but also for understanding the functional chemical structure and physicochemical properties of proteins in crowded molecular systems, which normally occur in nature.

  9. A Comprehensive Approach for Accurate Measurement of Proton-Proton Coupling Constants in the Sugar Ring of DNA

    SciTech Connect

    Yang, Jiping; McAteer, Kathleen; Silks, Louis A.; Wu, Ruilian; Isern, Nancy G.; Unkefer, Clifford J.; Kennedy, Michael A.

    2000-10-01

    Stereo-selective deuteration has been explored in the 12 base pair DNA Dickerson sequence [d(CGCGAATTCGCG)2] as an approach for improving the accuracy of NMR-derived three-bond vicinal proton-proton coupling constants. The coupling constants are useful for DNA structure determination in restrained molecular dynamics calculations.

  10. Simultaneous Median-Radial Nerve Electrical Stimulation Revisited: An Accurate Approach to Carpal Tunnel Syndrome Diagnosis and Severity.

    PubMed

    Rodrigues, Thaís; Winckler, Pablo B; Félix-Torres, Vitor; Schestatsky, Pedro

    2016-12-01

    To assess the accuracy of an unusual test for CTS investigation and correlate it with clinical symptoms. Initially, we applied a visual analog scale for CTS discomfort (CTS-VAS) and performed a standard electrophysiologic test for CTS diagnosis (median-ulnar velocity comparison). Subsequently, a blinded neurophysiologist performed orthodromic simultaneous median-radial nerve stimulation (SMRS) at the thumb, with recording of both action potentials over the lateral aspect of the wrist. All hands (106) showed median-radial action potential splitting using the SMRS technique, from which it was possible to measure the interpeak latencies (IPLs) between the action potentials. The IPL and median nerve conduction velocity differed according to CTS intensity (Bonferroni; P < 0.001). There was a significant correlation between IPL and median nerve conduction velocity (Spearman; r = -0.51; P < 0.01). Likewise, there were significant correlations of IPL and median nerve conduction velocity with the CTS-VAS (r = 0.6 and r = -0.3, respectively). The duration and unpleasantness of the SMRS procedure were lower compared with the standard approach (Student's t test, P < 0.001 for both comparisons). Twenty-nine symptomatic patients (39 hands) who did not fulfill criteria for CTS based on the standard approach showed abnormal IPLs. The SMRS technique is a simple, sensitive, and well-tolerated approach for CTS diagnosis. Moreover, the data from SMRS correlated better with the clinical impact of CTS than the standard approach did. Therefore, this method might be useful as an adjunct to standard electrophysiologic approaches in clinical practice.
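    The rank-correlation analysis reported here takes only a few lines with SciPy; the per-hand values below are synthetic stand-ins, not the study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-hand measurements: interpeak latency (ms) from the SMRS test,
# median nerve conduction velocity (m/s), and the CTS discomfort VAS score.
rng = np.random.default_rng(0)
ipl = rng.uniform(0.2, 2.5, 106)
velocity = 55 - 8 * ipl + rng.normal(0, 2, 106)
cts_vas = np.clip(2 + 2.5 * ipl + rng.normal(0, 1.5, 106), 0, 10)

for name, x, y in [("IPL vs velocity", ipl, velocity),
                   ("IPL vs CTS-VAS", ipl, cts_vas),
                   ("velocity vs CTS-VAS", velocity, cts_vas)]:
    r, p = spearmanr(x, y)
    print(f"{name}: r = {r:+.2f}, p = {p:.1e}")
```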

  11. An Evolutionary Computation Approach to Examine Functional Brain Plasticity.

    PubMed

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, functional relationships between an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  12. An Evolutionary Computation Approach to Examine Functional Brain Plasticity

    PubMed Central

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore, functional relationships between an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  13. A systematic approach for the accurate and rapid measurement of water vapor transmission through ultra-high barrier films

    NASA Astrophysics Data System (ADS)

    Kiese, Sandra; Kücükpinar, Esra; Reinelt, Matthias; Miesbauer, Oliver; Ewender, Johann; Langowski, Horst-Christian

    2017-02-01

    Flexible organic electronic devices are often protected from degradation by encapsulation in multilayered films with very high barrier properties against moisture and oxygen. However, metrology must be improved to detect such low quantities of permeants. We therefore developed a modified ultra-low permeation measurement device based on a constant-flow carrier-gas system to measure both the transient and stationary water vapor permeation through high-performance barrier films. The accumulation of permeated water vapor before its transport to the detector allows the measurement of very low water vapor transmission rates (WVTRs) down to 2 × 10⁻⁵ g m⁻² d⁻¹. The measurement cells are stored in a temperature-controlled chamber, allowing WVTR measurements within the temperature range 23-80 °C. Differences in relative humidity can be controlled within the range 15%-90%. The WVTR values determined using the novel measurement device agree with those measured using a commercially available carrier-gas device from MOCON®. Depending on the structure and quality of the barrier film, it may take a long time for the WVTR to reach a steady-state value. However, by using a combination of the time-dependent measurement and the finite element method, we were able to estimate the steady-state WVTR accurately with significantly shorter measurement times.
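
    The abstract combines transient measurements with finite-element modelling to shorten measurement times; as a simpler illustration of the same idea, the sketch below fits the classical Fickian transient-permeation solution to a measured flux curve and reads off the steady-state WVTR before the plateau is reached. The model choice, variable names, and placeholder data are assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def fickian_flux(t, j_ss, tau, n_terms=50):
    """Transient permeation flux through a homogeneous film (Fickian model).

    j_ss : steady-state WVTR (g m^-2 d^-1)
    tau  : diffusion time constant L^2 / D, in the same units as t
    """
    n = np.arange(1, n_terms + 1)[:, None]
    series = ((-1.0) ** n) * np.exp(-(n ** 2) * np.pi ** 2 * np.asarray(t) / tau)
    return j_ss * (1.0 + 2.0 * series.sum(axis=0))

# Placeholder data standing in for a carrier-gas WVTR time series (g m^-2 d^-1).
t_meas = np.linspace(0.5, 30.0, 60)                    # days
wvtr_meas = fickian_flux(t_meas, 2e-5, 5.0)
wvtr_meas += np.random.normal(0.0, 1e-6, t_meas.size)  # detector noise

# Fit j_ss and tau; n_terms keeps its default because p0 has two entries.
(j_ss_fit, tau_fit), _ = curve_fit(fickian_flux, t_meas, wvtr_meas,
                                   p0=(wvtr_meas[-1], 1.0))
print(f"estimated steady-state WVTR: {j_ss_fit:.2e} g m^-2 d^-1")
```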

  14. Non-invasive prenatal diagnosis of achondroplasia and thanatophoric dysplasia: next-generation sequencing allows for a safer, more accurate, and comprehensive approach.

    PubMed

    Chitty, Lyn S; Mason, Sarah; Barrett, Angela N; McKay, Fiona; Lench, Nicholas; Daley, Rebecca; Jenkins, Lucy A

    2015-07-01

    Accurate prenatal diagnosis of genetic conditions can be challenging and usually requires invasive testing. Here, we demonstrate the potential of next-generation sequencing (NGS) for the analysis of cell-free DNA in maternal blood to transform prenatal diagnosis of monogenic disorders. Analysis of cell-free DNA using PCR and restriction enzyme digestion (PCR-RED) was compared with a novel NGS assay in pregnancies at risk of achondroplasia and thanatophoric dysplasia. PCR-RED was performed in 72 cases and was correct in 88.6% and inconclusive in 7%, with one false-negative result. NGS was performed in 47 cases and was accurate in 96.2%, with no inconclusive results. Both approaches were used in 27 cases, with NGS giving the correct result in the two cases that were inconclusive with PCR-RED. NGS provides an accurate, flexible approach to non-invasive prenatal diagnosis of de novo and paternally inherited mutations. It is more sensitive than PCR-RED and is ideal when screening a gene with multiple potential pathogenic mutations. These findings highlight the value of NGS in the development of non-invasive prenatal diagnosis for other monogenic disorders. © 2015 John Wiley & Sons, Ltd.
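
    A minimal sketch of the statistical core of such an NGS assay (not the authors' pipeline): a de novo or paternally inherited pathogenic allele should be effectively absent from maternal cell-free DNA unless the fetus carries it, so the mutant read count at the target position can be tested against the sequencing error rate with a one-sided binomial test. The read counts and error rate below are illustrative assumptions.

```python
from scipy.stats import binomtest

def call_fetal_mutation(mutant_reads, total_reads, error_rate=0.002, alpha=0.01):
    """One-sided binomial test: are mutant reads more frequent than noise?

    mutant_reads : reads supporting the pathogenic allele at the target site
    total_reads  : total read depth at that site in maternal plasma cfDNA
    error_rate   : assumed per-base sequencing/PCR error rate (illustrative)
    """
    result = binomtest(mutant_reads, total_reads, p=error_rate,
                       alternative="greater")
    return result.pvalue < alpha, result.pvalue

# Hypothetical example: 85 mutant reads out of 9,600 at the common
# achondroplasia position FGFR3 c.1138G>A.
affected, p_value = call_fetal_mutation(85, 9600)
print(affected, p_value)
```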

  15. Accurate lumen diameter measurement in curved vessels in carotid ultrasound: an iterative scale-space and spatial transformation approach.

    PubMed

    Krishna Kumar, P; Araki, Tadashi; Rajan, Jeny; Saba, Luca; Lavra, Francesco; Ikeda, Nobutaka; Sharma, Aditya M; Shafique, Shoaib; Nicolaides, Andrew; Laird, John R; Gupta, Ajay; Suri, Jasjit S

    2016-12-10

    Monitoring of cerebrovascular disease via carotid ultrasound has become routine. The measurement of image-based lumen diameter (LD) or inter-adventitial diameter (IAD) is a promising approach for quantifying the degree of stenosis. Manual measurements of LD/IAD are unreliable, subjective, and slow. The curvature of the vessels, along with non-uniform plaque growth, poses further challenges. This study uses a novel and generalized approach for automated LD and IAD measurement based on a combination of spatial transformation and scale-space. In this iterative procedure, the scale-space is first used to obtain the lumen axis, which is then used within a spatial image transformation paradigm to produce a transformed image. The scale-space is then reapplied to retrieve the lumen region and boundary in the transformed framework, and the inverse transformation is applied to display the results in the original image framework. B-mode ultrasound images of the left and right common carotid arteries from 202 patients (404 carotid images) were retrospectively analyzed. The algorithm was validated against two manual expert tracings. The coefficients of correlation between the automated LD measurements and the two manual tracings were 0.98 (p < 0.0001) and 0.99 (p < 0.0001), respectively. The precision of merit between the manual expert tracings and the automated system was 97.7% and 98.7%, respectively. The experimental analysis demonstrated superior performance of the proposed method over conventional approaches, and several statistical tests demonstrated the stability and reliability of the automated system.
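
    As an illustration of the spatial-transformation step (a sketch, not the authors' full scale-space pipeline): the curved vessel can be resampled along normals to an estimated lumen centerline so that it appears straightened, after which the lumen boundary, and hence the diameter, can be read off column by column. The centerline input and parameter names are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def straighten_vessel(image, centerline, half_width=40):
    """Resample a curved vessel into a straightened frame.

    image      : 2-D B-mode ultrasound image (rows, cols)
    centerline : (N, 2) array of (row, col) points along the lumen axis
    half_width : number of pixels sampled on each side of the axis
    Returns an array of shape (2*half_width + 1, N), one column per axis point.
    """
    # Unit tangent along the centerline, then the perpendicular unit normal.
    tangent = np.gradient(centerline.astype(float), axis=0)
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)

    # Sample image intensities at centerline point + offset * normal.
    offsets = np.arange(-half_width, half_width + 1)
    rows = centerline[None, :, 0] + offsets[:, None] * normal[None, :, 0]
    cols = centerline[None, :, 1] + offsets[:, None] * normal[None, :, 1]
    return map_coordinates(image, [rows, cols], order=1, mode="nearest")
```

    In the straightened frame, the near and far lumen walls can be delineated per column and the diameter taken as the distance between them; the inverse mapping then returns the boundaries to the original image framework.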

  16. Accurate and reproducible invasive breast cancer detection in whole-slide images: A Deep Learning approach for quantifying tumor extent

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, Angel; Gilmore, Hannah; Basavanhally, Ajay; Feldman, Michael; Ganesan, Shridar; Shih, Natalie N. C.; Tomaszewski, John; González, Fabio A.; Madabhushi, Anant

    2017-04-01

    With the increasing ability to routinely and rapidly digitize whole slide images with slide scanners, there has been growing interest in developing computerized image analysis algorithms for automated detection of disease extent from digital pathology images. Manual identification of the presence and extent of breast cancer by a pathologist is critical for patient management, tumor staging, and assessment of treatment response. However, this process is tedious and subject to inter- and intra-reader variability. For computerized methods to be useful as decision support tools, they need to be resilient to data acquired from different sources, with different staining and cutting protocols, and on different scanners. The objective of this study was to evaluate the accuracy and robustness of a deep learning-based method to automatically identify the extent of invasive tumor on digitized images. Here, we present a new method that employs a convolutional neural network for detecting the presence of invasive tumor on whole slide images. Our approach involves training the classifier on nearly 400 exemplars from multiple sites and scanners, and then independently validating it on almost 200 cases from The Cancer Genome Atlas. Our approach yielded a Dice coefficient of 75.86%, a positive predictive value of 71.62%, and a negative predictive value of 96.77% in a pixel-by-pixel evaluation against manually annotated regions of invasive ductal carcinoma.
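
    A minimal sketch of the pixel-wise evaluation quoted above (Dice coefficient, positive and negative predictive value), computed by thresholding a predicted tumor-probability map against a manual annotation mask; the variable names and the 0.5 threshold are assumptions.

```python
import numpy as np

def pixelwise_metrics(pred_prob, annotation, threshold=0.5):
    """Dice, PPV and NPV between a thresholded prediction and a manual mask."""
    pred = pred_prob >= threshold
    truth = annotation.astype(bool)

    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()

    dice = 2.0 * tp / (2.0 * tp + fp + fn)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return dice, ppv, npv
```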

  17. Computational approaches to selecting and optimising targets for structural biology

    PubMed Central

    Overton, Ian M.; Barton, Geoffrey J.

    2011-01-01

    Selection of protein targets for study is central to structural biology and may be influenced by numerous factors. A key aim is to maximise returns for effort invested by identifying proteins with the balance of biophysical properties that are conducive to success at all stages (e.g. solubility, crystallisation) on the route towards a high-resolution structural model. Selected targets can be optimised through construct design (e.g. to minimise protein disorder), switching to a homologous protein, and selection of experimental methodology (e.g. choice of expression system) to prime for efficient progress through the structural proteomics pipeline. Here we discuss computational techniques in target selection and optimisation, with more detailed focus on tools developed within the Scottish Structural Proteomics Facility (SSPF), namely XANNpred, ParCrys and OB-Score (target selection) and TarO (target optimisation). TarO runs a large number of algorithms, searching for homologues and annotating the pool of possible alternative targets. This pool of putative homologues is presented in a ranked, tabulated format, and the results are also visualised as an automatically generated and annotated multiple sequence alignment. The target selection algorithms each predict the propensity of a selected protein target to progress through the experimental stages leading to diffracting crystals. This single-predictor approach has advantages for target selection when compared with an approach using two or more predictors, each predicting success at a single experimental stage. The tools described here helped SSPF achieve a high (21%) success rate in progressing cloned targets to diffraction-quality crystals. PMID:21906678

  18. A Novel Approach to Segment and Classify Regional Lymph Nodes on Computed Tomography Images

    PubMed Central

    Cai, Hongmin; Cui, Chunyan; Tian, Haiying; Zhang, Min; Li, Li

    2012-01-01

    Morphology of lymph nodal metastasis is critical for the diagnosis and prognosis of cancer patients. However, accurate prediction of lymph node type based on morphological information is rarely available due to a lack of pathological validation. To obtain correct morphological information, lymph nodes must be segmented accurately from computed tomography (CT) images. In this paper we describe a novel approach to segment and predict the status of lymph nodes from CT images and confirm its diagnostic performance against clinical pathological results. We first removed noise while preserving edge details using a revised nonlinear diffusion equation, and then used a repulsive-force-based snake method to segment the lymph nodes. Morphological measurements characterizing node status were obtained from the segmented node image. These measurements were further selected to derive a highly representative description of node status, called a feature vector. Finally, a classical classification scheme based on a support vector machine model was employed to predict nodal status. Experiments on real clinical rectal cancer data showed that the predictions of the proposed framework are highly consistent with pathological results. This novel algorithm is therefore promising for status prediction of lymph nodes. PMID:23193427
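
    The denoising step is described as a revised nonlinear diffusion equation; the sketch below implements the classical Perona-Malik scheme that such revisions build on (an illustration, not the authors' exact equation). The segmented node's morphological features would then feed the support vector machine classifier mentioned in the abstract.

```python
import numpy as np

def perona_malik(image, n_iter=20, kappa=30.0, dt=0.2):
    """Classical Perona-Malik anisotropic diffusion (edge-preserving denoising).

    Differences across strong edges (larger than kappa) are weighted down by
    the edge-stopping function, so edges survive while homogeneous regions
    are smoothed.
    """
    img = image.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences towards the four neighbours.
        d_n = np.roll(img, -1, axis=0) - img
        d_s = np.roll(img, 1, axis=0) - img
        d_e = np.roll(img, -1, axis=1) - img
        d_w = np.roll(img, 1, axis=1) - img
        # Edge-stopping function g(d) = exp(-(d / kappa)^2).
        c_n = np.exp(-(d_n / kappa) ** 2)
        c_s = np.exp(-(d_s / kappa) ** 2)
        c_e = np.exp(-(d_e / kappa) ** 2)
        c_w = np.exp(-(d_w / kappa) ** 2)
        img += dt * (c_n * d_n + c_s * d_s + c_e * d_e + c_w * d_w)
    return img
```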

  19. Weighting Strategies for Single-Step Genomic BLUP: An Iterative Approach for Accurate Calculation of GEBV and GWAS

    PubMed Central

    Zhang, Xinyue; Lourenco, Daniela; Aguilar, Ignacio; Leg