Science.gov

Sample records for accurate computational approach

  1. A fast and accurate computational approach to protein ionization

    PubMed Central

    Spassov, Velin Z.; Yan, Lisa

    2008-01-01

    We report a very fast and accurate physics-based method to calculate pH-dependent electrostatic effects in protein molecules and to predict the pK values of individual sites of titration. In addition, a CHARMm-based algorithm is included to construct and refine the spatial coordinates of all hydrogen atoms at a given pH. The present method combines electrostatic energy calculations based on the Generalized Born approximation with an iterative mobile clustering approach to calculate the equilibria of proton binding to multiple titration sites in protein molecules. The use of the GBIM (Generalized Born with Implicit Membrane) CHARMm module makes it possible to model not only water-soluble proteins but membrane proteins as well. The method includes a novel algorithm for preliminary refinement of hydrogen coordinates. Another difference from existing approaches is that, instead of monopeptides, a set of relaxed pentapeptide structures are used as model compounds. Tests on a set of 24 proteins demonstrate the high accuracy of the method. On average, the RMSD between predicted and experimental pK values is close to 0.5 pK units on this data set, and the accuracy is achieved at very low computational cost. The pH-dependent assignment of hydrogen atoms also shows very good agreement with protonation states and hydrogen-bond network observed in neutron-diffraction structures. The method is implemented as a computational protocol in Accelrys Discovery Studio and provides a fast and easy way to study the effect of pH on many important mechanisms such as enzyme catalysis, ligand binding, protein–protein interactions, and protein stability. PMID:18714088
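
    For context on what such a titration calculation computes: in the limit of a single, non-interacting site, the pH-dependent protonation fraction reduces to the textbook Henderson-Hasselbalch relation (the paper's method couples many interacting sites through Generalized Born electrostatics and iterative clustering, which this sketch does not attempt). A minimal Python illustration, with an arbitrary model pK:

```python
import numpy as np

def protonation_fraction(pH, pK):
    """Henderson-Hasselbalch occupancy of a single titratable site."""
    return 1.0 / (1.0 + 10.0 ** (pH - pK))

# Titration curve for a model aspartate side chain (pK ~ 4.0, a textbook value).
pH_grid = np.linspace(0, 14, 141)
theta = protonation_fraction(pH_grid, pK=4.0)

# The pK can be read back as the pH at half-protonation.
pK_recovered = pH_grid[np.argmin(np.abs(theta - 0.5))]
print(f"recovered pK ~ {pK_recovered:.1f}")
```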

  2. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation

    SciTech Connect

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro

    2012-06-15

    % for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. Conclusions: The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide a clinically feasible approach to characterizing a kV energy spectrum for patient-specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm.

  3. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    SciTech Connect

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.

  4. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    NASA Astrophysics Data System (ADS)

    Mehmani, Yashar; Oostrom, Mart; Balhoff, Matthew T.

    2014-03-01

    Several approaches have been developed in the literature for solving flow and transport at the pore scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect-mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and validated against micromodel experiments; excellent matches were obtained across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3-D disordered granular media.

  5. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM⁺-up scheme

    SciTech Connect

    Chang, Chih-Hao (E-mail: chchang@engineering.ucsb.edu); Liou, Meng-Sing (E-mail: meng-sing.liou@grc.nasa.gov)

    2007-07-01

    In this paper, we propose a new approach to computing the compressible multifluid equations. First, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separate regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Second, the AUSM⁺ scheme, originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM⁺-up, can be applied to both liquid and gas flows. Third, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the capability to capture a wealth of detail and complicated wave patterns in flows having large disparities in fluid density and velocity, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and in underwater explosions.

  6. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
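
    The mapping decision described here rests on predicting the runtime of a candidate partition. As a minimal sketch (not the paper's model), the snippet below splits an irregular 1D workload into contiguous per-processor blocks and scores the result with a toy compute-plus-communication cost model; the cost constants t_cell and t_msg are invented placeholders.

```python
import numpy as np

def partition_1d(weights, nproc):
    """Split a 1D grid into contiguous blocks of ~equal total weight."""
    target = weights.sum() / nproc
    cuts, acc = [], 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= target and len(cuts) < nproc - 1:
            cuts.append(i + 1)
            acc = 0.0
    return np.split(np.arange(len(weights)), cuts)

def predicted_time(blocks, weights, t_cell=1e-6, t_msg=5e-5):
    # Toy model: runtime = slowest block's compute + one halo exchange.
    return max(weights[b].sum() * t_cell for b in blocks) + t_msg

work = np.abs(np.random.default_rng(0).normal(1.0, 0.3, 1000))  # irregular load
blocks = partition_1d(work, nproc=8)
print(f"predicted step time: {predicted_time(blocks, work):.2e} s")
```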

  7. Two-component density functional theory within the projector augmented-wave approach: Accurate and self-consistent computations of positron lifetimes and momentum distributions

    NASA Astrophysics Data System (ADS)

    Wiktor, Julia; Jomard, Gérald; Torrent, Marc

    2015-09-01

    Many techniques have been developed in the past in order to compute positron lifetimes in materials from first principles. However, there is still a lack of a fast and accurate self-consistent scheme that can accurately handle the forces acting on the ions induced by the presence of the positron. We show in this paper that we have reached this goal by developing two-component density functional theory within the projector augmented-wave (PAW) method in the open-source code abinit. This tool offers the accuracy of all-electron methods with the computational efficiency of plane-wave ones. We can thus deal with supercells that contain a few hundred to a few thousand atoms to study point defects as well as more extended defect clusters. Moreover, using the PAW basis set allows us to use techniques able to, for instance, treat strongly correlated systems or spin-orbit coupling, which are necessary to study heavy elements, such as the actinides or their compounds.

  8. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  9. Computing Accurate Grammatical Feedback in a Virtual Writing Conference for German-Speaking Elementary-School Children: An Approach Based on Natural Language Generation

    ERIC Educational Resources Information Center

    Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine

    2009-01-01

    We built a natural language processing (NLP) system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary…

  10. Photoacoustic computed tomography without accurate ultrasonic transducer responses

    NASA Astrophysics Data System (ADS)

    Sheng, Qiwei; Wang, Kun; Xia, Jun; Zhu, Liren; Wang, Lihong V.; Anastasio, Mark A.

    2015-03-01

    Conventional photoacoustic computed tomography (PACT) image reconstruction methods assume that the object and surrounding medium are described by a constant speed-of-sound (SOS) value. In order to accurately recover fine structures, SOS heterogeneities should be quantified and compensated for during PACT reconstruction. To address this problem, several groups have proposed hybrid systems that combine PACT with ultrasound computed tomography (USCT). In such systems, a SOS map is reconstructed first via USCT. This SOS map is then employed to inform the PACT reconstruction method. Additionally, the SOS map can provide structural information regarding tissue, which is complementary to the functional information from the PACT image. We propose a paradigm shift in the way that images are reconstructed in hybrid PACT-USCT imaging. Inspired by our observation that information about the SOS distribution is encoded in PACT measurements, we propose to jointly reconstruct the absorbed optical energy density and SOS distributions from a combined set of USCT and PACT measurements, thereby reducing the two reconstruction problems into one. This innovative approach has several advantages over conventional approaches in which PACT and USCT images are reconstructed independently: (1) variations in the SOS will automatically be accounted for, optimizing PACT image quality; (2) the reconstructed PACT and USCT images will possess minimal systematic artifacts because errors in the imaging models will be optimally balanced during the joint reconstruction; (3) due to the exploitation of information regarding the SOS distribution in the full-view PACT data, our approach will permit high-resolution reconstruction of the SOS distribution from sparse array data.

  11. Highly Accurate Inverse Consistent Registration: A Robust Approach

    PubMed Central

    Reuter, Martin; Rosas, H. Diana; Fischl, Bruce

    2010-01-01

    The registration of images is a task that is at the core of many applications in computer vision. In computational neuroimaging where the automated segmentation of brain structures is frequently used to quantify change, a highly accurate registration is necessary for motion correction of images taken in the same session, or across time in longitudinal studies where changes in the images can be expected. This paper, inspired by Nestares and Heeger (2000), presents a method based on robust statistics to register images in the presence of differences, such as jaw movement, differential MR distortions and true anatomical change. The approach we present guarantees inverse consistency (symmetry), can deal with different intensity scales and automatically estimates a sensitivity parameter to detect outlier regions in the images. The resulting registrations are highly accurate due to their ability to ignore outlier regions and show superior robustness with respect to noise, to intensity scaling and outliers when compared to state-of-the-art registration tools such as FLIRT (in FSL) or the coregistration tool in SPM. PMID:20637289

  12. Efficient and accurate computation of generalized singular-value decompositions

    NASA Astrophysics Data System (ADS)

    Drmac, Zlatko

    2001-11-01

    We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy. This means that we seek a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings and the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full-rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D_1 B, D_2 S D_3, D_4 C, where D_i, i = 1,...,4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H_1,K)-SVD of S.

  13. A fast approach for accurate content-adaptive mesh generation.

    PubMed

    Yang, Yongyi; Wernick, Miles N; Brankov, Jovan G

    2003-01-01

    Mesh modeling is an important problem with many applications in image processing. A key issue in mesh modeling is how to generate a mesh structure that well represents an image by adapting to its content. We propose a new approach to mesh generation, which is based on a theoretical result derived on the error bound of a mesh representation. In the proposed method, the classical Floyd-Steinberg error-diffusion algorithm is employed to place mesh nodes in the image domain so that their spatial density varies according to the local image content. Delaunay triangulation is next applied to connect the mesh nodes. The result of this approach is that fine mesh elements are placed automatically in regions of the image containing high-frequency features while coarse mesh elements are used to represent smooth areas. The proposed algorithm is noniterative, fast, and easy to implement. Numerical results demonstrate that, at very low computational cost, the proposed approach can produce mesh representations that are more accurate than those produced by several existing methods. Moreover, it is demonstrated that the proposed algorithm performs well with images of various kinds, even in the presence of noise. PMID:18237961
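
    The pipeline described above is simple enough to sketch directly. The snippet below is a minimal, illustrative implementation of the two stages named in the abstract, Floyd-Steinberg error diffusion for content-adaptive node placement followed by Delaunay triangulation, run on a synthetic density map; it is not the authors' code and omits their error-bound derivation.

```python
import numpy as np
from scipy.spatial import Delaunay

def fs_place_nodes(density):
    """Floyd-Steinberg error diffusion: turn a [0,1] density map into node sites."""
    d = density.astype(float).copy()
    h, w = d.shape
    nodes = []
    for y in range(h):
        for x in range(w):
            old = d[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            if new == 1.0:
                nodes.append((x, y))
            err = old - new                      # diffuse the quantization error
            if x + 1 < w:               d[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     d[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               d[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: d[y + 1, x + 1] += err * 1 / 16
    return np.array(nodes, float)

# Synthetic "image content": dense mesh where the local feature density is high.
yy, xx = np.mgrid[0:64, 0:64]
density = 0.02 + 0.3 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
nodes = fs_place_nodes(density)
mesh = Delaunay(nodes)                           # connect nodes into triangles
print(len(nodes), "nodes,", len(mesh.simplices), "triangles")
```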

  14. Accurate Langevin approaches to simulate Markovian channel dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Yandong; Rüdiger, Sten; Shuai, Jianwei

    2015-12-01

    The stochasticity of ion-channel dynamics is significant for physiological processes on neuronal cell membranes. Microscopic simulations of ion-channel gating with Markov chains can be considered an accurate standard. However, such Markovian simulations are computationally demanding for membrane areas of physiologically relevant size, which makes the noise-approximating, or Langevin equation, methods advantageous in many cases. In this review, we discuss the Langevin-like approaches, including the channel-based and simplified subunit-based stochastic differential equations proposed by Fox and Lu, and the effective Langevin approaches in which colored noise is added to deterministic differential equations. Within the framework of Fox and Lu's classical models, several variants of numerical algorithms, which have recently been developed to improve accuracy as well as efficiency, are also discussed. By comparing different simulation algorithms for ion-channel noise with the standard Markovian simulation, we aim to reveal the extent to which the existing Langevin-like methods approximate results obtained with Markovian methods. Open questions for future studies are also discussed.
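
    As a concrete example of the class of methods under review, the sketch below integrates a Fox-Lu-style Langevin equation for the open fraction x of N two-state channels, dx = [a(1-x) - bx]dt + sqrt([a(1-x) + bx]/N) dW, by Euler-Maruyama; the rate constants and channel count are assumed values, not taken from the review.

```python
import numpy as np

# Two-state channel C <-> O with opening rate alpha and closing rate beta;
# x is the open fraction of N channels (Fox-Lu-style Langevin approximation).
alpha, beta, N = 0.5, 0.2, 1000   # ms^-1, ms^-1, channel count (assumed values)
dt, steps = 0.01, 20000
rng = np.random.default_rng(1)

x = alpha / (alpha + beta)        # start at the deterministic steady state
trace = np.empty(steps)
for i in range(steps):
    drift = alpha * (1 - x) - beta * x
    diff = np.sqrt(max(alpha * (1 - x) + beta * x, 0.0) / N)
    x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    x = min(max(x, 0.0), 1.0)     # clip unphysical excursions
    trace[i] = x

print(f"mean open fraction {trace.mean():.3f} (det. {alpha/(alpha+beta):.3f})")
print(f"std {trace.std():.4f} vs binomial {np.sqrt(alpha*beta/(alpha+beta)**2/N):.4f}")
```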

  15. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in a rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture, and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The laser wavelength is 488 nm, and the laser scanning is managed by a light-polarization method using two galvanometer scanner mirrors. The system improves resolution in the light-axis (namely z) direction because of the confocal optics. Sampling is performed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequency were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. The measurements show that the reduction in aperture differs from place to place owing to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes in hydraulic conductivity related to aperture variation at different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increasing normal stress and different values of
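
    To illustrate the spectral-analysis step, the sketch below computes an FFT amplitude spectrum of a synthetic roughness line profile sampled at the 2.5 μm spacing quoted above; the profile itself is invented, and a real analysis would use the CLSM scan-line heights.

```python
import numpy as np

# Synthetic surface-roughness line profile sampled every 2.5 um, standing in
# for a CLSM scan line; heights in micrometers (illustrative values only).
dx = 2.5                                   # sampling interval, um
x = np.arange(0, 2500, dx)
profile = (4.0 * np.sin(2 * np.pi * x / 500.0)                     # waviness
           + 0.5 * np.random.default_rng(2).normal(size=x.size))   # fine roughness

# One-sided amplitude spectrum: low spatial frequencies should dominate.
spec = np.fft.rfft(profile - profile.mean())
freq = np.fft.rfftfreq(x.size, d=dx)       # cycles per um
amp = 2.0 * np.abs(spec) / x.size
print("dominant wavelength ~", 1.0 / freq[np.argmax(amp[1:]) + 1], "um")
```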

  16. Neutron supermirrors: an accurate theory for layer thickness computation

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2001-11-01

    We present a new theory for the computation of Super-Mirror stacks, using accurate formulas derived from the classical optics field. Approximations are introduced into the computation, but at a later stage than existing theories, providing a more rigorous treatment of the problem. The final result is a continuous thickness stack, whose properties can be determined at the outset of the design. We find that the well-known fourth power dependence of number of layers versus maximum angle is (of course) asymptotically correct. We find a formula giving directly the relation between desired reflectance, maximum angle, and number of layers (for a given pair of materials). Note: The author of this article, a classical opticist, has limited knowledge of the Neutron world, and begs forgiveness for any shortcomings, erroneous assumptions and/or misinterpretation of previous authors' work on the subject.

  17. Accurate Computation of Survival Statistics in Genome-Wide Studies

    PubMed Central

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J.; Upfal, Eli

    2015-01-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival times following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data, where the standard log-rank test leads to many false-positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false-positive associations as more significant than these known associations. PMID:25950620
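
    For orientation, the snippet below runs the standard asymptotic log-rank test, via the lifelines package, on a deliberately unbalanced synthetic cohort of the kind the abstract warns about; ExaLT itself is not reproduced here, and all data are simulated.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)

# Highly unbalanced groups, as when few patients carry a somatic mutation.
t_mut = rng.exponential(20.0, size=6)       # 6 mutated patients
t_wt = rng.exponential(35.0, size=200)      # 200 wild-type patients
obs_mut = rng.random(6) < 0.8               # some survival times are censored
obs_wt = rng.random(200) < 0.8

res = logrank_test(t_mut, t_wt, event_observed_A=obs_mut, event_observed_B=obs_wt)
# The p-value below comes from the asymptotic chi-square approximation, which
# is exactly what the abstract argues can mislead for such unbalanced groups.
print(f"chi2 = {res.test_statistic:.2f}, asymptotic p = {res.p_value:.3g}")
```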

  18. Direct computation of parameters for accurate polarizable force fields

    SciTech Connect

    Verstraelen, Toon; Vandenbrande, Steven; Ayers, Paul W.

    2014-11-21

    We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.

  19. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  20. Computational vaccinology: quantitative approaches.

    PubMed

    Flower, Darren R; McSparron, Helen; Blythe, Martin J; Zygouri, Christianna; Taylor, Debra; Guan, Pingping; Wan, Shouzhan; Coveney, Peter V; Walshe, Valerie; Borrow, Persephone; Doytchinova, Irini A

    2003-01-01

    The immune system is hierarchical and has many levels, exhibiting much emergent behaviour. However, at its heart are molecular recognition events that are indistinguishable from other types of biomacromolecular interaction. These can be addressed well by quantitative experimental and theoretical biophysical techniques, and particularly by methods from drug design. We review here our approach to computational immunovaccinology. In particular, we describe the JenPep database and two new techniques for T cell epitope prediction. One is based on quantitative structure-activity relationships (a 3D-QSAR method based on CoMSIA and another 2D method based on the Free-Wilson approach) and the other on atomistic molecular dynamics simulations using high-performance computing. JenPep (http://www.jenner.ac.uk/JenPep) is a relational database system supporting quantitative data on peptide binding to major histocompatibility complexes, TAP transporters, TCR-pMHC complexes, and an annotated list of B cell and T cell epitopes. Our 2D-QSAR method factors the contribution to peptide binding from individual amino acids as well as 1-2 and 1-3 residue interactions. In the 3D-QSAR approach, the influence of five physicochemical properties (volume, electrostatic potential, hydrophobicity, hydrogen-bond donor and acceptor abilities) on peptide affinity was considered. Both methods are exemplified through their application to the well-studied problem of peptide binding to the human class I MHC molecule HLA-A*0201. PMID:14712934
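
    A minimal sketch of the Free-Wilson idea mentioned above: binding affinity is modeled as a sum of per-position residue contributions, fit by (regularized) linear regression on indicator variables. The peptides and affinities below are invented demo values, and the paper's method additionally includes 1-2 and 1-3 residue interaction terms.

```python
import numpy as np
from sklearn.linear_model import Ridge

AA = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(peptide):
    """Free-Wilson style encoding: one indicator per (position, residue)."""
    v = np.zeros(len(peptide) * len(AA))
    for i, aa in enumerate(peptide):
        v[i * len(AA) + AA.index(aa)] = 1.0
    return v

# Hypothetical 9-mer peptides with invented binding affinities (pIC50).
peptides = ["ALAKAAAAV", "GLAKAAAAL", "ALAKAAAAM", "SLAKAAAAV", "ALFKAAAAV"]
pIC50 = np.array([6.2, 5.1, 5.8, 4.9, 6.5])          # demo values only

X = np.array([one_hot(p) for p in peptides])
model = Ridge(alpha=1.0).fit(X, pIC50)               # regularized Free-Wilson fit
print("predicted affinity:", model.predict(one_hot("ALAKAAAAI")[None, :])[0])
```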

  21. Towards a scalable and accurate quantum approach for describing vibrations of molecule–metal interfaces

    PubMed Central

    Madebene, Bruno; Ulusoy, Inga; Mancera, Luis; Scribano, Yohann; Chulkov, Sergey

    2011-01-01

    We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate and can easily be deployed on computational grids to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters. PMID:22003450

  22. A unique approach to accurately measure thickness in thick multilayers.

    PubMed

    Shi, Bing; Hiller, Jon M; Liu, Yuzi; Liu, Chian; Qian, Jun; Gades, Lisa; Wieczorek, Michael J; Marander, Albert T; Maser, Jorg; Assoufid, Lahsen

    2012-05-01

    X-ray optics called multilayer Laue lenses (MLLs) provide a promising path to focusing hard X-rays with high focusing efficiency at a resolution between 5 nm and 20 nm. MLLs consist of thousands of depth-graded thin layers. The thickness of each layer obeys the linear zone plate law. X-ray beamline tests have been performed on magnetron sputter-deposited WSi₂/Si MLLs at the Advanced Photon Source/Center for Nanoscale Materials 26-ID nanoprobe beamline. However, it is still very challenging to accurately grow each layer at the designed thickness during deposition; errors introduced during thickness measurements of thousands of layers lead to inaccurate MLL structures. Here, a new metrology approach that can accurately measure thickness by introducing regular marks on the cross section of thousands of layers using a focused ion beam is reported. This new measurement method is compared with a previous method. More accurate results are obtained using the new measurement approach. PMID:22514179
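
    The linear zone plate law mentioned above fixes every layer thickness: zone boundaries sit at r_n = sqrt(n·λ·f), so the nth layer has thickness Δr_n = r_n − r_(n−1) ≈ λf/(2r_n). A short sketch with illustrative wavelength and focal length (not values from the paper):

```python
import numpy as np

# Depth-graded layer thicknesses from the linear zone plate law r_n^2 = n*lam*f.
lam = 0.064e-9        # wavelength of ~19.3 keV X-rays, m (illustrative)
f = 2.6e-3            # focal length, m (illustrative)
n = np.arange(1, 5001)
r = np.sqrt(n * lam * f)                   # zone boundary positions
t = np.diff(np.concatenate(([0.0], r)))    # individual layer thicknesses

print(f"outermost zone width: {t[-1]*1e9:.2f} nm")   # sets focusing resolution
print(f"innermost layer:      {t[0]*1e9:.2f} nm")
```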

  23. Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues.

    PubMed

    Zhang, Hao; Zhao, Yan; Cao, Liangcai; Jin, Guofan

    2015-02-23

    We propose an algorithm based on fully computed holographic stereogram for calculating full-parallax computer-generated holograms (CGHs) with accurate depth cues. The proposed method integrates the point source algorithm and the holographic stereogram based algorithm to reconstruct the three-dimensional (3D) scenes. Precise accommodation cues and occlusion effects can be created, and computer graphics rendering techniques can be employed in the CGH generation to enhance the image fidelity. Optical experiments have been performed using a spatial light modulator (SLM) and a fabricated high-resolution hologram; the results show that our proposed algorithm can perform quality reconstructions of 3D scenes with arbitrary depth information. PMID:25836429

  24. Accurate Anharmonic IR Spectra from Integrated CC/DFT Approach

    NASA Astrophysics Data System (ADS)

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Carnimeo, Ivan; Puzzarini, Cristina

    2014-06-01

    The recent implementation of the computation of infrared (IR) intensities beyond the double-harmonic approximation [1] paved the way to routine calculations of infrared spectra for a wide set of molecular systems. Contrary to common belief, second-order perturbation theory is able to deliver results of high accuracy, provided that anharmonic resonances are properly managed [1,2]. It has already been shown for several small closed- and open-shell molecular systems that the differences between coupled cluster (CC) and DFT anharmonic wavenumbers are mainly due to the harmonic terms, paving the way to effective yet accurate hybrid CC/DFT schemes [2]. In this work we show that hybrid CC/DFT models can also be applied to the IR intensities, leading to the simulation of highly accurate, fully anharmonic IR spectra for medium-sized molecules, including ones of atmospheric interest, and showing in all cases good agreement with experiment even in the spectral ranges where non-fundamental transitions are predominant [3]. [1] J. Bloino and V. Barone, J. Chem. Phys. 136, 124108 (2012) [2] V. Barone, M. Biczysko, J. Bloino, Phys. Chem. Chem. Phys., 16, 1759-1787 (2014) [3] I. Carnimeo, C. Puzzarini, N. Tasinato, P. Stoppa, A. P. Charmet, M. Biczysko, C. Cappelli and V. Barone, J. Chem. Phys., 139, 074310 (2013)
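
    The hybrid scheme referred to above combines a CC harmonic part with a DFT anharmonic correction. In the spirit of Ref. [2] it can be written as the following worked equation (omega denotes harmonic and nu anharmonic VPT2 wavenumbers):

```latex
% Hybrid CC/DFT anharmonic wavenumbers: harmonic part at the CC level,
% anharmonic shift taken from DFT (VPT2), as described in Ref. [2].
\nu_i^{\mathrm{CC/DFT}} = \omega_i^{\mathrm{CC}} + \Delta\nu_i^{\mathrm{DFT}},
\qquad
\Delta\nu_i^{\mathrm{DFT}} = \nu_i^{\mathrm{DFT}} - \omega_i^{\mathrm{DFT}}
```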

  25. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  26. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  27. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the case of delayed information. This is because travelers prefer the route reported to be in the best condition, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
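
    A minimal sketch of the route-choice rule with a boundedly rational threshold, using an invented linear congestion model (the paper's simulations are more detailed):

```python
import numpy as np

rng = np.random.default_rng(4)
BR = 5.0                      # boundedly rational threshold (assumed units: min)
n_steps, travelers = 500, 100
t1 = t2 = 30.0                # reported travel times for routes 1 and 2
loads = []

for step in range(n_steps):
    if abs(t1 - t2) < BR:                       # indifference band: split evenly
        choose1 = rng.random(travelers) < 0.5
    else:                                       # otherwise all take the "best" route
        choose1 = np.full(travelers, t1 < t2)
    n1 = choose1.sum()
    # Toy congestion model: travel time grows linearly with load (assumed).
    t1 = 20.0 + 0.2 * n1
    t2 = 20.0 + 0.2 * (travelers - n1)
    loads.append(n1)

print(f"mean load on route 1: {np.mean(loads):.1f}, std: {np.std(loads):.1f}")
```

    Setting BR to 0 in the same loop reproduces the all-or-nothing flip-flopping between routes that the abstract attributes to "accurate" feedback.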

  28. An accurate and computationally efficient model for membrane-type circular-symmetric micro-hotplates.

    PubMed

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214

  29. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
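
    The modified-Bessel-function core of the model is easy to illustrate: on an annular membrane with linearized heat losses, the excess temperature obeys θ'' + θ'/r − θ/L² = 0, whose solution is a combination of I0 and K0 fixed by the boundary conditions. The sketch below solves this single-region case with invented geometry and healing length L; the paper's model adds multiple regions, Joule heating, and radiation terms.

```python
import numpy as np
from scipy.special import i0, k0

# Annular membrane between heater edge r_h and heat-sunk rim r_m.
# Excess temperature obeys  theta'' + theta'/r - theta/L^2 = 0,
# with general solution     theta(r) = A*I0(r/L) + B*K0(r/L).
r_h, r_m = 50e-6, 500e-6        # m (illustrative geometry)
L = 150e-6                      # m, thermal healing length (assumed value)
theta_h = 800.0                 # K above ambient at the heater edge

M = np.array([[i0(r_h / L), k0(r_h / L)],
              [i0(r_m / L), k0(r_m / L)]])
A, B = np.linalg.solve(M, np.array([theta_h, 0.0]))   # enforce both BCs

r = np.linspace(r_h, r_m, 200)
theta = A * i0(r / L) + B * k0(r / L)
print(f"excess temperature 100 um from centre: {np.interp(100e-6, r, theta):.0f} K")
```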

  30. High-performance computing and networking as tools for accurate emission computed tomography reconstruction.

    PubMed

    Passeri, A; Formiconi, A R; De Cristofaro, M T; Pupi, A; Meldolesi, U

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction, taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64×64) slices could be reconstructed from a set of 90 (64×64) projections with ten iterations of the conjugate gradient algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols, without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. PMID:9096089

  31. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date for a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we apply a multi-proxy approach to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine the effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even when using chemical abrasion) with ii) age dispersion from prolonged residence of earlier-crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  32. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach to accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well-known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special-purpose transfinite elements in conjunction with classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and the superior capability to capture the thermal stress waves induced by boundary heating.

  33. A Machine Learning Approach for Accurate Annotation of Noncoding RNAs

    PubMed Central

    Liu, Chunmei; Wang, Zhi

    2016-01-01

    Searching genomes to locate noncoding RNA genes with known secondary structure is an important problem in bioinformatics. In general, the secondary structure of a searched noncoding RNA is defined with a structure model constructed from the structural alignment of a set of sequences from its family. Computing the optimal alignment between a sequence and a structure model is the core part of an algorithm that can search genomes for noncoding RNAs. In practice, a single structure model may not be sufficient to capture all crucial features important for a noncoding RNA family. In this paper, we develop a novel machine learning approach that can efficiently search genomes for noncoding RNAs with high accuracy. During the search procedure, a sequence segment in the searched genome sequence is processed and a feature vector is extracted to represent it. Based on the feature vector, a classifier is used to determine whether the sequence segment is the searched ncRNA or not. Our testing results show that this approach is able to efficiently capture crucial features of a noncoding RNA family. Compared with existing search tools, it significantly improves the accuracy of genome annotation. PMID:26357266
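
    As a generic illustration of the classify-a-scanned-segment workflow (not the paper's structure-model-derived features), the sketch below trains a scikit-learn classifier on invented sequence segments, using simple 3-mer counts as a stand-in feature vector:

```python
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestClassifier

ALPHABET = "ACGU"
KMERS = ["".join(p) for p in product(ALPHABET, repeat=3)]

def features(seq):
    """Stand-in feature vector: 3-mer counts (the paper instead derives
    features from the family's structure model)."""
    return np.array([seq.count(k) for k in KMERS], float)

# Invented training segments: 1 = member of the ncRNA family, 0 = background.
rng = np.random.default_rng(5)
pos = ["".join(rng.choice(list("GC"), 40)) for _ in range(50)]   # GC-rich family
neg = ["".join(rng.choice(list(ALPHABET), 40)) for _ in range(50)]
X = np.array([features(s) for s in pos + neg])
y = np.array([1] * 50 + [0] * 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
window = "".join(rng.choice(list("GC"), 40))     # segment scanned from a genome
print("P(ncRNA) =", clf.predict_proba(features(window)[None, :])[0, 1])
```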

  34. Computing accurate age and distance factors in cosmology

    NASA Astrophysics Data System (ADS)

    Christiansen, Jodi L.; Siver, Andrew

    2012-05-01

    As the universe expands, astronomical observables such as brightness and angular size on the sky change in ways that differ from our simple Cartesian expectation. We show how observed quantities depend on the expansion of space and demonstrate how to calculate such quantities using the Friedmann equations. In general, solving the Friedmann equations requires numerical integration, which is easily coded in any computing language (including Excel). We use these numerical calculations in four projects that help students build their understanding of high-redshift phenomena and cosmology. Instructions for these projects are available as supplementary materials.
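
    As the abstract notes, the required integrals are easy to code. A minimal flat-ΛCDM sketch with illustrative parameter values (not taken from the paper):

```python
import numpy as np
from scipy.integrate import quad

# Flat LCDM; parameter values are illustrative.
H0 = 70.0                         # km/s/Mpc
Om, OL = 0.3, 0.7
c = 299792.458                    # km/s
H0_inv_Gyr = 1.0 / (H0 / 978.0)   # 1/H0 in Gyr (978 converts km/s/Mpc to 1/Gyr)

E = lambda z: np.sqrt(Om * (1 + z) ** 3 + OL)

def comoving_distance(z):         # D_C = (c/H0) * int_0^z dz'/E(z')   [Mpc]
    return c / H0 * quad(lambda zp: 1 / E(zp), 0, z)[0]

def age_of_universe():            # t0 = (1/H0) * int_0^inf dz/((1+z) E(z))
    return H0_inv_Gyr * quad(lambda z: 1 / ((1 + z) * E(z)), 0, np.inf)[0]

z = 1.5
dc = comoving_distance(z)
print(f"D_C(z={z}) = {dc:.0f} Mpc, D_L = {(1+z)*dc:.0f} Mpc, D_A = {dc/(1+z):.0f} Mpc")
print(f"age of universe ~ {age_of_universe():.2f} Gyr")
```

    For the values above the age comes out near 13.5 Gyr, a convenient sanity check on the integration.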

  35. Parameter Estimation of Ion Current Formulations Requires Hybrid Optimization Approach to Be Both Accurate and Reliable

    PubMed Central

    Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar

    2016-01-01

    Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today's high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms did improve the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing one to assess the often non
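
    A minimal sketch of the hybrid idea, coupling a particle swarm step with a gradient-based local refinement inside each iteration, on an invented one-current toy problem; the paper uses a trust-region-reflective local method, for which scipy's L-BFGS-B stands in here:

```python
import numpy as np
from scipy.optimize import minimize

def loss(p, t, data):
    """Squared error of a toy two-parameter current model i(t) = g*exp(-t/tau)."""
    g, tau = p
    return np.sum((g * np.exp(-t / np.maximum(tau, 1e-9)) - data) ** 2)

rng = np.random.default_rng(6)
t = np.linspace(0, 50, 200)
data = 12.0 * np.exp(-t / 8.0) + rng.normal(0, 0.2, t.size)  # synthetic trace

lo, hi = np.array([0.0, 0.1]), np.array([50.0, 50.0])
pos = rng.uniform(lo, hi, (20, 2)); vel = np.zeros_like(pos)
pbest, pval = pos.copy(), np.array([loss(p, t, data) for p in pos])

for it in range(30):
    gbest = pbest[pval.argmin()]
    # Particle swarm step (global exploration) ...
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    # ... coupled in the same iteration with a gradient-based local refinement.
    for i in range(len(pos)):
        res = minimize(loss, pos[i], args=(t, data), method="L-BFGS-B",
                       bounds=list(zip(lo, hi)), options={"maxiter": 5})
        if res.fun < pval[i]:
            pbest[i], pval[i] = res.x, res.fun

print("estimated (g, tau):", pbest[pval.argmin()])
```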

  36. An Approach for the Accurate Measurement of Social Morality Levels

    PubMed Central

    Liu, Haiyan; Chen, Xia; Zhang, Bo

    2013-01-01

    In the social sciences, computer-based modeling has become an increasingly important tool, receiving widespread attention. However, the derivation of the quantitative relationships linking individual moral behavior and social morality levels, so as to provide a useful basis for social policy-making, remains a challenge in the scholarly literature today. A quantitative measurement of morality from the perspective of complexity science constitutes an innovative attempt. Based on the NetLogo platform, this article examines the effect of various factors on social morality levels, using agents modeling moral behavior, immoral behavior, and a range of environmental social resources. Threshold values for the various parameters are obtained through sensitivity analysis, and practical solutions are proposed for reversing declines in social morality levels. The results show that: (1) population size may accelerate or impede the speed with which immoral behavior comes to determine the overall level of social morality, but it has no effect on the level of social morality itself; (2) the impact of rewards and punishment on social morality levels follows the “5:1 rewards-to-punishment rule,” that is, 5 units of reward have the same effect as 1 unit of punishment; (3) the abundance of public resources is inversely related to the level of social morality; (4) when the cost of population mobility reaches 10% of the total energy level, immoral behavior begins to be suppressed (i.e., the 1/10 moral cost rule). The research approach and methods presented in this paper successfully address the difficulties involved in measuring social morality levels and promise extensive application potential. PMID:24312189

  37. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.

  38. CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval.

    PubMed

    Karim, Rezaul; Aziz, Mohd Momin Al; Shatabda, Swakkhar; Rahman, M Sohel; Mia, Md Abul Kashem; Zaman, Farhana; Rakin, Salman

    2015-01-01

    The number of entries in structural databases of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such large databases have turned out to be key to the comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for the retrieval of proteins whose tertiary structures are similar to a query protein from a large database. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of oriented gradients (CoMOGrad) and the pyramid histogram of oriented gradients (PHOG), and the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use from this website: http://research.buet.ac.bd:8080/Comograd/. PMID:26293226

  19. CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval

    PubMed Central

    Karim, Rezaul; Aziz, Mohd. Momin Al; Shatabda, Swakkhar; Rahman, M. Sohel; Mia, Md. Abul Kashem; Zaman, Farhana; Rakin, Salman

    2015-01-01

    The number of entries in a structural database of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such a large database have turned out to be key to comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for the retrieval of proteins having tertiary structures similar to a query protein from a large database. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of the oriented gradient (CoMOGrad) and the pyramid histogram of oriented gradient (PHOG), and the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use from this website: http://research.buet.ac.bd:8080/Comograd/. PMID:26293226

  20. Development of highly accurate approximate scheme for computing the charge transfer integral

    SciTech Connect

    Pershin, Anton; Szalay, Péter G.

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy split in dimer and the fragment charge difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient at describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found to be computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
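
    The Taylor-expansion scheme can be sketched directly: evaluate the transfer integral and its derivatives once at a reference geometry, then evaluate the polynomial cheaply anywhere along the coordinate. In the sketch below the expensive EOM-CC evaluation is replaced by a stand-in analytic function; only the expansion mechanics are illustrated.

    ```python
    import numpy as np

    # Placeholder for an expensive electronic-structure calculation of J(q).
    def J_exact(q):
        return 0.10 * np.exp(-0.5 * q**2) + 0.02 * q

    # Build a second-order Taylor expansion at q = 0 via finite differences.
    h = 1e-3
    J0 = J_exact(0.0)
    J1 = (J_exact(h) - J_exact(-h)) / (2 * h)         # first derivative
    J2 = (J_exact(h) - 2 * J0 + J_exact(-h)) / h**2   # second derivative

    def J_taylor(q):
        return J0 + J1 * q + 0.5 * J2 * q**2

    for q in (0.1, 0.5, 1.0):
        print(q, J_exact(q), J_taylor(q))   # expansion vs. "exact"
    ```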

  1. Development of highly accurate approximate scheme for computing the charge transfer integral.

    PubMed

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy split in dimer and the fragment charge difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient at describing the transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found to be computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117

  2. Accurate 3-D finite difference computation of traveltimes in strongly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Noble, M.; Gesret, A.; Belayouni, N.

    2014-12-01

    Seismic traveltimes and their spatial derivatives are the basis of many imaging methods such as pre-stack depth migration and tomography. A common approach to compute these quantities is to solve the eikonal equation with a finite-difference scheme. Although many recently published algorithms for solving the eikonal equation now yield fairly accurate traveltimes for most applications, the spatial derivatives of traveltimes remain very approximate. To address this accuracy issue, we develop a new hybrid eikonal solver that combines a spherical approximation when close to the source and a plane wave approximation when far away. This algorithm properly reproduces the spherical behaviour of wave fronts in the vicinity of the source. We implement a combination of 16 local operators that enables us to handle velocity models with sharp vertical and horizontal velocity contrasts. We couple these local operators with a global fast sweeping method to take into account all possible directions of wave propagation. Our formulation allows us to introduce a variable grid spacing in all three directions of space. We demonstrate the efficiency of this algorithm in terms of computational time and the gain in accuracy of the computed traveltimes and their derivatives on several numerical examples.
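
    The global fast sweeping ingredient mentioned above is easy to illustrate. The sketch below solves the 2D eikonal equation |∇T| = s (s the slowness) with a first-order Godunov upwind update and four sweep orderings on a uniform grid; the paper's hybrid spherical/plane-wave operators and variable grid spacing are not reproduced.

    ```python
    import numpy as np

    def fast_sweep(s, h, src, n_sweeps=4):
        n = s.shape[0]
        T = np.full_like(s, 1e10)
        T[src] = 0.0
        orders = [(range(n), range(n)),
                  (range(n - 1, -1, -1), range(n)),
                  (range(n), range(n - 1, -1, -1)),
                  (range(n - 1, -1, -1), range(n - 1, -1, -1))]
        for _ in range(n_sweeps):
            for ii, jj in orders:          # sweep all four orderings
                for i in ii:
                    for j in jj:
                        a = min(T[max(i - 1, 0), j], T[min(i + 1, n - 1), j])
                        b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, n - 1)])
                        if abs(a - b) >= s[i, j] * h:
                            t = min(a, b) + s[i, j] * h
                        else:              # two-sided Godunov update
                            t = 0.5 * (a + b + np.sqrt(2 * (s[i, j] * h)**2 - (a - b)**2))
                        T[i, j] = min(T[i, j], t)
        return T

    s = np.ones((50, 50))                  # uniform slowness (velocity = 1)
    T = fast_sweep(s, h=1.0, src=(25, 25))
    print(T[25, 45])                       # ~20 grid units from the source
    ```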

  3. Computationally efficient and accurate enantioselectivity modeling by clusters of molecular dynamics simulations.

    PubMed

    Wijma, Hein J; Marrink, Siewert J; Janssen, Dick B

    2014-07-28

    Computational approaches could decrease the need for the laborious high-throughput experimental screening that is often required to improve enzymes by mutagenesis. Here, we report that using multiple short molecular dynamics (MD) simulations makes it possible to accurately model enantioselectivity for large numbers of enzyme-substrate combinations at low computational cost. We chose four different haloalkane dehalogenases as model systems because of the availability of a large set of experimental data on the enantioselective conversion of 45 different substrates. To model the enantioselectivity, we quantified the frequency of occurrence of catalytically productive conformations (near attack conformations) for pairs of enantiomers during MD simulations. We found that the angle of nucleophilic attack that leads to carbon-halogen bond cleavage was a critical variable that limited the occurrence of productive conformations; enantiomers for which this angle reached values close to 180° were preferentially converted. A cluster of 20-40 very short (10 ps) MD simulations allowed adequate conformational sampling and resulted in much better agreement with experimental enantioselectivities than single long MD simulations (22 ns), while the computational costs were 50-100-fold lower. With single long MD simulations, the dynamics of enzyme-substrate complexes remained confined to a conformational subspace that rarely changed significantly, whereas with multiple short MD simulations a larger diversity of conformations of enzyme-substrate complexes was observed. PMID:24916632
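
    The scoring step, counting near attack conformations (NACs) per enantiomer, can be sketched as a simple frequency estimate over MD frames. Angles, distances, and thresholds below are mock values chosen for illustration, not the study's geometric criteria.

    ```python
    import numpy as np

    # Sketch: a frame counts as a NAC when the nucleophilic-attack angle is
    # near 180 degrees and the attack distance is short. Thresholds here are
    # illustrative assumptions.
    def nac_fraction(attack_angles_deg, distances_A,
                     angle_min=157.0, dist_max=3.41):
        angles = np.asarray(attack_angles_deg)
        dists = np.asarray(distances_A)
        return np.mean((angles >= angle_min) & (dists <= dist_max))

    rng = np.random.default_rng(1)
    # Mock per-frame geometries for the two enantiomers.
    f_R = nac_fraction(rng.normal(165, 8, 4000), rng.normal(3.2, 0.2, 4000))
    f_S = nac_fraction(rng.normal(150, 8, 4000), rng.normal(3.3, 0.2, 4000))
    print("enantioselectivity estimate (NAC frequency ratio):", f_R / f_S)
    ```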

  4. Equilibrium gas flow computations. I - Accurate and efficient calculation of equilibrium gas properties

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    1989-01-01

    This paper treats the accurate and efficient calculation of thermodynamic properties of arbitrary gas mixtures for equilibrium flow computations. New improvements in the Stupochenko-Jaffe model for the calculation of thermodynamic properties of diatomic molecules are presented. A unified formulation of equilibrium calculations for gas mixtures in terms of irreversible entropy is given. Using a highly accurate thermo-chemical data base, a new, efficient and vectorizable search algorithm is used to construct piecewise interpolation procedures that generate the accurate thermodynamic variables and their derivatives required by modern computational algorithms. Results are presented for equilibrium air and compared with those given by the Srinivasan program.
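
    The piecewise-interpolation strategy can be illustrated with a one-dimensional lookup table: tabulate a property once on a fine grid, then answer flow-solver queries by fast interpolation. The property function below is a mock, not the paper's thermochemical database.

    ```python
    import numpy as np

    # Sketch: precompute a mock specific-heat curve on a fine temperature
    # grid, then serve queries by piecewise-linear lookup instead of
    # re-evaluating the expensive model.
    T_grid = np.linspace(200.0, 6000.0, 2000)             # temperature (K)
    cp_grid = 1000.0 + 0.2 * T_grid - 1e-5 * T_grid**2    # mock property values

    def cp_lookup(T):
        return np.interp(T, T_grid, cp_grid)

    print(cp_lookup(np.array([300.0, 1500.0, 4500.0])))
    ```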

  5. RICO: A New Approach for Fast and Accurate Representation of the Cosmological Recombination History

    NASA Astrophysics Data System (ADS)

    Fendt, W. A.; Chluba, J.; Rubiño-Martín, J. A.; Wandelt, B. D.

    2009-04-01

    We present RICO, a code designed to compute the ionization fraction of the universe during the epoch of hydrogen and helium recombination with an unprecedented combination of speed and accuracy. This is accomplished by training the machine learning code PICO on the calculations of a multilevel cosmological recombination code which self-consistently includes several physical processes that were neglected previously. After training, RICO is used to fit the free electron fraction as a function of the cosmological parameters. While, for example, at low redshifts (z ≲ 900), much of the net change in the ionization fraction can be captured by lowering the hydrogen fudge factor in RECFAST by about 3%, RICO provides a means of effectively using the accurate ionization history of the full recombination code in the standard cosmological parameter estimation framework without the need to add new or refined fudge factors or functions to a simple recombination model. Within the new approach presented here, it is easy to update RICO whenever a more accurate full recombination code becomes available. Once trained, RICO computes the cosmological ionization history with negligible fitting error in ~10 ms, a speedup of at least 10⁶ over the full recombination code that was used here. Also RICO is able to reproduce the ionization history of the full code to a level well below 0.1%, thereby ensuring that the theoretical power spectra of cosmic microwave background (CMB) fluctuations can be computed to sufficient accuracy and speed for analysis from upcoming CMB experiments like Planck. Furthermore, it will enable cross-checking different recombination codes across cosmological parameter space, a comparison that will be very important in order to assure the accurate interpretation of future CMB data.

  6. Towards the computations of accurate spectroscopic parameters and vibrational spectra for organic compounds

    NASA Astrophysics Data System (ADS)

    Hochlaf, M.; Puzzarini, C.; Senent, M. L.

    2015-07-01

    We present multi-component computations for rotational constants, vibrational and torsional levels of medium-sized molecules. Through the treatment of two organic sulphur molecules, ethyl mercaptan and dimethyl sulphide, which are relevant for atmospheric and astrophysical media, we point out the outstanding capabilities of the explicitly correlated coupled cluster (CCSD(T)-F12) method in conjunction with the cc-pVTZ-F12 basis set for the accurate prediction of such quantities. Indeed, we show that the CCSD(T)-F12/cc-pVTZ-F12 equilibrium rotational constants are in good agreement with those obtained by means of a composite scheme based on CCSD(T) calculations that accounts for the extrapolation to the complete basis set (CBS) limit and core-correlation effects [CCSD(T)/CBS+CV], thus leading to values of ground-state rotational constants rather close to the corresponding experimental data. For vibrational and torsional levels, our analysis reveals that the anharmonic frequencies derived from CCSD(T)-F12/cc-pVTZ-F12 harmonic frequencies and anharmonic corrections (Δν = ω - ν) at the CCSD/cc-pVTZ level closely agree with experimental results. The pattern of the torsional transitions and the shape of the potential energy surfaces along the torsional modes are also well reproduced using the CCSD(T)-F12/cc-pVTZ-F12 energies. Interestingly, this good accuracy is accompanied by a strong reduction of the computational costs. This makes the procedures proposed here the schemes of choice for effective and accurate prediction of spectroscopic properties of organic compounds. Finally, popular density functional approaches are compared with the coupled cluster (CC) methodologies in torsional studies. The long-range CAM-B3LYP functional of Handy and co-workers is recommended for large systems.
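
    The composite estimate of anharmonic fundamentals described above is simple arithmetic: take harmonic frequencies at the high level and the anharmonic correction Δν = ω − ν at the cheaper level. The numbers below are placeholders, not values from the paper.

    ```python
    # Sketch of the composite scheme: combine a CCSD(T)-F12/cc-pVTZ-F12
    # harmonic frequency with an anharmonic correction computed at the
    # cheaper CCSD/cc-pVTZ level. All values are hypothetical.
    omega_f12 = 2985.0   # harmonic frequency, high level (cm^-1)
    omega_cc = 3010.0    # harmonic frequency, cheap level (cm^-1)
    nu_cc = 2880.0       # anharmonic fundamental, cheap level (cm^-1)

    delta = omega_cc - nu_cc           # anharmonic correction (omega - nu)
    nu_composite = omega_f12 - delta   # estimated anharmonic fundamental
    print(f"estimated fundamental: {nu_composite:.1f} cm^-1")
    ```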

  7. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    SciTech Connect

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC

    2009-06-19

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  8. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography

    PubMed Central

    Haley, William E.; Ibrahim, El-Sayed H.; Qu, Mingliang; Cernigliaro, Joseph G.; Goldfarb, David S.; McCollough, Cynthia H.

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT. PMID:26688770

  9. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography.

    PubMed

    Haley, William E; Ibrahim, El-Sayed H; Qu, Mingliang; Cernigliaro, Joseph G; Goldfarb, David S; McCollough, Cynthia H

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT. PMID:26688770

  10. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  11. Computational approaches for predicting mutant protein stability.

    PubMed

    Kulshreshtha, Shweta; Chaudhary, Vigi; Goswami, Girish K; Mathur, Nidhi

    2016-05-01

    Mutations in a protein affect not only its structure, but also its function and stability. Accurate prediction of mutant protein stability is desired for uncovering the molecular aspects of diseases and for the design of novel proteins. Many advanced computational approaches have been developed over the years to predict the stability and function of a mutated protein. These approaches, based on structure features, sequence features, or combined features (both structure and sequence), provide reasonably accurate estimates of the impact of an amino acid substitution on the stability and function of a protein. Recently, consensus tools have been developed that incorporate many tools together and provide single-window results for comparison purposes. In this review, a useful guide is provided for the selection of tools that can be employed in predicting the stability and disease-causing capability of mutated proteins. PMID:27160393
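
    A consensus predictor of the kind mentioned above can be as simple as a vote over the signs of individual tools' predicted stability changes (ΔΔG). The sketch below uses mock tool outputs; the tool names and values are hypothetical.

    ```python
    # Sketch: majority vote over mock per-tool ddG predictions (kcal/mol);
    # negative values taken as destabilizing by convention here.
    predictions_ddg = {"tool_A": -1.2, "tool_B": -0.4, "tool_C": 0.3}

    votes_destab = sum(1 for v in predictions_ddg.values() if v < 0)
    consensus = ("destabilizing" if votes_destab > len(predictions_ddg) / 2
                 else "stabilizing")
    print(consensus, predictions_ddg)
    ```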

  12. A hybrid approach for rapid, accurate, and direct kilovoltage radiation dose calculations in CT voxel space

    SciTech Connect

    Kouznetsov, Alexei; Tambasco, Mauro

    2011-03-15

    Purpose: To develop and validate a fast and accurate method that uses computed tomography (CT) voxel data to estimate absorbed radiation dose at a point of interest (POI) or series of POIs from a kilovoltage (kV) imaging procedure. Methods: The authors developed an approach that computes absorbed radiation dose at a POI by numerically evaluating the linear Boltzmann transport equation (LBTE) using a combination of deterministic and Monte Carlo (MC) techniques. This hybrid approach accounts for material heterogeneity with a level of accuracy comparable to the general MC algorithms. Also, the dose at a POI is computed within seconds using the Intel Core i7 CPU 920 2.67 GHz quad core architecture, and the calculations are performed using CT voxel data, making it flexible and feasible for clinical applications. To validate the method, the authors constructed and acquired a CT scan of a heterogeneous block phantom consisting of a succession of slab densities: Tissue (1.29 cm), bone (2.42 cm), lung (4.84 cm), bone (1.37 cm), and tissue (4.84 cm). Using the hybrid transport method, the authors computed the absorbed doses at a set of points along the central axis and x direction of the phantom for an isotropic 125 kVp photon spectral point source located along the central axis 92.7 cm above the phantom surface. The accuracy of the results was compared to those computed with MCNP, which was cross-validated with EGSnrc, and served as the benchmark for validation. Results: The error in the depth dose ranged from -1.45% to +1.39% with a mean and standard deviation of -0.12% and 0.66%, respectively. The error in the x profile ranged from -1.3% to +0.9%, with a mean and standard deviation of -0.3% and 0.5%, respectively. The number of photons required to achieve these results was 1×10⁶. Conclusions: The voxel-based hybrid method evaluates the LBTE rapidly and accurately to estimate the absorbed x-ray dose at any POI or series of POIs from a kV imaging procedure.

  13. Aeroacoustic Flow Phenomena Accurately Captured by New Computational Fluid Dynamics Method

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    2002-01-01

    One of the challenges in the computational fluid dynamics area is the accurate calculation of aeroacoustic phenomena, especially in the presence of shock waves. One such phenomenon is "transonic resonance," where an unsteady shock wave at the throat of a convergent-divergent nozzle results in the emission of acoustic tones. The space-time Conservation-Element and Solution-Element (CE/SE) method developed at the NASA Glenn Research Center can faithfully capture the shock waves, their unsteady motion, and the generated acoustic tones. The CE/SE method is a revolutionary new approach to the numerical modeling of physical phenomena where features with steep gradients (e.g., shock waves, phase transition, etc.) must coexist with those having weaker variations. The CE/SE method does not require the complex interpolation procedures (that allow for the possibility of a shock between grid cells) used by many other methods to transfer information between grid cells. These interpolation procedures can add too much numerical dissipation to the solution process. Thus, while shocks are resolved, weaker waves, such as acoustic waves, are washed out.

  14. Accurate charge capture and cost allocation: cost justification for bedside computing.

    PubMed Central

    Grewal, R.; Reed, R. L.

    1993-01-01

    This paper shows that cost justification for bedside clinical computing can be made by recouping charges with accurate charge capture. Twelve months' worth of professional charges for a sixteen-bed surgical intensive care unit are computed from charted data in a bedside clinical database and are compared to the professional charges actually billed by the unit. A substantial difference between predicted charges and billed charges was found. This paper also discusses the concept of appropriate cost allocation in the inpatient environment and the feasibility of appropriate allocation as a by-product of bedside computing. PMID:8130444

  15. Computational Approaches to Vestibular Research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    The Biocomputation Center at NASA Ames Research Center is dedicated to a union between computational, experimental and theoretical approaches to the study of neuroscience and of life sciences in general. The current emphasis is on computer reconstruction and visualization of vestibular macular architecture in three-dimensions (3-D), and on mathematical modeling and computer simulation of neural activity in the functioning system. Our methods are being used to interpret the influence of spaceflight on mammalian vestibular maculas in a model system, that of the adult Sprague-Dawley rat. More than twenty 3-D reconstructions of type I and type II hair cells and their afferents have been completed by digitization of contours traced from serial sections photographed in a transmission electron microscope. This labor-intensive method has now been replaced by a semiautomated method developed in the Biocomputation Center in which conventional photography is eliminated. All viewing, storage and manipulation of original data is done using Silicon Graphics workstations. Recent improvements to the software include a new mesh generation method for connecting contours. This method will permit the investigator to describe any surface, regardless of complexity, including highly branched structures such as are routinely found in neurons. This same mesh can be used for 3-D, finite volume simulation of synapse activation and voltage spread on neuronal surfaces visualized via the reconstruction process. These simulations help the investigator interpret the relationship between neuroarchitecture and physiology, and are of assistance in determining which experiments will best test theoretical interpretations. Data are also used to develop abstract, 3-D models that dynamically display neuronal activity ongoing in the system. Finally, the same data can be used to visualize the neural tissue in a virtual environment. Our exhibit will depict the capabilities of our computational approaches.

  16. Fuzzy multiple linear regression: A computational approach

    NASA Technical Reports Server (NTRS)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.
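
    One way to make the treatment of fuzzy input "computational" rather than symbolic is to carry each fuzzy observation as a (center, spread) pair, fit the centers by conventional least squares, and propagate the spreads separately. The sketch below does exactly that; it is an illustration in the spirit of the approach, not the authors' formulation, and the propagation rule is a deliberately crude assumption.

    ```python
    import numpy as np

    # Mock fuzzy observations: symmetric triangular numbers (center, spread).
    X = np.array([[1.0, 0.5], [1.0, 1.0], [1.0, 1.5], [1.0, 2.0]])  # with intercept
    y_center = np.array([1.1, 1.9, 3.2, 3.9])
    y_spread = np.array([0.2, 0.3, 0.2, 0.4])

    # Conventional least squares on the centers.
    beta, *_ = np.linalg.lstsq(X, y_center, rcond=None)
    pred_center = X @ beta

    # Crude spread propagation for illustration: carry the mean spread.
    pred_spread = np.full_like(pred_center, y_spread.mean())
    print(beta, pred_center, pred_spread)
    ```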

  17. Redundancy approaches in spacecraft computers

    NASA Astrophysics Data System (ADS)

    Schonfeld, Chaim

    Twelve redundancy techniques for spacecraft computers are analyzed. The redundancy schemes include: a single unit; two active units; triple modular redundancy; N-modular redundancy (NMR); a single unit with one and two spares; two units with one, two, and three spares; triple units with one and two spares; and a single unit with a spare per module. The basic properties of these schemes are described. The reliability of each scheme is evaluated as a function of the reliability of a single unit. The redundancy schemes are compared in terms of reliability, the number of failures the system can tolerate, coverage, recovery time, and mean-time-between-failures (MTBF) improvement. The error detection and recovery systems and the random access memory redundancy of the schemes are examined. The data reveal that the single unit with a spare per module is the most effective redundancy approach; a description of the scheme is provided.
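
    Several of the listed schemes have closed-form reliabilities as a function of single-unit reliability R, which makes the comparison easy to reproduce. The sketch below assumes independent failures and perfect switching/voting; it covers only a few of the twelve schemes.

    ```python
    # Closed-form reliabilities under independent failures and ideal
    # voters/switches (textbook formulas, not values from the article).
    def single(R):        return R
    def duplex(R):        return 1 - (1 - R)**2     # two active units
    def tmr(R):           return 3*R**2 - 2*R**3    # triple modular redundancy
    def single_spare(R):  return 1 - (1 - R)**2     # one spare, ideal coverage

    for R in (0.90, 0.95, 0.99):
        print(f"R={R}: single={single(R):.4f}  duplex={duplex(R):.4f}  "
              f"TMR={tmr(R):.4f}  single+spare={single_spare(R):.4f}")
    ```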

  18. Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

    Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the system-level properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285
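
    The accept/reject core of Approximate Bayesian Computation is compact. The sketch below fits a one-parameter toy kinetic model (exponential substrate decay) by ABC rejection; the paper's framework adds thermodynamic feasibility constraints, model selection, and much more, and all values here are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0, 5, 20)
    k_true = 0.8
    data = np.exp(-k_true * t) + rng.normal(0, 0.02, t.size)  # mock observations

    def simulate(k):
        return np.exp(-k * t)   # toy kinetic model

    accepted = []
    for _ in range(20000):
        k = rng.uniform(0.0, 3.0)                        # draw from the prior
        if np.linalg.norm(simulate(k) - data) < 0.15:    # distance < tolerance
            accepted.append(k)                           # keep the sample
    print("posterior mean:", np.mean(accepted), "n =", len(accepted))
    ```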

  19. Computer-based personality judgments are more accurate than those made by humans

    PubMed Central

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-01

    Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  20. Computer-based personality judgments are more accurate than those made by humans.

    PubMed

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  1. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose from studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the use of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of color moment invariants. PMID:24216719

  2. Accurate computation of Stokes flow driven by an open immersed interface

    NASA Astrophysics Data System (ADS)

    Li, Yi; Layton, Anita T.

    2012-06-01

    We present numerical methods for computing two-dimensional Stokes flow driven by forces singularly supported along an open, immersed interface. Two second-order accurate methods are developed: one for accurately evaluating boundary integral solutions at a point, and another for computing Stokes solution values on a rectangular mesh. We first describe a method for computing singular or nearly singular integrals, such as a double layer potential due to sources on a curve in the plane, evaluated at a point on or near the curve. To improve accuracy of the numerical quadrature, we add corrections for the errors arising from discretization, which are found by asymptotic analysis. When used to solve the Stokes equations with sources on an open, immersed interface, the method generates second-order approximations, for both the pressure and the velocity, and preserves the jumps in the solutions and their derivatives across the boundary. We then combine the method with a mesh-based solver to yield a hybrid method for computing Stokes solutions at N² grid points on a rectangular grid. Numerical results are presented which exhibit second-order accuracy. To demonstrate the applicability of the method, we use the method to simulate fluid dynamics induced by the beating motion of a cilium. The method preserves the sharp jumps in the Stokes solution and their derivatives across the immersed boundary. Model results illustrate the distinct hydrodynamic effects generated by the effective stroke and by the recovery stroke of the ciliary beat cycle.

  3. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    PubMed

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cues are crucial issues in the calculation of computer-generated holograms (CGHs) for high-quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a corresponding sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
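
    The building block of the method, non-paraxial angular-spectrum propagation of one layer, is a pair of FFTs and an exact transfer function exp(i·kz·z) with kz = sqrt(k² − kx² − ky²). A minimal sketch follows; grid size, wavelength, and pixel pitch are chosen arbitrarily.

    ```python
    import numpy as np

    def angular_spectrum(u0, wavelength, dx, z):
        n = u0.shape[0]
        k = 2 * np.pi / wavelength
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))
        H = np.exp(1j * kz * z) * (kz_sq > 0)   # drop evanescent components
        return np.fft.ifft2(np.fft.fft2(u0) * H)

    u0 = np.zeros((256, 256), complex)
    u0[128, 128] = 1.0                          # point emitter on one layer
    u1 = angular_spectrum(u0, wavelength=633e-9, dx=8e-6, z=0.05)
    print(np.abs(u1).max())
    ```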

  4. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    NASA Astrophysics Data System (ADS)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity to discuss various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨u⟩/2 (⟨u⟩ is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨u⟩ can readily be computed through an MD simulation of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA finishes the computation of the water reorganization term in less than 0.1 s once the coefficients are determined, our method provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with substantial reduction of the computational load.
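
    The morphometric form reduces the reorganization term to a dot product of four geometric measures with pre-fitted coefficients, which is why it evaluates in well under 0.1 s. In the sketch below all coefficients and measures are mock numbers; in the paper the coefficients come from the energy-representation method.

    ```python
    # Sketch: water reorganization term as a linear combination of four
    # geometric measures of the solute (excluded volume V, surface area A,
    # integrated mean curvature C, integrated Gaussian curvature X).
    c = (0.25, 0.012, -0.003, 0.08)   # illustrative coefficients only

    def reorganization_term(V, A, C, X):
        return c[0]*V + c[1]*A + c[2]*C + c[3]*X

    u_mean = -5200.0                  # mock <u> from an MD run (kcal/mol)
    hfe = u_mean / 2 + reorganization_term(V=31000.0, A=10500.0,
                                           C=820.0, X=12.6)
    print("estimated HFE:", hfe)
    ```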

  5. Structural stability augmentation system design using BODEDIRECT: A quick and accurate approach

    NASA Technical Reports Server (NTRS)

    Goslin, T. J.; Ho, J. K.

    1989-01-01

    A methodology is presented for a modal suppression control law design using flight test data instead of mathematical models to obtain the required gain and phase information about the flexible airplane. This approach is referred to as BODEDIRECT. The purpose of the BODEDIRECT program is to provide a method of analyzing the modal phase relationships measured directly from the airplane. These measurements can be achieved with a frequency sweep at the control surface input while measuring the outputs of interest. The measured Bode-models can be used directly for analysis in the frequency domain, and for control law design. Besides providing a more accurate representation for the system inputs and outputs of interest, this method is quick and relatively inexpensive. To date, the BODEDIRECT program has been tested and verified for computational integrity. Its capabilities include calculation of series, parallel and loop closure connections between Bode-model representations. System PSD, together with gain and phase margins of stability may be calculated for successive loop closures of multi-input/multi-output systems. Current plans include extensive flight testing to obtain a Bode-model representation of a commercial aircraft for design of a structural stability augmentation system.
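
    The connection algebra BODEDIRECT applies to measured Bode models operates pointwise on complex frequency-response data. The sketch below demonstrates series, parallel, and unity-feedback loop closure on synthetic responses; the plant and actuator transfer functions are mock stand-ins for measured data.

    ```python
    import numpy as np

    w = np.logspace(-1, 2, 200)          # shared frequency grid (rad/s)
    G1 = 1.0 / (1j * w + 1.0)            # mock measured plant response
    G2 = 5.0 / (1j * w + 10.0)           # mock measured actuator response

    series = G1 * G2                     # series connection
    parallel = G1 + G2                   # parallel connection
    closed_loop = series / (1.0 + series)   # unity negative-feedback closure

    print("closed-loop peak gain:", np.abs(closed_loop).max())
    ```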

  6. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation.

    PubMed

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W

    2015-02-21

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient's 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry. PMID:25615567

  7. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.

  8. Palm computer demonstrates a fast and accurate means of burn data collection.

    PubMed

    Lal, S O; Smith, F W; Davis, J P; Castro, H Y; Smith, D W; Chinkes, D L; Barrow, R E

    2000-01-01

    Manual biomedical data collection and entry of the data into a personal computer is time-consuming and can be prone to errors. The purpose of this study was to compare data entry into a hand-held computer versus handwritten data followed by entry of the data into a personal computer. A Palm (3Com Palm IIIx, Santa Clara, Calif.) computer with a custom menu-driven program was used for the entry and retrieval of burn-related variables. These variables were also used to create an identical sheet that was filled in by hand. Identical data were retrieved twice from 110 charts 48 hours apart and then used to create an Excel (Microsoft, Redmond, Wash.) spreadsheet. On one pass, data were recorded by the Palm entry method; on the other, the data were handwritten. The method of retrieval was alternated between the Palm system and the handwritten system every 10 charts. The total time required to log data and to generate an Excel spreadsheet was recorded and used as a study endpoint. The total time for the Palm method of data collection and downloading to a personal computer was 23% faster than hand recording with the personal computer entry method (P < 0.05), and 58% fewer errors were generated with the Palm method. The Palm is a faster and more accurate means of data collection than a handwritten technique. PMID:11194811

  9. Novel electromagnetic surface integral equations for highly accurate computations of dielectric bodies with arbitrarily low contrasts

    SciTech Connect

    Erguel, Ozguer; Guerel, Levent

    2008-12-01

    We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.

  10. An accurate quadrature technique for the contact boundary in 3D finite element computations

    NASA Astrophysics Data System (ADS)

    Duong, Thang X.; Sauer, Roger A.

    2015-01-01

    This paper presents a new numerical integration technique for 3D contact finite element implementations, focusing on a remedy for the inaccurate integration due to discontinuities at the boundary of contact surfaces. The method is based on the adaptive refinement of the integration domain along the boundary of the contact surface, and is accordingly denoted RBQ for refined boundary quadrature. It can be used for common element types of any order, e.g. Lagrange, NURBS, or T-Spline elements. In terms of both computational speed and accuracy, RBQ exhibits great advantages over a naive increase of the number of quadrature points. Also, the RBQ method is shown to remain accurate for large deformations. Furthermore, since the sharp boundary of the contact surface is determined, it can be used for various purposes like the accurate post-processing of the contact pressure. Several examples are presented to illustrate the new technique.

  11. Machine Computation; An Algorithmic Approach.

    ERIC Educational Resources Information Center

    Gonzalez, Richard F.; McMillan, Claude, Jr.

    Designed for undergraduate social science students, this textbook concentrates on using the computer in a straightforward way to manipulate numbers and variables in order to solve problems. The text is problem oriented and assumes that the student has had little prior experience with either a computer or programing languages. An introduction to…

  12. A particle-tracking approach for accurate material derivative measurements with tomographic PIV

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Scarano, Fulvio

    2013-08-01

    The evaluation of the instantaneous 3D pressure field from tomographic PIV data relies on the accurate estimate of the fluid velocity material derivative, i.e., the velocity time rate of change following a given fluid element. To date, techniques that reconstruct the fluid parcel trajectory from a time sequence of 3D velocity fields obtained with Tomo-PIV have already been introduced. However, an accurate evaluation of the fluid element acceleration requires trajectory reconstruction over a relatively long observation time, which reduces random errors. On the other hand, simple integration and finite difference techniques suffer from increasing truncation errors when complex trajectories need to be reconstructed over a long time interval. In principle, particle-tracking velocimetry techniques (3D-PTV) enable the accurate reconstruction of single particle trajectories over a long observation time. Nevertheless, PTV can be reliably performed only at limited particle image number density due to errors caused by overlapping particles. The particle image density can be substantially increased by use of tomographic PIV. In the present study, a technique to combine the higher information density of tomographic PIV and the accurate trajectory reconstruction of PTV is proposed (Tomo-3D-PTV). The particle-tracking algorithm is applied to the tracers detected in the 3D domain obtained by tomographic reconstruction. The 3D particle information is highly sparse and intersection of trajectories is virtually impossible. As a result, ambiguities in the particle path identification over subsequent recordings are easily avoided. Polynomial fitting functions are introduced that describe the particle position in time with sequences based on several recordings, leading to the reduction in truncation errors for complex trajectories. Moreover, the polynomial regression approach provides a reduction in the random errors due to the particle position measurement. Finally, the acceleration
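
    The polynomial regression step is the key to the reduced truncation and random errors. A minimal sketch for a single tracked particle follows, with a quadratic fit over several recordings; the positions, noise level, and recording interval are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 9e-3, 10)                   # 10 recordings, dt = 1 ms
    x_true = 0.5 * 2.0 * t**2 + 0.1 * t              # constant acceleration 2 m/s^2
    x_meas = x_true + rng.normal(0, 1e-6, t.size)    # mock position noise

    # Fit x(t) ~ a2*t^2 + a1*t + a0 and differentiate the fit twice.
    coeffs = np.polyfit(t, x_meas, deg=2)
    acceleration = 2.0 * coeffs[0]                   # d^2x/dt^2 of the fit
    print("fitted acceleration:", acceleration)      # ~2.0
    ```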

  13. A novel approach for latent print identification using accurate overlays to prioritize reference prints.

    PubMed

    Gantz, Daniel T; Gantz, Donald T; Walch, Mark A; Roberts, Maria Antonia; Buscaglia, JoAnn

    2014-12-01

    A novel approach to automated fingerprint matching and scoring that produces accurate locally and nonlinearly adjusted overlays of a latent print onto each reference print in a corpus is described. The technology, which addresses challenges inherent to latent prints, provides the latent print examiner with a prioritized ranking of candidate reference prints based on the overlays of the latent onto each candidate print. In addition to supporting current latent print comparison practices, this approach can make it possible to return a greater number of AFIS candidate prints because the ranked overlays provide a substantial starting point for latent-to-reference print comparison. To provide the image information required to create an accurate overlay of a latent print onto a reference print, "Ridge-Specific Markers" (RSMs), which correspond to short continuous segments of a ridge or furrow, are introduced. RSMs are reliably associated with any specific local section of a ridge or a furrow using the geometric information available from the image. Latent prints are commonly fragmentary, with reduced clarity and limited minutiae (i.e., ridge endings and bifurcations). Even in the absence of traditional minutiae, latent prints contain very important information in their ridges that permit automated matching using RSMs. No print orientation or information beyond the RSMs is required to generate the overlays. This automated process is applied to the 88 good quality latent prints in the NIST Special Database (SD) 27. Nonlinear overlays of each latent were produced onto all of the 88 reference prints in the NIST SD27. With fully automated processing, the true mate reference prints were ranked in the first candidate position for 80.7% of the latents tested, and 89.8% of the true mate reference prints ranked in the top ten positions. After manual post-processing of those latents for which the true mate reference print was not ranked first, these frequencies increased to 90

  14. An accurate Fortran code for computing hydrogenic continuum wave functions at a wide range of parameters

    NASA Astrophysics Data System (ADS)

    Peng, Liang-You; Gong, Qihuang

    2010-12-01

    The accurate computations of hydrogenic continuum wave functions are very important in many branches of physics such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields, etc. Although there already exist various algorithms and codes, most of them are only reliable in certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances and/or large angular momentum number. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and corresponding Coulomb phase shifts at a wide range of parameters. Without any essential restriction on the angular momentum number, the present code is able to give reliable results over the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code very efficient; it should find numerous applications in many fields such as strong field physics. Program summaryProgram title: HContinuumGautchi Catalogue identifier: AEHD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1233 No. of bytes in distributed program, including test data, etc.: 7405 Distribution format: tar.gz Programming language: Fortran90 in fixed format Computer: AMD Processors Operating system: Linux RAM: 20 MBytes Classification: 2.7, 4.5 Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Although there have already existed various algorithms and codes, most of them are applicable and reliable only in a certain range of parameters. We present here an accurate FORTRAN program for

  15. Towards fast and accurate algorithms for processing fuzzy data: interval computations revisited

    NASA Astrophysics Data System (ADS)

    Xiang, Gang; Kreinovich, Vladik

    2013-02-01

    In many practical applications, we need to process data, e.g. to predict the future values of different quantities based on their current values. Often, the only information that we have about the current values comes from experts, and is described in informal ('fuzzy') terms like 'small'. To process such data, it is natural to use fuzzy techniques, techniques specifically designed by Lotfi Zadeh to handle such informal information. In this survey, we start by revisiting the motivation behind Zadeh's formulae for processing fuzzy data, and explain how the algorithmic problem of processing fuzzy data can be described in terms of interval computations (α-cuts). Many fuzzy practitioners claim 'I tried interval computations, they did not work' - meaning that they got estimates which are much wider than the desired α-cuts. We show that such statements are usually based on a (widely spread) misunderstanding - that interval computations simply mean replacing each arithmetic operation with the corresponding operation with intervals. We show that while such straightforward interval techniques indeed often lead to over-wide estimates, the current advanced interval computations techniques result in estimates which are much more accurate. We overview such advanced interval computations techniques, and show that by using them, we can efficiently and accurately process fuzzy data. We wrote this survey with three audiences in mind. First, we want fuzzy researchers and practitioners to understand the current advanced interval computations techniques and to use them to come up with faster and more accurate algorithms for processing fuzzy data. For this 'fuzzy' audience, we explain these current techniques in detail. Second, we also want interval researchers to better understand this important application area for their techniques. For this 'interval' audience, we want to explain where fuzzy techniques come from, what are possible variants of these techniques, and what are the
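
    The over-estimation produced by straightforward interval arithmetic, and how subdivision (one of the simplest refinement techniques) tightens the enclosure of an α-cut, can be shown in a few lines. The function f(x) = x(1 − x) is chosen only for illustration.

    ```python
    import numpy as np

    def f_interval(lo, hi):
        # Naive rule: [lo,hi]*(1-[lo,hi]) treats both occurrences of x as
        # independent, which inflates the resulting interval.
        a, b = lo, hi
        c, d = 1 - hi, 1 - lo
        prods = [a*c, a*d, b*c, b*d]
        return min(prods), max(prods)

    def refined(lo, hi, n=100):
        # Subdivide, apply the naive rule per piece, take the union.
        xs = np.linspace(lo, hi, n + 1)
        pieces = [f_interval(xs[i], xs[i + 1]) for i in range(n)]
        return min(p[0] for p in pieces), max(p[1] for p in pieces)

    print("naive:  ", f_interval(0.0, 1.0))   # wide enclosure (0, 1)
    print("refined:", refined(0.0, 1.0))      # close to the true range [0, 0.25]
    ```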

  16. Computer Algebra, Instrumentation and the Anthropological Approach

    ERIC Educational Resources Information Center

    Monaghan, John

    2007-01-01

    This article considers research and scholarship on the use of computer algebra in mathematics education following the instrumentation and the anthropological approaches. It outlines what these approaches are, positions them with regard to other approaches, examines tensions between the two approaches and makes suggestions for how work in this…

  17. Accurate methods for computing inviscid and viscous Kelvin-Helmholtz instability

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.; Forbes, Lawrence K.

    2011-02-01

    The Kelvin-Helmholtz instability is modelled for inviscid and viscous fluids. Here, two bounded fluid layers flow parallel to each other with the interface between them growing in an unstable fashion when subjected to a small perturbation. In the various configurations of this problem, and the related problem of the vortex sheet, there are several phenomena associated with the evolution of the interface; notably the formation of a finite time curvature singularity and the 'roll-up' of the interface. Two contrasting computational schemes will be presented. A spectral method is used to follow the evolution of the interface in the inviscid version of the problem. This allows the interface shape to be computed up to the time that a curvature singularity forms, with several computational difficulties overcome to reach that point. A weakly compressible viscous version of the problem is studied using finite difference techniques and a vorticity-streamfunction formulation. The two versions have comparable, but not identical, initial conditions and so the results exhibit some differences in timing. By including a small amount of viscosity the interface may be followed to the point that it rolls up into a classic 'cat's-eye' shape. Particular attention was given to computing a consistent initial condition and solving the continuity equation both accurately and efficiently.

  18. Suite of finite element algorithms for accurate computation of soft tissue deformation for surgical simulation

    PubMed Central

    Joldes, Grand Roman; Wittek, Adam; Miller, Karol

    2008-01-01

    Real time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast, accurate and can handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used and addressing common problems such as hourglass control and locking. The computation examples presented prove that by using these algorithms, real time computations become possible without sacrificing the accuracy of the results. For a brain model having more than 7000 degrees of freedom, we computed the reaction forces due to indentation at a frequency of around 1000 Hz using a standard dual core PC. Similarly, we conducted simulation of brain shift using a model with more than 50 000 degrees of freedom in less than a minute. The speed benefits of our models result from combining the Total Lagrangian formulation with explicit time integration and low order finite elements. PMID:19152791
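    The combination named in the last sentence - explicit time integration on a Total Lagrangian mesh - can be sketched compactly. The toy below is a hedged, one-dimensional stand-in (a linear mass-spring chain with central-difference stepping), not the authors' nonlinear finite element code; all parameters are invented.

      import numpy as np

      # Central-difference explicit stepping on a 1-D chain: the update
      # u_{t+dt} = 2 u_t - u_{t-dt} + dt^2 * f(u_t) / m needs no matrix
      # solve, which is the source of the speed the abstract describes.
      n, k, m, dt, steps = 20, 100.0, 0.01, 1e-4, 5000
      u = np.zeros(n); u[-1] = 0.05        # prescribed initial "indentation"
      u_prev = u.copy()                    # zero initial velocity

      for _ in range(steps):
          stretch = np.diff(u)             # spring elongations
          f = np.zeros(n)
          f[:-1] += k * stretch            # internal forces on left nodes
          f[1:]  -= k * stretch            # ...equal and opposite on right
          u_new = 2*u - u_prev + dt**2 * f / m
          u_new[0] = 0.0                   # fixed boundary node
          u, u_prev = u_new, u

      print(u[-1])                         # displacement of the loaded node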

  19. Accurate technique for complete geometric calibration of cone-beam computed tomography systems.

    PubMed

    Cho, Youngbin; Moseley, Douglas J; Siewerdsen, Jeffrey H; Jaffray, David A

    2005-04-01

    Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems based on this approach have been constructed using a clinical linear accelerator (Elekta Synergy RP) and an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems, and is essential for accurate image reconstruction. We have developed a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry. Twelve BBs are spaced evenly at 30 deg in two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm were analyzed. The calibration algorithm estimates geometric parameters with a high level of accuracy such that the quality of CT reconstruction is not degraded by the error of estimation. Sensitivity analysis shows uncertainty of 0.01 degrees (around beam direction) to 0.3 degrees (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (beam direction) in position for the medical linear accelerator geometry. Experimental measurements using a laboratory bench Cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry with an uncertainty of 0

  20. Simple but accurate GCM-free approach for quantifying anthropogenic climate change

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.

    2014-12-01

    We are so used to analysing the climate with the help of giant computer models (GCMs) that it is easy to get the impression that they are indispensable. Yet anthropogenic warming is so large (roughly 0.9 °C) that it turns out to be straightforward to quantify it with more empirically based methodologies that can be readily understood by the layperson. The key is to use the CO2 forcing as a linear surrogate for all the anthropogenic effects from 1880 to the present (implicitly including all effects due to Greenhouse Gases, aerosols and land use changes). To a good approximation, double the economic activity, double the effects. The relationship between the forcing and global mean temperature is extremely linear, as can be seen graphically and understood without fancy statistics [Lovejoy, 2014a] (see the attached figure and http://www.physics.mcgill.ca/~gang/Lovejoy.htm). To an excellent approximation, the deviations from the linear forcing-temperature relation can be interpreted as the natural variability. For example, this direct yet accurate approach makes it graphically obvious that the "pause" or "hiatus" in the warming since 1998 is simply a natural cooling event that has roughly offset the anthropogenic warming [Lovejoy, 2014b]. Rather than trying to prove that the warming is anthropogenic, with a little extra work (and some nonlinear geophysics theory and pre-industrial multiproxies) we can disprove the competing theory that it is natural. This approach leads to the estimate that the probability of the industrial scale warming being a giant natural fluctuation is ≈0.1%: it can be dismissed. This destroys the last climate skeptic argument - that the models are wrong and the warming is natural. It finally allows for a closure of the debate. In this talk we argue that this new, direct, simple, intuitive approach provides an indispensable tool for communicating - and convincing - the public of both the reality and the amplitude of anthropogenic warming.
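    The regression at the heart of this argument is simple enough to sketch directly. The data below are synthetic stand-ins (not Lovejoy's series); only the procedure - ordinary least squares of temperature anomaly against a log-CO2 forcing surrogate, with the residuals read as natural variability - reflects the abstract.

      import numpy as np

      years = np.arange(1880, 2015)
      co2 = 290.0 * np.exp(0.0045 * (years - 1880))   # synthetic CO2 curve, ppm
      forcing = np.log2(co2 / co2[0])                 # CO2 doublings as surrogate
      rng = np.random.default_rng(0)
      temp = 0.95 * forcing + 0.1 * rng.standard_normal(years.size)  # synthetic

      slope, intercept = np.polyfit(forcing, temp, 1) # linear fit: T ~ forcing
      natural = temp - (slope * forcing + intercept)  # residuals = "natural"

      print(f"warming per CO2 doubling: {slope:.2f} C")
      print(f"residual (natural) std:   {natural.std():.2f} C")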

  1. Computational approaches to motor control

    PubMed Central

    Flash, Tamar; Sejnowski, Terrence J

    2010-01-01

    New concepts and computational models that integrate behavioral and neurophysiological observations have addressed several of the most fundamental long-standing problems in motor control. These problems include the selection of particular trajectories among the large number of possibilities, the solution of inverse kinematics and dynamics problems, motor adaptation and the learning of sequential behaviors. PMID:11741014

  2. Research on the Rapid and Accurate Positioning and Orientation Approach for Land Missile-Launching Vehicle

    PubMed Central

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

    Getting a land vehicle’s accurate position, azimuth and attitude rapidly is significant for vehicle based weapons’ combat effectiveness. In this paper, a new approach to acquire vehicle’s accurate position and orientation is proposed. It uses biaxial optical detection platform (BODP) to aim at and lock in no less than three pre-set cooperative targets, whose accurate positions are measured beforehand. Then, it calculates the vehicle’s accurate position, azimuth and attitudes by the rough position and orientation provided by vehicle based navigation systems and no less than three couples of azimuth and pitch angles measured by BODP. The proposed approach does not depend on Global Navigation Satellite System (GNSS), thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm’s iterative initial value; consequently, it does not have high performance requirements for Inertial Navigation System (INS), odometer and other vehicle based navigation systems, even in high-precision applications. This paper describes the system’s working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracy of 0.2 m and 20″ respectively in less than 3 min. PMID:26492249

  3. Research on the rapid and accurate positioning and orientation approach for land missile-launching vehicle.

    PubMed

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

    Getting a land vehicle's accurate position, azimuth and attitude rapidly is significant for vehicle based weapons' combat effectiveness. In this paper, a new approach to acquire vehicle's accurate position and orientation is proposed. It uses biaxial optical detection platform (BODP) to aim at and lock in no less than three pre-set cooperative targets, whose accurate positions are measured beforehand. Then, it calculates the vehicle's accurate position, azimuth and attitudes by the rough position and orientation provided by vehicle based navigation systems and no less than three couples of azimuth and pitch angles measured by BODP. The proposed approach does not depend on Global Navigation Satellite System (GNSS), thus it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value; consequently, it does not have high performance requirements for Inertial Navigation System (INS), odometer and other vehicle based navigation systems, even in high-precision applications. This paper describes the system's working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracy of 0.2 m and 20″ respectively in less than 3 min. PMID:26492249

  4. [The determinant role of an accurate medicosocial approach in the prognosis of pediatric blood diseases].

    PubMed

    Toppet, M

    2005-01-01

    The care of infancy and childhood blood diseases implies a comprehensive medicosocial approach. This is a prerequisite for regular follow-up, for satisfactory compliance with treatment and for optimal patient quality of life. Different modalities of the medicosocial approach have been developed in the pediatric department (first in the Hospital Saint Pierre and then in the Children's University Hospital HUDERF). The decisive importance of a recent reform of the increased family allowances is briefly presented. The author underlines the determinant role of an accurate global approach, in which the patient and the family are surrounded by a multidisciplinary team, including social workers. PMID:16454232

  5. Molecules-in-Molecules: An Extrapolated Fragment-Based Approach for Accurate Calculations on Large Molecules and Materials.

    PubMed

    Mayhall, Nicholas J; Raghavachari, Krishnan

    2011-05-10

    We present a new extrapolated fragment-based approach, termed molecules-in-molecules (MIM), for accurate energy calculations on large molecules. In this method, we use a multilevel partitioning approach coupled with electronic structure studies at multiple levels of theory to provide a hierarchical strategy for systematically improving the computed results. In particular, we use a generalized hybrid energy expression, similar in spirit to that in the popular ONIOM methodology, that can be combined easily with any fragmentation procedure. In the current work, we explore a MIM scheme which first partitions a molecule into nonoverlapping fragments and then recombines the interacting fragments to form overlapping subsystems. By including all interactions with a cheaper level of theory, the MIM approach is shown to significantly reduce the errors arising from a single level fragmentation procedure. We report the implementation of energies and gradients and the initial assessment of the MIM method using both biological and materials systems as test cases. PMID:26610128
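    The generalized hybrid energy expression mentioned above combines a cheap calculation on the full system with high/low-level differences on overlapping fragments, with inclusion-exclusion coefficients preventing double counting. The sketch below is a hedged reading of that idea; all energies are placeholder numbers, not real electronic-structure results.

      # ONIOM-like two-level MIM-style energy: E = E_low(full) +
      # sum_i c_i * (E_high(frag_i) - E_low(frag_i)),
      # with c_i chosen by inclusion-exclusion over overlapping fragments.

      def mim_energy(e_low_full, fragments):
          """fragments: list of (coeff, e_high, e_low) per subsystem."""
          return e_low_full + sum(c * (hi - lo) for c, hi, lo in fragments)

      fragments = [(+1, -152.10, -151.80),   # fragment A (toy hartree values)
                   (+1, -133.45, -133.20),   # fragment B
                   (-1,  -76.02,  -75.90)]   # overlap of A and B, subtracted once
      print(mim_energy(-285.12, fragments))  # hybrid total energy (toy)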

  6. Optical computed tomography of radiochromic gels for accurate three-dimensional dosimetry

    NASA Astrophysics Data System (ADS)

    Babic, Steven

    In this thesis, three-dimensional (3-D) radiochromic Ferrous Xylenol-orange (FX) and Leuco Crystal Violet (LCV) micelle gels were imaged by laser and cone-beam (Vista(TM)) optical computed tomography (CT) scanners. The objective was to develop optical CT of radiochromic gels for accurate 3-D dosimetry of intensity-modulated radiation therapy (IMRT) and small field techniques used in modern radiotherapy. First, the cause of a threshold dose response in FX gel dosimeters when scanned with a yellow light source was determined. This effect stems from a spectral sensitivity to the multiple chemical complexes formed between ferric ions and xylenol-orange at different dose levels. To negate the threshold dose, an initial concentration of ferric ions is needed in order to shift the chemical equilibrium so that additional dose results in a linear production of a coloured complex that preferentially absorbs at longer wavelengths. Second, a low diffusion leuco-based radiochromic gel consisting of Triton X-100 micelles was developed. The diffusion coefficient of the LCV micelle gel was found to be minimal (0.036 ± 0.001 mm2 hr-1). Although a dosimetric characterization revealed a reduced sensitivity to radiation, this was offset by a lower auto-oxidation rate and base optical density, higher melting point and no spectral sensitivity. Third, the Radiological Physics Centre (RPC) head-and-neck IMRT protocol was extended to 3-D dose verification using laser and cone-beam (Vista(TM)) optical CT scans of FX gels. Both optical systems yielded comparable measured dose distributions in high-dose regions and low gradients. The FX gel dosimetry results were cross-checked against independent thermoluminescent dosimeter and GAFChromic(TM) EBT film measurements made by the RPC. It was shown that optical CT scanned FX gels can be used for accurate IMRT dose verification in 3-D. Finally, corrections for FX gel diffusion and scattered stray light in the Vista(TM) scanner were developed to

  7. Computational approaches for systems metabolomics.

    PubMed

    Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J

    2016-06-01

    Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progress in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field. PMID:27135552

  8. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    SciTech Connect

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-08-15

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput quantitative LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. In addition, we also report on the optimization of a reversed-phase LC method for the separation of lipids in these sample types.

  9. Accurate definition of brain regions position through the functional landmark approach.

    PubMed

    Thirion, Bertrand; Varoquaux, Gaël; Poline, Jean-Baptiste

    2010-01-01

    In many applications of functional Magnetic Resonance Imaging (fMRI), including clinical or pharmacological studies, the definition of the location of the functional activity between subjects is crucial. While current acquisition and normalization procedures improve the accuracy of the functional signal localization, it is also important to ensure that functional foci detection yields accurate results, and reflects between-subject variability. Here we introduce a fast functional landmark detection procedure that explicitly models the spatial variability of activation foci in the observed population. We compare this detection approach to standard statistical map peak extraction procedures: we show that it yields more accurate results on simulations, and more reproducible results on a large cohort of subjects. These results demonstrate that explicit functional landmark modeling approaches are more effective than standard statistical mapping for brain functional focus detection. PMID:20879321

  10. Finding accurate frontiers: A knowledge-intensive approach to relational learning

    NASA Technical Reports Server (NTRS)

    Pazzani, Michael; Brunk, Clifford

    1994-01-01

    An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

  11. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-01-01

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect
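    The weighted red/blue combination can be mimicked in software to see how two 8-bit channels yield a finer gray scale. The 128:1 weight below is an assumed ratio for illustration, not the VideoSwitcher's documented value.

      # Two 8-bit DACs with unequal weights: luminance = blue + red / RATIO,
      # giving steps of 1/RATIO of one blue level (~15 bits for RATIO = 128).
      RATIO = 128.0   # assumed blue-to-red weight of the resistor network

      def channel_values(target):
          """Split a target luminance into (blue, red) 8-bit DAC values."""
          blue = min(255, int(target))
          red = max(0, min(255, int(round((target - blue) * RATIO))))
          return blue, red

      def luminance(blue, red):
          return blue + red / RATIO

      b, r = channel_values(123.456)
      print(b, r, luminance(b, r))   # -> 123 58 123.453125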

  12. When do perturbative approaches accurately capture the dynamics of complex quantum systems?

    PubMed Central

    Fruchtman, Amir; Lambert, Neill; Gauger, Erik M.

    2016-01-01

    Understanding the dynamics of higher-dimensional quantum systems embedded in a complex environment remains a significant theoretical challenge. While several approaches yielding numerically converged solutions exist, these are computationally expensive and often provide only limited physical insight. Here we address the question: when do more intuitive and simpler-to-compute second-order perturbative approaches provide adequate accuracy? We develop a simple analytical criterion and verify its validity for the case of the much-studied FMO dynamics as well as the canonical spin-boson model. PMID:27335176

  13. Time-Accurate Computation of Viscous Flow Around Deforming Bodies Using Overset Grids

    SciTech Connect

    Fast, P; Henshaw, W D

    2001-04-02

    Dynamically evolving boundaries and deforming bodies interacting with a flow are commonly encountered in fluid dynamics. However, the numerical simulation of flows with dynamic boundaries is difficult with current methods. We propose a new method for studying such problems. The key idea is to use the overset grid method with a thin, body-fitted grid near the deforming boundary, while using fixed Cartesian grids to cover most of the computational domain. Our approach combines the strengths of earlier moving overset grid methods for rigid body motion, and unstructured grid methods for flow-structure interactions. Large scale deformation of the flow boundaries can be handled without a global regridding, and in a computationally efficient way. In terms of computational cost, even a full overset grid regridding is significantly cheaper than a full regridding of an unstructured grid for the same domain, especially in three dimensions. Numerical studies are used to verify accuracy and convergence of our flow solver. As a computational example, we consider two-dimensional incompressible flow past a flexible filament with prescribed dynamics.

  14. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES Beta

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
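    The subcycling idea - a small time step where accuracy is needed, a large one elsewhere - is generic enough to sketch on two coupled oscillators. This is a hedged illustration of multi-time-step integration in general, not the paper's peridynamic discretization; all names and constants are invented.

      # Subdomain A (stiff) takes m small steps per single large step of
      # subdomain B, each subdomain seeing the other's last-known state.

      def subcycled_step(uA, vA, uB, vB, fA, fB, dt, m):
          for _ in range(m):                 # m fine steps for subdomain A
              vA += (dt / m) * fA(uA, uB)
              uA += (dt / m) * vA
          vB += dt * fB(uB, uA)              # one coarse step for subdomain B
          uB += dt * vB
          return uA, vA, uB, vB

      fA = lambda uA, uB: -400.0 * uA + (uB - uA)   # stiff oscillator + coupling
      fB = lambda uB, uA: -1.0 * uB + (uA - uB)     # soft oscillator + coupling
      state = (1.0, 0.0, 0.0, 0.0)
      for _ in range(1000):
          state = subcycled_step(*state, fA, fB, dt=0.01, m=20)
      print(state[0], state[2])              # uA, uB after 10 time units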

  15. Quantitative approaches to computational vaccinology.

    PubMed

    Doytchinova, Irini A; Flower, Darren R

    2002-06-01

    This article reviews the newly released JenPep database and two new powerful techniques for T-cell epitope prediction: (i) the additive method; and (ii) a 3D-Quantitative Structure Activity Relationships (3D-QSAR) method, based on Comparative Molecular Similarity Indices Analysis (CoMSIA). The JenPep database is a family of relational databases supporting the growing need of immunoinformaticians for quantitative data on peptide binding to major histocompatibility complexes and to the Transporters associated with Antigen Processing (TAP). It also contains an annotated list of T-cell epitopes. The database is available free via the Internet (http://www.jenner.ac.uk/JenPep). The additive prediction method is based on the assumption that the binding affinity of a peptide depends on the contributions from each amino acid as well as on the interactions between the adjacent and every second side-chain. In the 3D-QSAR approach, the influence of five physicochemical properties (steric bulk, electrostatic potential, local hydrophobicity, hydrogen-bond donor and hydrogen-bond acceptor abilities) on the affinity of peptides binding to MHC molecules were considered. Both methods were exemplified through their application to the well-studied problem of peptides binding to the human class I MHC molecule HLA-A*0201. PMID:12067414
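    The additive method's core assumption - affinity as a constant plus independent per-residue contributions - reduces to a table lookup and a sum. The sketch below covers only that additive core (not the adjacent side-chain interaction terms the abstract also mentions), and every contribution value is an invented placeholder, not JenPep data.

      # Toy additive model: predicted score = CONST + sum of per-position
      # amino-acid contributions for the positions we model.
      CONST = 5.0
      CONTRIB = {1: {"L": 0.4, "A": 0.1, "K": -0.3},
                 2: {"L": 0.2, "M": 0.5, "K": -0.1},
                 9: {"V": 0.6, "L": 0.3, "A": 0.0}}

      def predict_affinity(peptide):
          """Unlisted residues contribute zero; positions are 1-based."""
          return CONST + sum(t.get(peptide[p - 1], 0.0)
                             for p, t in CONTRIB.items())

      print(predict_affinity("LMAQKTSDV"))   # toy 9-mer -> 6.5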

  16. A fast and accurate method for computing the Sunyaev-Zel'dovich signal of hot galaxy clusters

    NASA Astrophysics Data System (ADS)

    Chluba, Jens; Nagai, Daisuke; Sazonov, Sergey; Nelson, Kaylea

    2012-10-01

    New-generation ground- and space-based cosmic microwave background experiments have ushered in discoveries of massive galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect, providing a new window for studying cluster astrophysics and cosmology. Many of the newly discovered, SZ-selected clusters contain hot intracluster plasma (kTe ≳ 10 keV) and exhibit disturbed morphology, indicative of frequent mergers with large peculiar velocity (v ≳ 1000 km s-1). It is well known that for the interpretation of the SZ signal from hot, moving galaxy clusters, relativistic corrections must be taken into account, and in this work, we present a fast and accurate method for computing these effects. Our approach is based on an alternative derivation of the Boltzmann collision term which provides new physical insight into the sources of different kinematic corrections in the scattering problem. In contrast to previous works, this allows us to obtain a clean separation of kinematic and scattering terms. We also briefly mention additional complications connected with kinematic effects that should be considered when interpreting future SZ data for individual clusters. One of the main outcomes of this work is SZPACK, a numerical library which allows very fast and precise (≲0.001 per cent at frequencies hν ≲ 20kTγ) computation of the SZ signals up to high electron temperature (kTe ≃ 25 keV) and large peculiar velocity (v/c ≃ 0.01). The accuracy is well beyond the current and future precision of SZ observations and practically eliminates uncertainties which are usually overcome with more expensive numerical evaluation of the Boltzmann collision term. Our new approach should therefore be useful for analysing future high-resolution, multifrequency SZ observations as well as computing the predicted SZ effect signals from numerical simulations.
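    For context, the non-relativistic thermal SZ spectral shape that the relativistic corrections refine is compact enough to quote: ΔT/T_CMB = y (x coth(x/2) - 4) with x = hν/kT_CMB. The Python sketch below evaluates only this baseline formula; SZPACK's contribution is precisely the hot-electron and velocity corrections beyond it.

      import numpy as np

      X_PER_GHZ = (6.62607e-34 / 1.380649e-23) / 2.7255 * 1e9   # x per GHz

      def tsz_distortion(nu_ghz, y):
          """Non-relativistic thermal SZ: Delta T / T_CMB at frequency nu."""
          x = X_PER_GHZ * nu_ghz
          return y * (x / np.tanh(x / 2.0) - 4.0)   # x*coth(x/2) - 4

      for nu in (30.0, 150.0, 217.0, 353.0):
          print(f"{nu:5.0f} GHz: {tsz_distortion(nu, y=1e-4):+.2e}")
      # decrement below ~217 GHz, ~zero at the crossover, increment above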

  17. Toward exascale computing through neuromorphic approaches.

    SciTech Connect

    James, Conrad D.

    2010-09-01

    While individual neurons function at relatively low firing rates, naturally-occurring nervous systems not only surpass manmade systems in computing power, but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team consisting of experts from the HPC, MESA, cognitive and biological sciences and nanotechnology domains will be coordinated to conduct an exercise with the outcome being a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, material sciences, high-performance computing, and modeling and simulation of neural processes/systems.

  18. Fast and accurate computation of two-dimensional non-separable quadratic-phase integrals.

    PubMed

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-06-01

    We report a fast and accurate algorithm for numerical computation of two-dimensional non-separable linear canonical transforms (2D-NS-LCTs). Also known as quadratic-phase integrals, this class of integral transforms represents a broad class of optical systems including Fresnel propagation in free space, propagation in graded-index media, passage through thin lenses, and arbitrary concatenations of any number of these, including anamorphic/astigmatic/non-orthogonal cases. The general two-dimensional non-separable case poses several challenges which do not exist in the one-dimensional case and the separable two-dimensional case. The algorithm takes approximately N log N time, where N is the two-dimensional space-bandwidth product of the signal. Our method properly tracks and controls the space-bandwidth products in two dimensions, in order to achieve information theoretically sufficient, but not wastefully redundant, sampling required for the reconstruction of the underlying continuous functions at any stage of the algorithm. Additionally, we provide an alternative definition of general 2D-NS-LCTs that shows its kernel explicitly in terms of its ten parameters, and relate these parameters bidirectionally to conventional ABCD matrix parameters. PMID:20508697

  19. Accurate computation and interpretation of spin-dependent properties in metalloproteins

    NASA Astrophysics Data System (ADS)

    Rodriguez, Jorge

    2006-03-01

    Nature uses the properties of open-shell transition metal ions to carry out a variety of functions associated with vital life processes. Mononuclear and binuclear iron centers, in particular, are intriguing structural motifs present in many heme and non-heme proteins. Hemerythrin and methane monooxygenase, for example, are members of the latter class whose diiron active sites display magnetic ordering. We have developed a computational protocol based on spin density functional theory (SDFT) to accurately predict physico-chemical parameters of metal sites in proteins and bioinorganic complexes which traditionally had only been determined from experiment. We have used this new methodology to perform a comprehensive study of the electronic structure and magnetic properties of heme and non-heme iron proteins and related model compounds. We have been able to predict with a high degree of accuracy spectroscopic (Mössbauer, EPR, UV-vis, Raman) and magnetization parameters of iron proteins and, at the same time, gained unprecedented microscopic understanding of their physico-chemical properties. Our results have allowed us to establish important correlations between the electronic structure, geometry, spectroscopic data, and biochemical function of heme and non-heme iron proteins.

  20. Accurate computation of surface stresses and forces with immersed boundary methods

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Liska, Sebastian; Morley, Benjamin; Colonius, Tim

    2016-09-01

    Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is applied as a post-processing procedure, so that the convergence of the velocity field is unaffected. We demonstrate the efficacy of the method by computing stresses and forces that converge to the physical stresses and forces for several test problems.

  1. Accurate vibrational spectra via molecular tailoring approach: A case study of water clusters at MP2 level

    NASA Astrophysics Data System (ADS)

    Sahu, Nityananda; Gadre, Shridhar R.

    2015-01-01

    In spite of recent advances in parallel algorithms and computer hardware, high-level calculation of vibrational spectra of large molecules is still an uphill task. To overcome this, significant effort has been devoted to the development of new algorithms based on fragmentation methods. The present work provides the details of an efficient and accurate procedure for computing the vibrational spectra of large clusters employing the molecular tailoring approach (MTA). The errors in the Hessian matrix elements and dipole derivatives arising due to the approximate nature of MTA are reduced by grafting the corrections from a smaller basis set. The algorithm has been tested for obtaining vibrational spectra of neutral and charged water clusters at the Møller-Plesset second order level of theory, and benchmarking them against the respective full calculation (FC) and/or experimental results. For (H2O)16 clusters, the estimated vibrational frequencies are found to differ by a maximum of 2 cm-1 with reference to the corresponding FC values. Unlike the FC, the MTA-based calculations including the grafting procedure can be performed on limited hardware, yet take a fraction of the FC time. The present methodology, thus, opens a possibility of the accurate estimation of the vibrational spectra of large molecular systems, which is otherwise impossible or formidable.

  2. Accurate vibrational spectra via molecular tailoring approach: a case study of water clusters at MP2 level.

    PubMed

    Sahu, Nityananda; Gadre, Shridhar R

    2015-01-01

    In spite of recent advances in parallel algorithms and computer hardware, high-level calculation of vibrational spectra of large molecules is still an uphill task. To overcome this, significant effort has been devoted to the development of new algorithms based on fragmentation methods. The present work provides the details of an efficient and accurate procedure for computing the vibrational spectra of large clusters employing the molecular tailoring approach (MTA). The errors in the Hessian matrix elements and dipole derivatives arising due to the approximate nature of MTA are reduced by grafting the corrections from a smaller basis set. The algorithm has been tested for obtaining vibrational spectra of neutral and charged water clusters at the Møller-Plesset second order level of theory, and benchmarking them against the respective full calculation (FC) and/or experimental results. For (H2O)16 clusters, the estimated vibrational frequencies are found to differ by a maximum of 2 cm(-1) with reference to the corresponding FC values. Unlike the FC, the MTA-based calculations including the grafting procedure can be performed on limited hardware, yet take a fraction of the FC time. The present methodology, thus, opens a possibility of the accurate estimation of the vibrational spectra of large molecular systems, which is otherwise impossible or formidable. PMID:25573553
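    A plain reading of the grafting step described in both records reduces to a one-line correction: take the MTA value in the large (target) basis and shift it by the MTA-vs-full-calculation error measured in a small basis where the full calculation is affordable. The sketch and its numbers are illustrative assumptions, not data from the paper.

      # Grafting: X_graft = X_MTA(large) + [X_FC(small) - X_MTA(small)]

      def grafted_value(mta_large, mta_small, fc_small):
          """Correct a large-basis MTA property by the small-basis MTA error."""
          return mta_large + (fc_small - mta_small)

      # e.g. one O-H stretch frequency (cm^-1) of a water cluster (toy values):
      print(grafted_value(mta_large=3712.4, mta_small=3748.9, fc_small=3745.1))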

  3. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    SciTech Connect

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang E-mail: jing.xiong@siat.ac.cn; Hu, Ying; Xiong, Jing E-mail: jing.xiong@siat.ac.cn; Zhang, Jianwei

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm{sup 3}) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm{sup 3}, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm{sup 3}, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  4. A novel fast and accurate pseudo-analytical simulation approach for MOAO

    NASA Astrophysics Data System (ADS)

    Gendron, É.; Charara, A.; Abdelfattah, A.; Gratadour, D.; Keyes, D.; Ltaief, H.; Morel, C.; Vidal, F.; Sevin, A.; Rousset, G.

    2014-08-01

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate in detail the tomographic problem as well as noise and aliasing with a high fidelity, and including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated to this covariance matrix of the residuals, like in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus the joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and using an optimized linear algebra library, MORSE, providing a significant speedup against standard CPU oriented libraries such as Intel MKL. Because the covariance matrix is
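    The MMSE reconstructor named in the abstract has a standard closed form, R = C_phi_s C_ss^{-1}, with the residual (tomographic-error) covariance following as a Schur complement. The numpy sketch below shows that structure on a tiny random covariance; it is a generic illustration with invented sizes, far from the ~40 000 × 40 000 GPU-accelerated problem described above.

      import numpy as np

      rng = np.random.default_rng(1)
      n_phi, n_s = 50, 80                    # phase modes, WFS measurements
      A = rng.standard_normal((n_phi + n_s, n_phi + n_s))
      C = A @ A.T                            # a valid joint covariance
      C_pp, C_ps = C[:n_phi, :n_phi], C[:n_phi, n_phi:]
      C_ss = C[n_phi:, n_phi:]

      # R = C_ps @ inv(C_ss), via Cholesky solves instead of explicit inverse
      L = np.linalg.cholesky(C_ss)
      R = np.linalg.solve(L.T, np.linalg.solve(L, C_ps.T)).T

      C_err = C_pp - R @ C_ps.T              # covariance of the tomographic error
      print(R.shape, np.trace(C_err))        # trace ~ residual phase variance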

  5. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
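    The Green-Kubo formula underlying this record, κ = V/(3 k_B T²) ∫₀^∞ ⟨J(0)·J(t)⟩ dt, is easy to evaluate once a heat-flux series is in hand. The sketch below uses a synthetic flux series and plain rectangle-rule integration; it shows the baseline GK evaluation, not the accelerated effective-harmonic scheme the abstract describes.

      import numpy as np

      kB = 1.380649e-23

      def green_kubo_kappa(J, dt, volume, T):
          """J: (n, 3) heat-flux series -> cumulative kappa(t) estimate."""
          n = len(J)
          acf = np.array([np.mean(np.sum(J[:n - lag] * J[lag:], axis=1))
                          for lag in range(n // 2)])    # <J(0).J(t)>
          return volume / (3.0 * kB * T**2) * np.cumsum(acf) * dt

      rng = np.random.default_rng(2)
      J = rng.standard_normal((10000, 3)) * 1e-10   # synthetic flux, toy units
      kappa_t = green_kubo_kappa(J, dt=1e-15, volume=1e-26, T=300.0)
      print(kappa_t[-1])   # in practice one reads off the plateau of kappa(t)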

  6. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of `family of secular functions' that we herein call `adaptive mode observers' is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee both no loss and high precision of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of `turning point', our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.

  7. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-06-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of "family of secular functions" that we herein call "adaptive mode observers", is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee both no loss and high precision of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of "turning point", our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.

  8. Computational Approaches to Study Microbes and Microbiomes

    PubMed Central

    Greene, Casey S.; Foster, James A.; Stanton, Bruce A.; Hogan, Deborah A.; Bromberg, Yana

    2016-01-01

    Technological advances are making large-scale measurements of microbial communities commonplace. These newly acquired datasets are allowing researchers to ask and answer questions about the composition of microbial communities, the roles of members in these communities, and how genes and molecular pathways are regulated in individual community members and communities as a whole to effectively respond to diverse and changing environments. In addition to providing a more comprehensive survey of the microbial world, this new information allows for the development of computational approaches to model the processes underlying microbial systems. We anticipate that the field of computational microbiology will continue to grow rapidly in the coming years. In this manuscript we highlight both areas of particular interest in microbiology as well as computational approaches that begin to address these challenges. PMID:26776218

  9. A hierarchical approach to accurate predictions of macroscopic thermodynamic behavior from quantum mechanics and molecular simulations

    NASA Astrophysics Data System (ADS)

    Garrison, Stephen L.

    2005-07-01

    The combination of molecular simulations and potentials obtained from quantum chemistry is shown to be able to provide reasonably accurate thermodynamic property predictions. Gibbs ensemble Monte Carlo simulations are used to understand the effects of small perturbations to various regions of the model Lennard-Jones 12-6 potential. However, when the phase behavior and second virial coefficient are scaled by the critical properties calculated for each potential, the results obey a corresponding states relation suggesting a non-uniqueness problem for interaction potentials fit to experimental phase behavior. Several variations of a procedure collectively referred to as quantum mechanical Hybrid Methods for Interaction Energies (HM-IE) are developed and used to accurately estimate interaction energies from CCSD(T) calculations with a large basis set in a computationally efficient manner for the neon-neon, acetylene-acetylene, and nitrogen-benzene systems. Using these results and methods, an ab initio, pairwise-additive, site-site potential for acetylene is determined and then improved using results from molecular simulations using this initial potential. The initial simulation results also indicate that a limited range of energies is important for accurate phase behavior predictions. Second virial coefficients calculated from the improved potential indicate that one set of experimental data in the literature is likely erroneous. This prescription is then applied to methanethiol. Difficulties in modeling the effects of the lone pair electrons suggest that charges on the lone pair sites negatively impact the ability of the intermolecular potential to describe certain orientations, but that the lone pair sites may be necessary to reasonably duplicate the interaction energies for several orientations. Two possible methods for incorporating the effects of three-body interactions into simulations within the pairwise-additivity formulation are also developed. A low density

  10. Computational Chemical Imaging for Cardiovascular Pathology: Chemical Microscopic Imaging Accurately Determines Cardiac Transplant Rejection

    PubMed Central

    Tiwari, Saumya; Reddy, Vijaya B.; Bhargava, Rohit; Raman, Jaishankar

    2015-01-01

    Rejection is a common problem after cardiac transplants leading to significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients’ biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures. PMID:25932912

  11. Effective and accurate approach for modeling of commensurate-incommensurate transition in krypton monolayer on graphite.

    PubMed

    Ustinov, E A

    2014-10-01

    Commensurate-incommensurate (C-IC) transition of krypton molecular layer on graphite received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalization of the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of prediction of adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton-carbon interaction. The temperature and heat of C-IC transition have been reliably determined for the gas-solid and solid-solid systems. PMID:25296827

  12. Effective and accurate approach for modeling of commensurate–incommensurate transition in krypton monolayer on graphite

    SciTech Connect

    Ustinov, E. A.

    2014-10-07

    Commensurate–incommensurate (C-IC) transition of krypton molecular layer on graphite received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalization of the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of prediction of adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton–carbon interaction. The temperature and heat of C-IC transition have been reliably determined for the gas–solid and solid–solid systems.

  13. Accurate radiocarbon age estimation using "early" measurements: a new approach to reconstructing the Paleolithic absolute chronology

    NASA Astrophysics Data System (ADS)

    Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru

    2014-05-01

    This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. In order to discuss the time-space distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive dataset covering a wide area would be needed. Today, some radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, all of these databases contain unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages had been published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. In order to evaluate the reliability of "early" data, we investigated the differences and variabilities of radiocarbon ages under different pretreatments, and attempted to develop correction functions for the assessment of the reliability. It can be expected that the reliability of the corrected ages is increased and that they can be applied to chronological research together with recent ages. Here, we introduce the methodological frameworks and archaeological applications.

  14. Procedure for computer-controlled milling of accurate surfaces of revolution for millimeter and far-infrared mirrors

    NASA Technical Reports Server (NTRS)

    Emmons, Louisa; De Zafra, Robert

    1991-01-01

    A simple method for milling accurate off-axis parabolic mirrors with a computer-controlled milling machine is discussed. For machines with a built-in circle-cutting routine, an exact paraboloid can be milled with few computer commands and without the use of the spherical or linear approximations. The proposed method can be adapted easily to cut off-axis sections of elliptical or spherical mirrors.
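    For the on-axis case the article's idea reduces to elementary geometry: a paraboloid z = r²/(4f) intersects the plane at depth z in a circle of radius r = 2√(fz), so each pass of the built-in circle-cutting routine just needs that radius. The sketch below illustrates this relation only; tool-radius compensation and the off-axis geometry are omitted, and all dimensions are invented.

      import math

      def circle_cuts(focal_length, depth, step):
          """Yield (z, radius) pairs for successive circular passes of a
          milling cutter shaping the paraboloid z = r^2 / (4 f)."""
          z = step
          while z <= depth:
              yield z, 2.0 * math.sqrt(focal_length * z)
              z += step

      for z, r in circle_cuts(focal_length=250.0, depth=5.0, step=1.0):
          print(f"circle of radius {r:7.3f} mm at depth {z:4.1f} mm")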

  15. A Maximum-Entropy approach for accurate document annotation in the biomedical domain

    PubMed Central

    2012-01-01

    The increasing amount of scientific literature on the Web and the absence of efficient tools for classifying and searching these documents are the two most important factors that influence the speed of search and the quality of results. Previous studies have shown that the use of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for relevant information and takes one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) over the full range of explored terms (4,078 MeSH terms), and that the algorithm's performance is resilient to term ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) on the explored MeSH terms found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with a Naive Bayes and a Decision Trees classification approach, and we show that the Maximum Entropy based approach performed with higher F-measure in both ambiguous and monosemous MeSH terms. PMID:22541593
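
    A minimal sketch of a maximum-entropy annotator in this spirit (maximum entropy classification is equivalent to multinomial logistic regression; the documents and MeSH-style labels below are placeholders, not the paper's data or code):

    ```python
    # Maximum-entropy (logistic regression) multi-label annotation sketch.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.preprocessing import MultiLabelBinarizer

    docs = ["myocardial infarction and cardiac output measurements",
            "insulin receptor signaling in hepatic tissue"]
    labels = [["Myocardium"], ["Insulin", "Liver"]]   # MeSH-style terms

    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(labels)                     # one binary column per term
    X = TfidfVectorizer(stop_words="english").fit_transform(docs)

    # One-vs-rest turns multi-label MeSH assignment into per-term binary
    # maximum-entropy classifiers.
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
    print(mlb.inverse_transform(clf.predict(X)))
    ```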

  16. Accurate characterization of mask defects by combination of phase retrieval and deterministic approach

    NASA Astrophysics Data System (ADS)

    Park, Min-Chul; Leportier, Thibault; Kim, Wooshik; Song, Jindong

    2016-06-01

    In this paper, we present a method to characterize not only the shape but also the depth of defects in line-and-space mask patterns. Features in a mask are too fine for a conventional imaging system to resolve, so a coherent imaging system providing only the pattern diffracted by the mask is used. Phase retrieval methods may then be applied, but their accuracy is too low to determine the exact shape of a defect. Deterministic methods have been proposed to characterize defects accurately, but they require a reference pattern. We propose to use a phase retrieval algorithm to recover the general shape of the mask first, and then a deterministic approach to characterize the detected defects precisely.
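
    As an illustration of the retrieval step, a minimal error-reduction (Gerchberg-Saxton-type) phase retrieval loop is sketched below; this is a generic stand-in under assumed constraints (measured diffraction magnitude plus a known support), not the authors' algorithm:

    ```python
    # Error-reduction phase retrieval: alternate between enforcing the measured
    # Fourier magnitude and a support constraint in the object domain.
    import numpy as np

    def phase_retrieve(fourier_mag, support, n_iter=200, seed=0):
        rng = np.random.default_rng(seed)
        phase = rng.uniform(0.0, 2.0 * np.pi, fourier_mag.shape)
        G = fourier_mag * np.exp(1j * phase)            # random initial phases
        for _ in range(n_iter):
            g = np.fft.ifft2(G)
            g = np.where(support, g.real, 0.0)          # object-domain constraint
            G = np.fft.fft2(g)
            G = fourier_mag * np.exp(1j * np.angle(G))  # keep measured magnitude
        return g
    ```

    The deterministic characterization step would then operate on the recovered mask estimate g.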

  17. Computational Approach for Developing Blood Pump

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2002-01-01

    This viewgraph presentation provides an overview of the computational approach to developing a ventricular assist device (VAD) that utilizes NASA aerospace technology. The VAD is used as temporary support for sick ventricles in patients suffering from late-stage congestive heart failure (CHF). The need for donor hearts is much greater than their availability, and the VAD is seen as a bridge to transplant. The computational issues confronting the design of a more advanced, reliable VAD include the modelling of viscous incompressible flow. A computational approach provides the possibility of quantifying the flow characteristics, which is especially valuable for analyzing compact designs with highly sensitive operating conditions. Computational fluid dynamics (CFD) and rocket engine technology have been applied to modify the design of a VAD that enabled human transplantation. The computing requirement for this project is still large, however, and the unsteady analysis of the entire system from natural heart to aorta involves several hundred revolutions of the impeller. Further study is needed to assess the impact of mechanical VADs on the human body.

  18. Approach to constructing reconfigurable computer vision system

    NASA Astrophysics Data System (ADS)

    Xue, Jianru; Zheng, Nanning; Wang, Xiaoling; Zhang, Yongping

    2000-10-01

    In this paper, we propose an approach to constructing a reconfigurable vision system. We found that timely and efficient execution of early tasks can significantly enhance the performance of whole computer vision tasks, so we abstract out a set of basic, computationally intensive stream operations that may be performed in parallel and embody them in a series of specialized front-end processors. These processors, based on FPGAs (field-programmable gate arrays), can be re-programmed to support a range of different feature maps, such as edge detection and linking, and image filtering. The front-end processors together with a powerful DSP constitute a computing platform that can perform many computer vision tasks. Additionally, we adopt focus-of-attention techniques to reduce the I/O and computational demands by performing early vision processing only within a particular region of interest. We implement a multi-page, dual-ported image memory interface between the image input and the computing platform (front-end processors and DSP). Early vision features are loaded into banks of dual-ported image memory arrays, which are continually raster-scan updated at high speed from the input image or video stream. Moreover, the computing platform has completely asynchronous, random access to the image data and any other early vision feature maps through the dual-ported memory banks. In this way, platform resources can be properly allocated to a region of interest and decoupled from the task of handling a high-speed serial raster-scan input. Finally, we chose the PCI bus as the main channel between the PC and the computing platform. Consequently, the front-end processors' control registers and the DSP's program memory are mapped into the PC's memory space, allowing the user to reconfigure the system at any time. We also present test results for a computer vision application built on the system.

  19. Accurate and interpretable nanoSAR models from genetic programming-based decision tree construction approaches.

    PubMed

    Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z

    2016-09-01

    The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models. PMID:26956430

  20. A Novel PCR-Based Approach for Accurate Identification of Vibrio parahaemolyticus

    PubMed Central

    Li, Ruichao; Chiou, Jiachi; Chan, Edward Wai-Chi; Chen, Sheng

    2016-01-01

    A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus by targeting the blaCARB-17 like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homology analysis showed that blaCARB-17 like genes were more conserved than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this blaCARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of blaCARB-17 like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by blaCARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification. PMID:26858713

  1. A Novel PCR-Based Approach for Accurate Identification of Vibrio parahaemolyticus.

    PubMed

    Li, Ruichao; Chiou, Jiachi; Chan, Edward Wai-Chi; Chen, Sheng

    2016-01-01

    A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus by targeting the blaCARB-17 like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homology analysis showed that blaCARB-17 like genes were more conserved than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this blaCARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of blaCARB-17 like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by blaCARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification. PMID:26858713

  2. A multi-objective optimization approach accurately resolves protein domain architectures

    PubMed Central

    Bernardes, J.S.; Vieira, F.R.J.; Zaverucha, G.; Carbone, A.

    2016-01-01

    Motivation: Given a protein sequence and a number of potential domains matching it, what are the domain content and the most likely domain architecture for the sequence? This problem is of fundamental importance in protein annotation, constituting one of the main steps of all predictive annotation strategies. However, when several potential domains conflict because of overlapping domain boundaries, finding a solution to the problem can become difficult. An accurate prediction of the domain architecture of a multi-domain protein provides important information for function prediction, comparative genomics and molecular evolution. Results: We developed DAMA (Domain Annotation by a Multi-objective Approach), a novel approach that identifies architectures through a multi-objective optimization algorithm combining scores of domain matches, previously observed multi-domain co-occurrence and domain overlap. DAMA has been validated on a known benchmark dataset based on CATH structural domain assignments and on the set of Plasmodium falciparum proteins. When compared with existing tools on both datasets, it outperforms all of them. Availability and implementation: DAMA software is implemented in C++ and the source code can be found at http://www.lcqb.upmc.fr/DAMA. Contact: juliana.silva_bernardes@upmc.fr or alessandra.carbone@lip6.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26458889

  3. Accurate calculation of conformational free energy differences in explicit water: the confinement-solvation free energy approach.

    PubMed

    Esque, Jeremy; Cecchini, Marco

    2015-04-23

    The calculation of the free energy of conformation is key to understanding the function of biomolecules and has attracted significant interest in recent years. Here, we present an improvement of the confinement method that was designed for use in the context of explicit solvent MD simulations. The development involves an additional step in which the solvation free energy of the harmonically restrained conformers is accurately determined by multistage free energy perturbation simulations. As a test-case application, the newly introduced confinement/solvation free energy (CSF) approach was used to compute differences in free energy between conformers of the alanine dipeptide in explicit water. The results are in excellent agreement with reference calculations based on both converged molecular dynamics and umbrella sampling. To illustrate the general applicability of the method, conformational equilibria of met-enkephalin (5 aa) and deca-alanine (10 aa) in solution were also analyzed. In both cases, smoothly converged free-energy results were obtained in agreement with equilibrium sampling or literature calculations. These results demonstrate that the CSF method may provide conformational free-energy differences of biomolecules with small statistical errors (below 0.5 kcal/mol) and at a moderate computational cost even with a full representation of the solvent. PMID:25807150
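
    The solvation step rests on the standard (Zwanzig) free-energy perturbation identity, shown here for a single stage; staging the perturbation keeps the phase-space overlap between neighboring states adequate (generic form, not reproduced from the paper):

    ```latex
    % Zwanzig free-energy perturbation between states 0 and 1:
    \Delta F_{0 \to 1}
      = -k_{\mathrm{B}}T \,
        \ln \left\langle e^{-\beta\,(U_{1}-U_{0})} \right\rangle_{0},
    \qquad \beta = \frac{1}{k_{\mathrm{B}}T}
    ```

    with the total solvation free energy obtained by summing such stages along the perturbation path.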

  4. Digital test signal generation: An accurate SNR calibration approach for the DSN

    NASA Technical Reports Server (NTRS)

    Gutierrez-Luaces, Benito O.

    1993-01-01

    In support of the on-going automation of the Deep Space Network (DSN), a new method of generating analog test signals with an accurate signal-to-noise ratio (SNR) is described. High accuracy is obtained by simultaneous generation of digital noise and signal spectra at the desired bandwidth (baseband or bandpass). The digital synthesis provides a test signal embedded in noise with the statistical properties of a stationary random process. Accuracy depends on test integration time and is limited only by the system quantization noise (0.02 dB). The monitor-and-control and signal-processing programs reside in a personal computer (PC). Commands are transmitted to properly configure the specially designed high-speed digital hardware. The prototype can generate either two data channels, modulated or not on a subcarrier, or one QPSK channel, or a residual carrier with one biphase data channel. The analog spectrum generated is in the DC to 10 MHz frequency range. These spectra may be up-converted to any desired frequency without loss of the specified SNR characteristics. Test results are presented.

  5. Computational Approaches to Nucleic Acid Origami.

    PubMed

    Jabbari, Hosna; Aminpour, Maral; Montemagno, Carlo

    2015-10-12

    Recent advances in experimental DNA origami have dramatically expanded the horizon of DNA nanotechnology. Complex 3D suprastructures have been designed and developed using DNA origami with applications in biomaterial science, nanomedicine, nanorobotics, and molecular computation. Ribonucleic acid (RNA) origami has recently been realized as a new approach. Similar to DNA, RNA molecules can be designed to form complex 3D structures through complementary base pairings. RNA origami structures are, however, more compact and more thermodynamically stable due to RNA's non-canonical base pairing and tertiary interactions. Despite these advantages, the development of RNA origami lags behind DNA origami by a large gap. Furthermore, although computational methods have proven to be effective in designing DNA and RNA origami structures and in their evaluation, advances in computational nucleic acid origami are even more limited. In this paper, we review major milestones in experimental and computational DNA and RNA origami and present current challenges in these fields. We believe collaboration between experimental nanotechnologists and computer scientists is critical for advancing these new research paradigms. PMID:26348196

  6. Limited rotational and rovibrational line lists computed with highly accurate quartic force fields and ab initio dipole surfaces.

    PubMed

    Fortenberry, Ryan C; Huang, Xinchuan; Schwenke, David W; Lee, Timothy J

    2014-02-01

    In this work, computational procedures are employed to compute the rotational and rovibrational spectra and line lists for H2O, CO2, and SO2. Building on the established use of quartic force fields, MP2 and CCSD(T) Dipole Moment Surfaces (DMSs) are computed for each system of study in order to produce line intensities as well as the transition energies. The computed results exhibit a clear correlation to reference data available in the HITRAN database. Additionally, even though CCSD(T) DMSs produce more accurate intensities as compared to experiment, the use of MP2 DMSs results in reliable line lists that are still comparable to experiment. The use of the less computationally costly MP2 method is beneficial in the study of larger systems where use of CCSD(T) would be more costly. PMID:23692860

  7. Accurate and Efficient Calculation of van der Waals Interactions Within Density Functional Theory by Local Atomic Potential Approach

    SciTech Connect

    Sun, Y. Y.; Kim, Y. H.; Lee, K.; Zhang, S. B.

    2008-01-01

    Density functional theory (DFT) in the commonly used local density or generalized gradient approximation fails to describe van der Waals (vdW) interactions that are vital to organic, biological, and other molecular systems. Here, we propose a simple, efficient, yet accurate local atomic potential (LAP) approach, named DFT+LAP, for including vdW interactions in the framework of DFT. The LAPs for H, C, N, and O are generated by fitting the DFT+LAP potential energy curves of small molecule dimers to those obtained from coupled cluster calculations with single, double, and perturbatively treated triple excitations, CCSD(T). Excellent transferability of the LAPs is demonstrated by remarkable agreement with the JSCH-2005 benchmark database [P. Jurecka et al. Phys. Chem. Chem. Phys. 8, 1985 (2006)], which provides the interaction energies of CCSD(T) quality for 165 vdW and hydrogen-bonded complexes. For over 100 vdW dominant complexes in this database, our DFT+LAP calculations give a mean absolute deviation from the benchmark results less than 0.5 kcal/mol. The DFT+LAP approach involves no extra computational cost other than standard DFT calculations and no modification of existing DFT codes, which enables straightforward quantum simulations, such as ab initio molecular dynamics, on biomolecular systems, as well as on other organic systems.

  8. Combining Theory and Experiment to Compute Highly Accurate Line Lists for Stable Molecules, and Purely AB Initio Theory to Compute Accurate Rotational and Rovibrational Line Lists for Transient Molecules

    NASA Astrophysics Data System (ADS)

    Lee, Timothy J.; Huang, Xinchuan; Fortenberry, Ryan C.; Schwenke, David W.

    2013-06-01

    Theoretical chemists have been computing vibrational and rovibrational spectra of small molecules for more than 40 years, but over the last decade the interest in this application has grown significantly. The increased interest in computing accurate rotational and rovibrational spectra for small molecules could not come at a better time, as NASA and ESA have begun to acquire a mountain of high-resolution spectra from the Herschel mission, and soon will from the SOFIA and JWST missions. In addition, the ground-based telescope, ALMA, has begun to acquire high-resolution spectra in the same time frame. Hence the need for highly accurate line lists for many small molecules, including their minor isotopologues, will only continue to increase. I will present the latest developments from our group on using the "Best Theory + High-Resolution Experimental Data" strategy to compute highly accurate rotational and rovibrational spectra for small molecules, including NH3, CO2, and SO2. I will also present the latest work from our group in producing purely ab initio line lists and spectroscopic constants for small molecules thought to exist in various astrophysical environments, but for which there is either limited or no high-resolution experimental data available. These more limited line lists include purely rotational transitions as well as rovibrational transitions for bands up through a few combination/overtones.

  9. A fourth order accurate finite difference scheme for the computation of elastic waves

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.

    1986-01-01

    A finite difference scheme for elastic waves is introduced. The model is based on the first-order system of equations for the velocities and stresses. The differencing is fourth-order accurate in the spatial derivatives and second-order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces, and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here, it is found that the fourth-order scheme requires two-thirds to one-half the resolution of a typical second-order scheme to give comparable accuracy.
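
    For reference, the standard fourth-order central-difference first-derivative stencil that such schemes build on looks as follows (a generic sketch, not the paper's elastic-wave code):

    ```python
    # Fourth-order accurate central difference for df/dx on a uniform grid.
    import numpy as np

    def d1_fourth_order(f, h):
        """Interior-point derivative with O(h**4) truncation error."""
        return (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * h)

    x = np.linspace(0.0, 2.0 * np.pi, 201)
    h = x[1] - x[0]
    err = np.max(np.abs(d1_fourth_order(np.sin(x), h) - np.cos(x[2:-2])))
    print(f"max error: {err:.2e}")   # halving h should shrink this ~16x
    ```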

  10. Introducing Computational Approaches in Intermediate Mechanics

    NASA Astrophysics Data System (ADS)

    Cook, David M.

    2006-12-01

    In the winter of 2003, we at Lawrence University moved Lagrangian mechanics and rigid body dynamics from a required sophomore course to an elective junior/senior course, freeing 40% of the time for computational approaches to ordinary differential equations (trajectory problems, the large amplitude pendulum, non-linear dynamics); evaluation of integrals (finding centers of mass and moment of inertia tensors, calculating gravitational potentials for various sources); and finding eigenvalues and eigenvectors of matrices (diagonalizing the moment of inertia tensor, finding principal axes), and to generating graphical displays of computed results. Further, students begin to use LaTeX to prepare some of their submitted problem solutions. Placed in the middle of the sophomore year, this course provides the background that permits faculty members as appropriate to assign computer-based exercises in subsequent courses. Further, students are encouraged to use our Computational Physics Laboratory on their own initiative whenever that use seems appropriate. (Curricular development supported in part by the W. M. Keck Foundation, the National Science Foundation, and Lawrence University.)
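
    A minimal version of one such exercise, the large-amplitude pendulum, might look like this (illustrative only; the parameter values are assumptions):

    ```python
    # Large-amplitude pendulum: theta'' = -(g/L) sin(theta), solved numerically
    # because the small-angle approximation fails at large release angles.
    import numpy as np
    from scipy.integrate import solve_ivp

    g_over_L = 9.81 / 1.0          # 1 m pendulum

    def rhs(t, y):
        theta, omega = y
        return [omega, -g_over_L * np.sin(theta)]

    sol = solve_ivp(rhs, (0.0, 10.0), [3.0, 0.0],   # ~172 degree release
                    rtol=1e-9, atol=1e-9, dense_output=True)
    print(sol.sol(10.0)[0])        # angle at t = 10 s
    ```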

  11. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  12. Computational Approaches for Predicting Biomedical Research Collaborations

    PubMed Central

    Zhang, Qing; Yu, Hong

    2014-01-01

    Biomedical research is increasingly collaborative, and successful collaborations often produce high-impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous work on collaboration prediction mainly explored the topological structure of research collaboration networks, leaving out rich semantic information from the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both semantic features extracted from author research interest profiles and author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations and similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best performing model is logistic regression, with an ROC ranging from 0.766 to 0.980 on different datasets. To our knowledge we are the first to study in depth how research interest and productivity can be used for collaboration prediction. Our approach is computationally efficient, scalable and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets. PMID:25375164

  13. A computational language approach to modeling prose recall in schizophrenia.

    PubMed

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall. PMID:24709122
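
    A hedged sketch of the semantic-scoring idea (latent semantic projection plus cosine similarity; the texts and dimensionality below are placeholders, not the study's corpus or scoring pipeline):

    ```python
    # Score a recall against its source passage in a latent semantic space.
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    background = ["the boy walked to the river in the morning",
                  "a sudden storm forced the harbor to close",
                  "the fisherman repaired his nets by the shore"]
    source = "the boy went to the river and caught a fish"
    recall = "a boy caught a fish at the river"

    vec = TfidfVectorizer().fit(background + [source, recall])
    svd = TruncatedSVD(n_components=2, random_state=0)
    Z = svd.fit_transform(vec.transform(background + [source, recall]))
    s, r = Z[-2], Z[-1]
    print(cosine_similarity([s], [r])[0, 0])   # higher = semantically closer recall
    ```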

  14. Predicting microbial interactions through computational approaches.

    PubMed

    Li, Chenhao; Lim, Kun Ming Kenneth; Chng, Kern Rei; Nagarajan, Niranjan

    2016-06-01

    Microorganisms play a vital role in various ecosystems and characterizing interactions between them is an essential step towards understanding the organization and function of microbial communities. Computational prediction has recently become a widely used approach to investigate microbial interactions. We provide a thorough review of emerging computational methods organized by the type of data they employ. We highlight three major challenges in inferring interactions using metagenomic survey data and discuss the underlying assumptions and mathematics of interaction inference algorithms. In addition, we review interaction prediction methods relying on metabolic pathways, which are increasingly used to reveal mechanisms of interactions. Furthermore, we also emphasize the importance of mining the scientific literature for microbial interactions - a largely overlooked data source for experimentally validated interactions. PMID:27025964

  15. Sculpting the band gap: a computational approach

    PubMed Central

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D. A.

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  16. Sculpting the band gap: a computational approach.

    PubMed

    Prasai, Kiran; Biswas, Parthapratim; Drabold, D A

    2015-01-01

    Materials with optimized band gap are needed in many specialized applications. In this work, we demonstrate that Hellmann-Feynman forces associated with the gap states can be used to find atomic coordinates that yield desired electronic density of states. Using tight-binding models, we show that this approach may be used to arrive at electronically designed models of amorphous silicon and carbon. We provide a simple recipe to include a priori electronic information in the formation of computer models of materials, and prove that this information may have profound structural consequences. The models are validated with plane-wave density functional calculations. PMID:26490203

  17. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  18. Computational modeling approaches in gonadotropin signaling.

    PubMed

    Ayoub, Mohammed Akli; Yvinec, Romain; Crépieux, Pascale; Poupon, Anne

    2016-07-01

    Follicle-stimulating hormone and LH play essential roles in animal reproduction. They exert their function through binding to their cognate receptors, which belong to the large family of G protein-coupled receptors. This recognition at the plasma membrane triggers a plethora of cellular events, whose processing and integration ultimately lead to an adapted biological response. Understanding the nature and the kinetics of these events is essential for innovative approaches in drug discovery. The study and manipulation of such complex systems requires the use of computational modeling approaches combined with robust in vitro functional assays for calibration and validation. Modeling brings a detailed understanding of the system and can also be used to understand why existing drugs do not work as well as expected, and how to design more efficient ones. PMID:27165991

  19. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    SciTech Connect

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-10-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
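
    The statistic being approximated has a compact closed form once the channel-output means and covariance are known; a generic numerical sketch (synthetic data, not the paper's PET model):

    ```python
    # Channelized Hotelling observer: template w = S^-1 (mean difference),
    # detectability SNR^2 = dv^T S^-1 dv.
    import numpy as np

    rng = np.random.default_rng(1)
    n_channels = 8
    v0 = rng.normal(0.0, 1.0, (500, n_channels))   # channel outputs, signal absent
    v1 = v0 + 0.4                                  # channel outputs, signal present

    dv = v1.mean(axis=0) - v0.mean(axis=0)         # mean channel-output difference
    S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))        # pooled channel covariance
    w = np.linalg.solve(S, dv)                     # Hotelling template
    print("observer SNR:", np.sqrt(dv @ w))
    ```

    The paper's contribution is obtaining the needed means and covariances theoretically for MAP reconstructions, rather than from Monte Carlo samples as this sketch does.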

  20. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Schaeffler, N. W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2007-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are summarized. Results in both cases are compared to experiment.

  1. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, Christoper L.; Schaeffler, Norman W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2005-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are outlined. Results in both cases are compared to experiment.

  2. A novel approach for accurate prediction of spontaneous passage of ureteral stones: support vector machines.

    PubMed

    Dal Moro, F; Abate, A; Lanckriet, G R G; Arandjelovic, G; Gasparella, P; Bassi, P; Mancini, M; Pagano, F

    2006-01-01

    The objective of this study was to optimally predict the spontaneous passage of ureteral stones in patients with renal colic by applying for the first time support vector machines (SVM), an instance of kernel methods, for classification. After reviewing the results found in the literature, we compared the performances obtained with logistic regression (LR) and accurately trained artificial neural networks (ANN) to those obtained with SVM, that is, the standard SVM, and the linear programming SVM (LP-SVM); the latter techniques show an improved performance. Moreover, we rank the prediction factors according to their importance using Fisher scores and the LP-SVM feature weights. A data set of 1163 patients affected by renal colic has been analyzed and restricted to single out a statistically coherent subset of 402 patients. Nine clinical factors are used as inputs for the classification algorithms, to predict one binary output. The algorithms are cross-validated by training and testing on randomly selected train- and test-set partitions of the data and reporting the average performance on the test sets. The SVM-based approaches obtained a sensitivity of 84.5% and a specificity of 86.9%. The feature ranking based on LP-SVM gives the highest importance to stone size, stone position and symptom duration before check-up. We propose a statistically correct way of employing LR, ANN and SVM for the prediction of spontaneous passage of ureteral stones in patients with renal colic. SVM outperformed ANN, as well as LR. This study will soon be translated into a practical software toolbox for actual clinical usage. PMID:16374437
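
    A generic sketch of the cross-validated SVM protocol described above (the features and labels are synthetic placeholders, not the study's nine clinical factors):

    ```python
    # Cross-validated SVM classification with sensitivity/specificity reporting.
    import numpy as np
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import cross_val_predict
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(402, 9))                    # nine clinical inputs
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=402) > 0).astype(int)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    pred = cross_val_predict(clf, X, y, cv=10)       # average over held-out folds
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(f"sensitivity={tp / (tp + fn):.3f}  specificity={tn / (tn + fp):.3f}")
    ```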

  3. Computer subroutine ISUDS accurately solves large system of simultaneous linear algebraic equations

    NASA Technical Reports Server (NTRS)

    Collier, G.

    1967-01-01

    The ISUDS computer program (Iterative Scheme Using a Direct Solution) obtains double-precision accuracy while using a single-precision coefficient matrix. ISUDS solves a system of equations written in matrix form as AX = B, where A is a square non-singular coefficient matrix, X is a vector, and B is a vector.
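
    The underlying idea, mixed-precision iterative refinement, can be sketched in a few lines (a modern illustration of the scheme, not the 1967 code):

    ```python
    # Factor and solve in single precision; accumulate residuals and corrections
    # in double precision to recover near double-precision accuracy.
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def refine_solve(A, b, n_iter=5):
        lu = lu_factor(A.astype(np.float32))   # single-precision factorization
        x = lu_solve(lu, b.astype(np.float32)).astype(np.float64)
        for _ in range(n_iter):
            r = b - A @ x                      # residual in double precision
            x = x + lu_solve(lu, r.astype(np.float32)).astype(np.float64)
        return x

    A = np.random.default_rng(2).normal(size=(100, 100))
    b = A @ np.ones(100)
    print(np.max(np.abs(refine_solve(A, b) - 1.0)))  # small for well-conditioned A
    ```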

  4. Computational approaches to motor learning by imitation.

    PubMed Central

    Schaal, Stefan; Ijspeert, Auke; Billard, Aude

    2003-01-01

    Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking; indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions. PMID:12689379

  5. A novel approach for accurate radiative transfer in cosmological hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Petkova, Margarita; Springel, Volker

    2011-08-01

    We present a numerical implementation of radiative transfer based on an explicitly photon-conserving advection scheme, where radiative fluxes over the cell interfaces of a structured or unstructured mesh are calculated with a second-order reconstruction of the intensity field. The approach employs a direct discretization of the radiative transfer equation in Boltzmann form with adjustable angular resolution that, in principle, works equally well in the optically-thin and optically-thick regimes. In our most general formulation of the scheme, the local radiation field is decomposed into a linear sum of directional bins of equal solid angle, tessellating the unit sphere. Each of these 'cone fields' is transported independently, with constant intensity as a function of the direction within the cone. Photons propagate at the speed of light (or optionally using a reduced speed of light approximation to allow larger time-steps), yielding a fully time-dependent solution of the radiative transfer equation that can naturally cope with an arbitrary number of sources, as well as with scattering. The method casts sharp shadows, subject to the limitations induced by the adopted angular resolution. If the number of point sources is small and scattering is unimportant, our implementation can alternatively treat each source exactly in angular space, producing shadows whose sharpness is only limited by the grid resolution. A third hybrid alternative is to treat only a small number of the locally most luminous point sources explicitly, with the rest of the radiation intensity followed in a radiative diffusion approximation. We have implemented the method in the moving-mesh code AREPO, where it is coupled to the hydrodynamics in an operator-splitting approach that subcycles the radiative transfer alternately with the hydrodynamical evolution steps. We also discuss our treatment of basic photon sink processes relevant to cosmological reionization, with a chemical network that can

  6. A mechanistic approach for accurate simulation of village scale malaria transmission

    PubMed Central

    Bomblies, Arne; Duchemin, Jean-Bernard; Eltahir, Elfatih AB

    2009-01-01

    Background Malaria transmission models commonly incorporate spatial environmental and climate variability for making regional predictions of disease risk. However, a mismatch of these models' typical spatial resolutions and the characteristic scale of malaria vector population dynamics may confound disease risk predictions in areas of high spatial hydrological variability such as the Sahel region of Africa. Methods Field observations spanning two years from two Niger villages are compared. The two villages are separated by only 30 km but exhibit a ten-fold difference in anopheles mosquito density. These two villages would be covered by a single grid cell in many malaria models, yet their entomological activity differs greatly. Environmental conditions and associated entomological activity are simulated at high spatial- and temporal resolution using a mechanistic approach that couples a distributed hydrology scheme and an entomological model. Model results are compared to regular field observations of Anopheles gambiae sensu lato mosquito populations and local hydrology. The model resolves the formation and persistence of individual pools that facilitate mosquito breeding and predicts spatio-temporal mosquito population variability at high resolution using an agent-based modeling approach. Results Observations of soil moisture, pool size, and pool persistence are reproduced by the model. The resulting breeding of mosquitoes in the simulated pools yields time-integrated seasonal mosquito population dynamics that closely follow observations from captured mosquito abundance. Interannual difference in mosquito abundance is simulated, and the inter-village difference in mosquito population is reproduced for two years of observations. These modeling results emulate the known focal nature of malaria in Niger Sahel villages. Conclusion Hydrological variability must be represented at high spatial and temporal resolution to achieve accurate predictive ability of malaria risk

  7. Accurate Analysis and Computer Aided Design of Microstrip Dual Mode Resonators and Filters.

    NASA Astrophysics Data System (ADS)

    Grounds, Preston Whitfield, III

    1995-01-01

    Microstrip structures are of interest due to their many applications in microwave circuit design. Their small size and ease of connection to both passive and active components make them well suited for use in systems where size and space are at a premium. These include satellite communication systems, radar systems, satellite navigation systems, cellular phones and many others; in general, space is always at a premium for any mobile system. Microstrip resonators find particular application in oscillators and filters. In typical filters each microstrip patch corresponds to one resonator. However, when dual-mode patches are employed, each patch acts as two resonators and therefore reduces the amount of space required to build the filter. This dissertation focuses on the accurate electromagnetic analysis of the components of planar dual-mode filters. Highly accurate analyses are required so that the resonator-to-resonator and resonator-to-input/output coupling can be predicted with precision; hence, filters can be built with a minimum of design iterations and tuning. The analysis used herein is an integral equation formulation in the spectral domain, chosen because the Green's function can be derived in closed form and the spatial-domain convolution becomes a simple product. The resulting set of equations is solved using the Method of Moments with Galerkin's procedure. The electromagnetic analysis is applied to a range of problems including unloaded dual-mode patches, dual-mode patches coupled to microstrip feedlines, and complete filter structures. At each step calculated results are compared to measured results and good agreement is found. The calculated results are also compared to results from the circuit analysis program HP EESOF(TM), and again good agreement is found. A dual-mode elliptic filter is built and good performance is obtained.

  8. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    NASA Technical Reports Server (NTRS)

    Gary, C. K.

    1992-01-01

    Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. The algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers, as well as the ability to perform analog operations at much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and speed required for an equivalent throughput, as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms when coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
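
    A toy numerical sketch of the partitioning idea (the digit arithmetic that an optical processor would perform in analog is simply simulated here):

    ```python
    # Digital partitioning: split 8-bit operands into 4-bit digits, form the
    # low-precision partial matrix-vector products, and recombine exactly.
    import numpy as np

    BASE = 16    # 4-bit digits

    def digit_planes(a, n_digits):
        """Base-BASE digit decomposition of a nonnegative integer array."""
        return [(a // BASE**k) % BASE for k in range(n_digits)]

    A = np.array([[12, 200], [45, 7]])    # 8-bit matrix
    x = np.array([180, 33])               # 8-bit vector

    y = 0
    for i, Ai in enumerate(digit_planes(A, 2)):
        for j, xj in enumerate(digit_planes(x, 2)):
            y = y + BASE**(i + j) * (Ai @ xj)   # each Ai @ xj needs few bits
    print(y, A @ x)                       # identical results
    ```

    Because each partial product involves only low-precision digits, an analog processor of limited accuracy can compute it reliably, and the exact answer is recovered in the digital recombination.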

  9. Iofetamine I 123 single photon emission computed tomography is accurate in the diagnosis of Alzheimer's disease

    SciTech Connect

    Johnson, K.A.; Holman, B.L.; Rosen, T.J.; Nagel, J.S.; English, R.J.; Growdon, J.H. )

    1990-04-01

    To determine the diagnostic accuracy of iofetamine hydrochloride I 123 (IMP) with single photon emission computed tomography (SPECT) in Alzheimer's disease (AD), we studied 58 patients with AD and 15 age-matched healthy control subjects. We used a qualitative method to assess regional IMP uptake in the entire brain and to rate image data sets as normal or abnormal without knowledge of the subjects' clinical classification. The sensitivity and specificity of IMP with SPECT in AD were 88% and 87%, respectively. In 15 patients with mild cognitive deficits (Blessed Dementia Scale score, less than or equal to 10), sensitivity was 80%. With the use of a semiquantitative measure of regional cortical IMP uptake, the parietal lobes were the most functionally impaired in AD and the most strongly associated with the patients' Blessed Dementia Scale scores. These results indicated that IMP with SPECT may be a useful adjunct in the clinical diagnosis of AD in early, mild disease.

  10. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, which might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for computing the STM that incorporates a power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
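
    A minimal sketch of propagating an STM by the variational equations, for two-body dynamics only (the paper's high-fidelity force and hardware models are omitted, and SciPy's default Dormand-Prince 5(4) integrator stands in for the eighth-order pair discussed):

    ```python
    # Integrate the state together with Phi, where dPhi/dt = A(t) Phi and
    # A = d(state rate)/d(state) for two-body gravity.
    import numpy as np
    from scipy.integrate import solve_ivp

    mu = 398600.4418   # km^3/s^2 (Earth)

    def eom_with_stm(t, z):
        r, v, Phi = z[:3], z[3:6], z[6:].reshape(6, 6)
        rn = np.linalg.norm(r)
        a = -mu * r / rn**3
        dadr = mu * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)
        A = np.block([[np.zeros((3, 3)), np.eye(3)],
                      [dadr, np.zeros((3, 3))]])
        return np.concatenate([v, a, (A @ Phi).ravel()])

    z0 = np.concatenate([[7000.0, 0.0, 0.0, 0.0, 7.5, 0.0], np.eye(6).ravel()])
    sol = solve_ivp(eom_with_stm, (0.0, 3600.0), z0, rtol=1e-10, atol=1e-10)
    Phi = sol.y[6:, -1].reshape(6, 6)   # d(final state)/d(initial state)
    ```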

  11. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to gaps of missing data. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction at the two locations studied (Caribbean Sea and West Atlantic).

  12. Accurate and Scalable O(N) Algorithm for First-Principles Molecular-Dynamics Computations on Large Parallel Computers

    SciTech Connect

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10^-4 Ha/Bohr.

  13. Accurate and Scalable O(N) Algorithm for First-Principles Molecular-Dynamics Computations on Large Parallel Computers

    NASA Astrophysics Data System (ADS)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10^-4 Ha/Bohr.

  14. iTagPlot: an accurate computation and interactive drawing tool for tag density plot

    PubMed Central

    Kim, Sung-Hwan; Ezenwoye, Onyeka; Cho, Hwan-Gue; Robertson, Keith D.; Choi, Jeong-Hyeon

    2015-01-01

    Motivation: Tag density plots are very important to intuitively reveal biological phenomena from capture-based sequencing data by visualizing the normalized read depth in a region. Results: We have developed iTagPlot to compute tag density across functional features in parallel using multicores and a grid engine and to interactively explore it in a graphical user interface. It allows us to stratify features by defining groups based on biological function and measurement, summary statistics and unsupervised clustering. Availability and implementation: http://sourceforge.net/projects/itagplot/. Contact: jechoi@gru.edu and jeochoi@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25792550

  15. Accurate Experiment to Computation Coupling for Understanding QH-mode physics using NIMROD

    NASA Astrophysics Data System (ADS)

    King, J. R.; Burrell, K. H.; Garofalo, A. M.; Groebner, R. J.; Hanson, J. D.; Hebert, J. D.; Hudson, S. R.; Pankin, A. Y.; Kruger, S. E.; Snyder, P. B.

    2015-11-01

    It is desirable to have an ITER H-mode regime that is quiescent to edge-localized modes (ELMs). The quiescent H-mode (QH-mode) with edge harmonic oscillations (EHO) is one such regime. High-quality equilibria are essential for accurate EHO simulations with initial-value codes such as NIMROD. We include profiles outside the LCFS, which generate associated currents, when we solve the Grad-Shafranov equation with open-flux regions using the NIMEQ solver. The new solution is an equilibrium that closely resembles the original reconstruction (which does not contain open-flux currents). This regenerated equilibrium is consistent with the profiles measured by the high-quality diagnostics on DIII-D. Results from nonlinear NIMROD simulations of the EHO, with the full measured rotation profiles included, are presented. The simulation develops into a saturated state. The saturation mechanism of the EHO is explored, and the simulation is compared to magnetic-coil measurements. This work is currently supported in part by the US DOE Office of Science under awards DE-FC02-04ER54698, DE-AC02-09CH11466 and the SciDAC Center for Extended MHD Modeling.

  16. Gravitational Focusing and the Computation of an Accurate Moon/Mars Cratering Ratio

    NASA Technical Reports Server (NTRS)

    Matney, Mark J.

    2006-01-01

    There have been a number of attempts to use asteroid populations to simultaneously compute cratering rates on the Moon and bodies elsewhere in the Solar System to establish the cratering ratio (e.g., [1],[2]). These works use current asteroid orbit population databases combined with collision rate calculations based on orbit intersections alone. As recent work on meteoroid fluxes [3] has highlighted, however, collision rates alone are insufficient to describe the cratering rates on planetary surfaces - especially planets with stronger gravitational fields than the Moon, such as Earth and Mars. Such calculations also need to include the effects of gravitational focusing, whereby the spatial density of the slower-moving impactors is preferentially "focused" by the gravity of the body. Overall, this leads to higher fluxes and cratering rates, and it is highly dependent on the detailed velocity distributions of the impactors. In this paper, a comprehensive gravitational focusing algorithm originally developed to describe fluxes of interplanetary meteoroids [3] is applied to the collision rates and cratering rates of populations of asteroids and long-period comets to compute better cratering ratios for terrestrial bodies in the Solar System. These results are compared to the calculations of other researchers.
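
    The textbook form of the correction discussed above multiplies the unfocused flux by 1 + (v_esc/v_inf)^2 for each impactor; averaging that factor over a velocity distribution shows why slow impactors dominate the enhancement. The Rayleigh encounter-speed distribution below is a toy assumption, not the paper's population model.

```python
import numpy as np

def focusing_factor(v_inf, v_esc):
    # Standard two-body gravitational-focusing enhancement.
    return 1.0 + (v_esc / v_inf) ** 2

rng = np.random.default_rng(2)
v_inf = rng.rayleigh(scale=12.0, size=100_000)    # km/s, toy encounter speeds

for body, v_esc in [("Moon", 2.38), ("Mars", 5.03), ("Earth", 11.19)]:
    print(f"{body}: mean flux enhancement {focusing_factor(v_inf, v_esc).mean():.3f}")
```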

  17. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers are dependent on an entire distribution, possibly involving multiple compilers and special instructions that vary with the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.

  18. A computational approach to negative priming

    NASA Astrophysics Data System (ADS)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming (the opposite effect) is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995), as results depend sensitively on subtle parameter changes such as the response-stimulus interval. The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing effects. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universität, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).
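
    A generic leaky-accumulator sketch (inspired by, but not a reimplementation of, the CISAM) of how one adaptive threshold can produce both effects: residual target activation from the previous trial speeds the response, while a threshold left elevated by the previous trial slows it.

```python
import numpy as np

def reaction_time(a0, theta, drive=1.0, tau=0.05, dt=0.001):
    a, t = a0, 0.0
    while a < theta and t < 2.0:
        a += dt / tau * (drive - a)        # activation relaxes toward the input
        t += dt
    return t

control = reaction_time(a0=0.0, theta=0.8)
positive = reaction_time(a0=0.3, theta=0.8)    # residual target activation
negative = reaction_time(a0=0.0, theta=0.9)    # threshold raised by prior trial
print(f"control {control*1000:.0f} ms, positive {positive*1000:.0f} ms, "
      f"negative {negative*1000:.0f} ms")
```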

  19. Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Williams, Brandon R.

    2011-01-01

    Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which consists of SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck, which results in upward flow that is a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.

  20. A model for the accurate computation of the lateral scattering of protons in water.

    PubMed

    Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-02-21

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time. PMID:26808380
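
    The shape of such a model: an electromagnetic (Molière-like) Gaussian core plus a two-parameter nuclear tail. The Cauchy-like tail and all numbers below are placeholders; the paper fits its own tail function to FLUKA Monte Carlo data.

```python
import numpy as np

def lateral_profile(r_mm, sigma_mm=2.0, w=0.05, tail_scale_mm=8.0):
    core = np.exp(-0.5 * (r_mm / sigma_mm) ** 2) / (2 * np.pi * sigma_mm**2)
    tail = 1.0 / (np.pi**2 * tail_scale_mm**2 * (1 + (r_mm / tail_scale_mm) ** 2) ** 2)
    return (1 - w) * core + w * tail       # weight w set by nuclear interactions

r = np.array([0.0, 5.0, 15.0])             # mm off-axis
print("relative dose at r =", r, ":", lateral_profile(r) / lateral_profile(0.0))
```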

  2. Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.

    1995-01-01

    The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical-time subiterations are subject to time-step limitations in practice that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
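
    A scalar illustration of the pseudo-time subiterations mentioned above (dual time stepping): each implicit physical step of du/dt = -ku is converged by an inner march in pseudo-time, allowing a physical step far above the explicit stability limit of 2/k.

```python
import numpy as np

k, dt, dtau = 50.0, 0.05, 0.001       # dt exceeds the explicit limit 2/k = 0.04
u, t = 1.0, 0.0
for _ in range(5):                    # implicit-Euler physical time steps
    u_old, v = u, u
    for _ in range(500):              # pseudo-time subiterations
        residual = (v - u_old) / dt + k * v
        v -= dtau * residual          # march in pseudo-time until residual ~ 0
    u, t = v, t + dt
# Stable despite the large dt; first-order accuracy limits agreement with exp(-kt).
print(f"t = {t:.2f}: dual-time u = {u:.6f}, exact = {np.exp(-k * t):.6f}")
```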

  3. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    NASA Technical Reports Server (NTRS)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  4. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    PubMed Central

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time-consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The rapid development of new video technology and computer image processing provides the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  6. An accurate and scalable O(N) algorithm for First-Principles Molecular Dynamics computations on petascale computers and beyond

    NASA Astrophysics Data System (ADS)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-03-01

    We present a truly scalable First-Principles Molecular Dynamics algorithm with O(N) complexity and fully controllable accuracy, capable of simulating systems of sizes that were previously impossible with this degree of accuracy. By avoiding global communication, we have extended W. Kohn's condensed matter "nearsightedness" principle to a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wavefunctions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 100,000 atoms on 100,000 processors, with a wall-clock time of the order of one minute per molecular dynamics time step. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  7. Accurate computation of the radiation from simple antennas using the finite-difference time-domain method

    NASA Astrophysics Data System (ADS)

    Maloney, James G.; Smith, Glenn S.; Scott, Waymond R., Jr.

    1990-07-01

    Two antennas are considered, a cylindrical monopole and a conical monopole. Both are driven through an image plane from a coaxial transmission line. Each of these antennas corresponds to a well-posed theoretical electromagnetic boundary value problem and a realizable experimental model. These antennas are analyzed by a straightforward application of the time-domain finite-difference method. The computed results for these antennas are shown to be in excellent agreement with accurate experimental measurements for both the time domain and the frequency domain. The graphical displays presented for the transient near-zone and far-zone radiation from these antennas provide physical insight into the radiation process.
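
    A one-dimensional Yee-scheme sketch of the FDTD method the paper applies (the paper itself treats full antenna geometries): E and H live on staggered grids and are updated leapfrog-fashion in normalized units with a Courant number of one.

```python
import numpy as np

nx, nt = 400, 600
ez, hy = np.zeros(nx), np.zeros(nx - 1)
for n in range(nt):
    hy += np.diff(ez)                        # update H from the curl of E
    ez[1:-1] += np.diff(hy)                  # update E from the curl of H
    ez[50] += np.exp(-((n - 60) / 15) ** 2)  # soft Gaussian source
print("peak |Ez| on the grid after", nt, "steps:", np.abs(ez).max())
```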

  8. Computational approaches to predict bacteriophage-host relationships.

    PubMed

    Edwards, Robert A; McNair, Katelyn; Faust, Karoline; Raes, Jeroen; Dutilh, Bas E

    2016-03-01

    Metagenomics has changed the face of virus discovery by enabling the accurate identification of viral genome sequences without requiring isolation of the viruses. As a result, metagenomic virus discovery leaves the first and most fundamental question about any novel virus unanswered: What host does the virus infect? The diversity of the global virosphere and the volumes of data obtained in metagenomic sequencing projects demand computational tools for virus-host prediction. We focus on bacteriophages (phages, viruses that infect bacteria), the most abundant and diverse group of viruses found in environmental metagenomes. By analyzing 820 phages with annotated hosts, we review and assess the predictive power of in silico phage-host signals. Sequence homology approaches are the most effective at identifying known phage-host pairs. Compositional and abundance-based methods contain significant signal for phage-host classification, providing opportunities for analyzing the unknowns in viral metagenomes. Together, these computational approaches further our knowledge of the interactions between phages and their hosts. Importantly, we find that all reviewed signals significantly link phages to their hosts, illustrating how current knowledge and insights about the interaction mechanisms and ecology of coevolving phages and bacteria can be exploited to predict phage-host relationships, with potential relevance for medical and industrial applications. PMID:26657537
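
    One compositional signal assessed in the review, in miniature: phages often mimic the oligonucleotide usage of their hosts, so comparing tetranucleotide frequency vectors can rank candidate hosts. Random sequences stand in for real genomes here.

```python
from itertools import product

import numpy as np

def tetra_freq(seq, k=4):
    index = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    counts = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        counts[index[seq[i:i + k]]] += 1     # overlapping k-mer counts
    return counts / counts.sum()

def composition_distance(a, b):
    return np.abs(tetra_freq(a) - tetra_freq(b)).sum()

rng = np.random.default_rng(3)
phage = "".join(rng.choice(list("ACGT"), size=5000))
hosts = {f"host_{i}": "".join(rng.choice(list("ACGT"), size=20000)) for i in range(3)}
print("predicted host:", min(hosts, key=lambda h: composition_distance(phage, hosts[h])))
```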

  9. Highly Accurate Frequency Calculations of Crab Cavities Using the VORPAL Computational Framework

    SciTech Connect

    Austin, T.M.; Cary, J.R.; Bellantoni, L.; /Argonne

    2009-05-01

    We have applied the Werner-Cary method [J. Comp. Phys. 227, 5200-5214 (2008)] for extracting modes and mode frequencies from time-domain simulations of crab cavities, as are needed for the ILC and the beam delivery system of the LHC. This method for frequency extraction relies on a small number of simulations and post-processing using the SVD algorithm with Tikhonov regularization. The time-domain simulations were carried out using the VORPAL computational framework, which is based on the eminently scalable finite-difference time-domain algorithm. A validation study was performed on an aluminum model of the 3.9 GHz RF separators built originally at Fermi National Accelerator Laboratory in the US. Comparisons with measurements of the A15 cavity show that this method can provide accuracy to within 0.01% of experimental results after accounting for manufacturing imperfections. To capture the near degeneracies, two simulations, requiring in total a few hours on 600 processors, were employed. This method has applications across many areas, including obtaining MHD spectra from time-domain simulations.
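
    The regularized fit at the heart of the frequency extraction can be sketched as a Tikhonov-damped SVD solve; fitting a noisy sum of sinusoids over a grid of candidate frequencies stands in for the cavity problem.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 400)
signal = (np.sin(2 * np.pi * 4.0 * t) + 0.5 * np.sin(2 * np.pi * 7.2 * t)
          + 0.05 * rng.normal(size=t.size))

freqs = np.linspace(1.0, 10.0, 46)                    # candidate frequencies
A = np.sin(2 * np.pi * freqs[None, :] * t[:, None])   # design matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)
lam = 1.0                                             # Tikhonov parameter
coef = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ signal))
top = np.sort(np.argsort(np.abs(coef))[-2:])
print("dominant fitted frequencies:", freqs[top])
```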

  10. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    PubMed Central

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870

  11. Towards an accurate and computationally-efficient modelling of Fe(II)-based spin crossover materials.

    PubMed

    Vela, Sergi; Fumanal, Maria; Ribas-Arino, Jordi; Robert, Vincent

    2015-07-01

    The DFT + U methodology is regarded as one of the most-promising strategies to treat the solid state of molecular materials, as it may provide good energetic accuracy at a moderate computational cost. However, a careful parametrization of the U-term is mandatory since the results may be dramatically affected by the selected value. Herein, we benchmarked the Hubbard-like U-term for seven Fe(II)N6-based pseudo-octahedral spin crossover (SCO) compounds, using as a reference an estimation of the electronic enthalpy difference (ΔHelec) extracted from experimental data (T1/2, ΔS and ΔH). The parametrized U-value obtained for each of those seven compounds ranges from 2.37 eV to 2.97 eV, with an average value of U = 2.65 eV. Interestingly, we have found that this average value can be taken as a good starting point since it leads to an unprecedented mean absolute error (MAE) of only 4.3 kJ mol⁻¹ in the evaluation of ΔHelec for the studied compounds. Moreover, by comparing our results on the solid state and the gas phase of the materials, we quantify the influence of the intermolecular interactions on the relative stability of the HS and LS states, with an average effect of ca. 5 kJ mol⁻¹, whose sign cannot be generalized. Overall, the findings reported in this manuscript pave the way for future studies devoted to understanding the crystalline phase of SCO compounds, or the adsorption of individual molecules on organic or metallic surfaces, in which the rational incorporation of the U-term within DFT + U yields the required energetic accuracy that is dramatically missing when using bare-DFT functionals. PMID:26040609

  12. Sampling strategies for accurate computational inferences of gametic phase across highly polymorphic major histocompatibility complex loci

    PubMed Central

    2011-01-01

    Background: Genes of the Major Histocompatibility Complex (MHC) are very popular genetic markers among evolutionary biologists because of their potential role in pathogen confrontation and sexual selection. However, MHC genotyping still remains challenging and time-consuming in spite of substantial methodological advances. Although computational haplotype inference has brought into focus interesting alternatives, high heterozygosity, extensive genetic variation and population admixture are known to cause inaccuracies. We have investigated the role of sample size, genetic polymorphism and genetic structuring on the performance of the popular Bayesian PHASE algorithm. To this aim, we took advantage of a large database of known genotypes (using traditional laboratory-based techniques) at single MHC class I (N = 56 individuals and 50 alleles) and MHC class II B (N = 103 individuals and 62 alleles) loci in the lesser kestrel Falco naumanni. Findings: Analyses carried out on real MHC genotypes showed that the accuracy of gametic phase reconstruction improved with sample size as a result of the reduction in the allele-to-individual ratio. We then simulated different data sets introducing variations in this parameter to define an optimal ratio. Conclusions: Our results demonstrate a critical influence of the allele-to-individual ratio on PHASE performance. We found that a minimum allele-to-individual ratio (1:2) yielded 100% accuracy for both MHC loci. Sampling effort is therefore a crucial step to obtain reliable MHC haplotype reconstructions and must be scaled according to the degree of MHC polymorphism. We expect our findings to provide a foothold for the design of straightforward and cost-effective genotyping strategies for those MHC loci for which locus-specific primers are available. PMID:21615903

  13. Accurate micro-computed tomography imaging of pore spaces in collagen-based scaffold.

    PubMed

    Zidek, Jan; Vojtova, Lucy; Abdel-Mohsen, A M; Chmelik, Jiri; Zikmund, Tomas; Brtnikova, Jana; Jakubicek, Roman; Zubal, Lukas; Jan, Jiri; Kaiser, Jozef

    2016-06-01

    In this work we have used X-ray micro-computed tomography (μCT) as a method to observe the morphology of 3D porous pure collagen and collagen-composite scaffolds useful in tissue engineering. Two aspects of visualization were taken into consideration: improvement of the scan and investigation of its sensitivity to the scan parameters. Due to the low material density, some parts of collagen scaffolds are invisible in a μCT scan. Therefore, here we present different contrast agents, which increase the contrast of the scanned biopolymeric sample for μCT visualization. The contrast of collagenous scaffolds was increased with ceramic hydroxyapatite microparticles (HAp), silver ions (Ag⁺) and silver nanoparticles (Ag-NPs). Since a relatively small change in imaging parameters (e.g. in 3D volume rendering, threshold value and μCT acquisition conditions) leads to a completely different visualized pattern, we have optimized these parameters to obtain the most realistic picture for visual and qualitative evaluation of the biopolymeric scaffold. Moreover, scaffold images were stereoscopically visualized in order to better see the 3D biopolymer composite scaffold morphology. However, the optimized visualization has some discontinuities in zoomed view, which can be problematic for further analysis of interconnected pores by commonly used numerical methods. Therefore, we applied a locally adaptive method to resolve the discontinuity issue. The combination of contrast agents and imaging techniques presented in this paper helps us to better understand the structure and morphology of the biopolymeric scaffold, which is crucial in the design of new biomaterials useful in tissue engineering. PMID:27153826
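
    Locally adaptive thresholding of the kind invoked above compares each voxel with its neighborhood mean rather than with one global value, which keeps faint struts connected under intensity gradients. A synthetic 2D slice is used below.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(5)
im = rng.normal(0.2, 0.05, size=(256, 256))
im[64:192, 64:192] += 0.15                     # faint scaffold strut
im += np.linspace(0.0, 0.3, 256)[None, :]      # intensity gradient across the scan

global_mask = im > im.mean()                   # biased by the gradient
local_mask = im > uniform_filter(im, size=51) + 0.02
print("foreground fraction - global:", global_mask.mean(), "local:", local_mask.mean())
```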

  14. Accurate guidance for percutaneous access to a specific target in soft tissues: preclinical study of computer-assisted pericardiocentesis.

    PubMed

    Chavanon, O; Barbe, C; Troccaz, J; Carrat, L; Ribuot, C; Noirclerc, M; Maitrasse, B; Blin, D

    1999-06-01

    In the field of percutaneous access to soft tissues, our project was to improve classical pericardiocentesis by performing accurate guidance to a selected target, according to a model of the pericardial effusion acquired through three-dimensional (3D) data recording. Required hardware is an echocardiographic device and a needle, both linked to a 3D localizer, and a computer. After acquiring echographic data, a modeling procedure allows definition of the optimal puncture strategy, taking into consideration the mobility of the heart, by determining a stable region, whatever the period of the cardiac cycle. A passive guidance system is then used to reach the planned target accurately, generally a site in the middle of the stable region. After validation on a dynamic phantom and a feasibility study in dogs, an accuracy and reliability analysis protocol was carried out on pigs with experimental pericardial effusion. Ten consecutive successful punctures using various trajectories were performed on eight pigs. Nonbloody liquid was collected from pericardial effusions in the stable region (5 to 9 mm wide) within 10 to 15 minutes from echographic acquisition to drainage. Accuracy of at least 2.5 mm was demonstrated. This study demonstrates the feasibility of computer-assisted pericardiocentesis. Beyond the simple improvement of the current technique, this method could be a new way to reach the heart or a new tool for percutaneous access and image-guided puncture of soft tissues. Further investigation will be necessary before routine human application. PMID:10414543

  15. Accurate treatments of electrostatics for computer simulations of biological systems: A brief survey of developments and existing problems

    NASA Astrophysics Data System (ADS)

    Yi, Sha-Sha; Pan, Cong; Hu, Zhong-Han

    2015-12-01

    Modern computer simulations of biological systems often involve an explicit treatment of the complex interactions among a large number of molecules. While it is straightforward to compute the short-ranged Van der Waals interaction in classical molecular dynamics simulations, it has been a long-standing challenge to develop accurate methods for the long-ranged Coulomb interaction. In this short review, we discuss three types of methodologies for the accurate treatment of electrostatics in simulations of explicit molecules: truncation-type methods, Ewald-type methods, and mean-field-type methods. Throughout the discussion, we briefly review the formulations and developments of these methods, emphasize the intrinsic connections among the three types of methods, and focus on the existing problems, which are often associated with the boundary conditions of electrostatics. This brief survey is summarized with a short perspective on future trends in method development and applications in the field of biological simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 91127015 and 21522304), the Open Project from the State Key Laboratory of Theoretical Physics, and the Innovation Project from the State Key Laboratory of Supramolecular Structure and Materials.

  16. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    NASA Astrophysics Data System (ADS)

    Hrubý, Jan

    2012-04-01

    Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both concerning the physical concepts and the required computational power. The available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require much computation time. For this reason, modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of mass, energy, and momentum conservation for both phases.
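
    The lookup idea in miniature: properties are tabulated on a grid in the (density, internal energy) plane and evaluated with interpolation that is continuous across cell boundaries. An ideal-gas relation stands in for the IAPWS-95 backbone.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rho = np.linspace(0.1, 100.0, 200)        # density, kg/m^3
e = np.linspace(2.0e6, 3.0e6, 200)        # internal energy, J/kg
R, E = np.meshgrid(rho, e, indexing="ij")
p_table = (1.33 - 1.0) * R * E            # placeholder equation of state

p = RegularGridInterpolator((rho, e), p_table)   # continuous piecewise evaluation
print("p(1.5 kg/m^3, 2.6 MJ/kg) =", p([[1.5, 2.6e6]])[0], "Pa")
```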

  17. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made

  18. IGS-global ionospheric maps for accurate computation of GPS single- frequency ionospheric delay-simulation study

    NASA Astrophysics Data System (ADS)

    Farah, A.

    The Ionospheric delay is still one of the largest sources of error affecting the positioning accuracy of any satellite positioning system. Owing to the dispersive nature of the Ionosphere, this problem can be solved by combining simultaneous measurements of signals at two different frequencies, but it remains for single-frequency users. Much effort has been made to establish models that make this effect as small as possible for single-frequency users. These models vary in accuracy, input data and computational complexity, so the choice between them depends on the individual circumstances of the user. From the simulation point of view, the model needed should be accurate, have global coverage and describe well the Ionosphere's variation with both time and location. The author reviews some of these established models, starting with the BENT model, the Klobuchar model and the IRI (International Reference Ionosphere) model. For quite a long time the Klobuchar model has been considered the most widely used model in this field, owing to its simplicity and low computational cost. Any GPS user can find the Klobuchar model's coefficients in the broadcast navigation message. CODE (Centre for Orbit Determination in Europe) provides a new set of coefficients for the Klobuchar model, which gives more accurate results for the Ionospheric delay computation. IGS (International GPS Service) services include providing the GPS community with global Ionospheric maps in IONEX format (IONosphere map EXchange format), which enable the computation of the Ionospheric delay at the desired location and time. The study was undertaken from the GPS-data simulation point of view. The aim was to select a model for the simulation of GPS data that gives a good description of the Ionosphere's nature and a high degree of accuracy in computing the Ionospheric delay, yielding better-simulated data. A new model was developed by the author based on IGS global Ionospheric maps. A comparison
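
    What an IONEX map supplies is vertical TEC on a latitude/longitude/time grid; the first-order single-frequency group delay then follows from the standard dispersive relation delay = 40.3 TEC / f^2. A toy grid and interpolation stand in for a real IONEX file below.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

lats = np.arange(-87.5, 90.0, 2.5)                 # IONEX-like 2.5 x 5 deg grid
lons = np.arange(-180.0, 185.0, 5.0)
rng = np.random.default_rng(6)
vtec_tecu = (20.0 + 10.0 * np.cos(np.radians(lats))[:, None]
             + rng.normal(0.0, 1.0, (lats.size, lons.size)))

vtec = RegularGridInterpolator((lats, lons), vtec_tecu)
f_l1 = 1575.42e6                                   # GPS L1 frequency, Hz
tec = vtec([[30.0, -15.0]])[0] * 1e16              # TECU -> electrons/m^2
print(f"L1 ionospheric delay: {40.3 * tec / f_l1**2:.3f} m")
```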

  19. A procedure for computing accurate ab initio quartic force fields: Application to HO2+ and H2O

    NASA Astrophysics Data System (ADS)

    Huang, Xinchuan; Lee, Timothy J.

    2008-07-01

    A procedure for the calculation of molecular quartic force fields (QFFs) is proposed and investigated. The goal is to generate highly accurate ab initio QFFs that include many of the so-called "small" effects that are necessary to achieve high accuracy. The small effects investigated in the present study include correlation of the core electrons (core correlation), extrapolation to the one-particle basis set limit, correction for scalar relativistic contributions, correction for higher-order correlation effects, and inclusion of diffuse functions in the one-particle basis set. The procedure is flexible enough to allow for some effects to be computed directly, while others may be added as corrections. A single grid of points is used and is centered about an initial reference geometry that is designed to be as close as possible to the final ab initio equilibrium structure (with all effects included). It is shown that the least-squares fit of the QFF is not compromised by the added corrections, and the balance between elimination of contamination from higher-order force constants while retaining energy differences large enough to yield meaningful quartic force constants is essentially unchanged from the standard procedures we have used for many years. The initial QFF determined from the least-squares fit is transformed to the exact minimum in order to eliminate gradient terms and allow for the use of second-order perturbation theory for evaluation of spectroscopic constants. It is shown that this step has essentially no effect on the quality of the QFF largely because the initial reference structure is, by design, very close to the final ab initio equilibrium structure. The procedure is used to compute an accurate, purely ab initio QFF for the H2O molecule, which is used as a benchmark test case. The procedure is then applied to the ground and first excited electronic states of the HO2+ molecular cation. Fundamental vibrational frequencies and spectroscopic

  20. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  1. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    PubMed Central

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global System for Mobile communications (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  3. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    SciTech Connect

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  4. Three-dimensional shape measurement with a fast and accurate approach

    SciTech Connect

    Wang Zhaoyang; Du Hua; Park, Seungbae; Xie Huimin

    2009-02-20

    A noncontact, fast, accurate, low-cost, broad-range, full-field, easy-to-implement three-dimensional (3D) shape measurement technique is presented. The technique is based on a generalized fringe projection profilometry setup that allows each system component to be arbitrarily positioned. It employs random phase-shifting, multifrequency projection fringes, ultrafast direct phase unwrapping, and inverse self-calibration schemes to perform 3D shape determination with enhanced accuracy in a fast manner. The relative measurement accuracy can reach 1/10,000 or higher, and the acquisition speed is faster than two 3D views per second. The validity and practicability of the proposed technique have been verified by experiments. Because of its superior capability, the proposed 3D shape measurement technique is suitable for numerous applications in a variety of fields.
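
    The phase-retrieval step common to fringe projection systems of this kind: with N phase-shifted fringe images I_n, the wrapped phase follows from phi = atan2(-sum I_n sin d_n, sum I_n cos d_n). Synthetic fringes with a made-up phase map stand in for camera images.

```python
import numpy as np

N = 4
x = np.linspace(0.0, 4 * np.pi, 512)               # carrier phase across the image
true_phase = 0.8 * np.sin(x / 2)                   # "shape" to recover
shifts = 2 * np.pi * np.arange(N) / N
frames = [1 + 0.5 * np.cos(x + true_phase + d) for d in shifts]

num = sum(f * np.sin(d) for f, d in zip(frames, shifts))
den = sum(f * np.cos(d) for f, d in zip(frames, shifts))
wrapped = np.arctan2(-num, den)                    # wrapped carrier + shape phase
recovered = np.unwrap(wrapped) - x
print("max phase error:", np.abs(recovered - true_phase).max())
```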

  5. Digital test signal generation: An accurate SNR calibration approach for the DSN

    NASA Technical Reports Server (NTRS)

    Gutierrez-Luaces, B. O.

    1991-01-01

    A new method of generating analog test signals with accurate signal-to-noise ratios (SNRs) is described. High accuracy will be obtained by simultaneous generation of digital noise and signal spectra at a given baseband or bandpass-limited bandwidth. The digital synthesis will provide a test signal embedded in noise with the statistical properties of a stationary random process. Accuracy will only be dependent on test integration time, with a limit imposed by the system quantization noise (expected to be 0.02 dB). Settability will be approximately 0.1 dB. The first digital SNR generator to provide baseband test signals is being built and will be available in early 1991.
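
    The digital-synthesis idea reduces, at its core, to scaling an independently generated noise sequence so the signal-to-noise power ratio is exact before the two are summed:

```python
import numpy as np

rng = np.random.default_rng(7)
n, fs, f0 = 100_000, 1.0e6, 37_500.0
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * f0 * t)
noise = rng.normal(size=n)

target_snr_db = 3.0
p_sig, p_noise = np.mean(signal**2), np.mean(noise**2)
noise *= np.sqrt(p_sig / (p_noise * 10 ** (target_snr_db / 10)))  # exact scaling

achieved = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
print(f"achieved SNR: {achieved:.4f} dB")      # matches the target by construction
```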

  6. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    PubMed

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-01

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397

  7. Advances in Proteomics Data Analysis and Display Using an Accurate Mass and Time Tag Approach

    PubMed Central

    Zimmer, Jennifer S.D.; Monroe, Matthew E.; Qian, Wei-Jun; Smith, Richard D.

    2007-01-01

    Proteomics has recently demonstrated utility in understanding cellular processes on the molecular level as a component of systems biology approaches and for identifying potential biomarkers of various disease states. The large amount of data generated by utilizing high efficiency (e.g., chromatographic) separations coupled to high mass accuracy mass spectrometry for high-throughput proteomics analyses presents challenges related to data processing, analysis, and display. This review focuses on recent advances in nanoLC-FTICR-MS-based proteomics approaches and the accompanying data processing tools that have been developed to display and interpret the large volumes of data being produced. PMID:16429408

  8. A Stationary Wavelet Entropy-Based Clustering Approach Accurately Predicts Gene Expression

    PubMed Central

    Nguyen, Nha; Vo, An; Choi, Inchan

    2015-01-01

    Studying epigenetic landscapes is important to understand the condition for gene regulation. Clustering is a useful approach to study epigenetic landscapes by grouping genes based on their epigenetic conditions. However, classical clustering approaches that often use a representative value of the signals in a fixed-sized window do not fully use the information written in the epigenetic landscapes. Clustering approaches to maximize the information of the epigenetic signals are necessary for better understanding gene regulatory environments. For effective clustering of multidimensional epigenetic signals, we developed a method called Dewer, which uses the entropy of stationary wavelet of epigenetic signals inside enriched regions for gene clustering. Interestingly, the gene expression levels were highly correlated with the entropy levels of epigenetic signals. Dewer separates genes better than a window-based approach in the assessment using gene expression and achieved a correlation coefficient above 0.9 without using any training procedure. Our results show that the changes of the epigenetic signals are useful to study gene regulation. PMID:25383910
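
    A minimal sketch of the idea (not the published Dewer implementation): summarize each gene's signal by the Shannon entropy of its stationary-wavelet detail energies, then cluster on that entropy. Requires PyWavelets and scikit-learn; all signals are synthetic.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def swt_entropy(signal, wavelet="db2", level=3):
    details = [d for _, d in pywt.swt(signal, wavelet, level=level)]
    energy = np.concatenate(details) ** 2
    p = energy / energy.sum()
    return -(p * np.log(p + 1e-12)).sum()

rng = np.random.default_rng(8)
flat = rng.normal(0.0, 0.1, size=(20, 256))    # diffuse epigenetic signal
peaky = rng.normal(0.0, 0.1, size=(20, 256))   # genes with one sharp mark
peaky[:, 120:136] += 3.0
features = np.array([[swt_entropy(s)] for s in np.vstack([flat, peaky])])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster labels:", labels)
```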

  9. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS

    PubMed Central

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-01-01

    With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. A common method is to measure vehicle stability parameters by fusing data from GPS and INS sensors. Although a Kalman filter requires prior knowledge of the model parameters, it is commonly used to fuse data from multiple sensors. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case-study vehicle to measure yaw rate and sideslip angle. Finally, a simulation and a real experiment are performed to verify the advantages of this approach. The experimental results show the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller. PMID:26690154
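
    A bare-bones GPS/INS fusion sketch (a 1D constant-velocity toy, not the paper's two-stage filter or four-wheel model): the motion model drives the prediction, as INS mechanization would, and noisy GPS position fixes drive the update.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [position, velocity]
H = np.array([[1.0, 0.0]])                # GPS measures position only
Q, R = np.diag([1e-4, 1e-2]), np.array([[4.0]])

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(9)
true_pos, true_vel = 0.0, 5.0
for _ in range(100):
    true_pos += true_vel * dt
    x, P = F @ x, F @ P @ F.T + Q                        # predict
    z = true_pos + rng.normal(0.0, 2.0)                  # noisy GPS fix
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
    x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P  # update
print(f"estimated velocity: {x[1]:.2f} m/s (true 5.00)")
```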

  11. An efficient and accurate approach to MTE-MART for time-resolved tomographic PIV

    NASA Astrophysics Data System (ADS)

    Lynch, K. P.; Scarano, F.

    2015-03-01

    The motion-tracking-enhanced MART (MTE-MART; Novara et al. in Meas Sci Technol 21:035401, 2010) has demonstrated the potential to increase the accuracy of tomographic PIV by the combined use of a short sequence of non-simultaneous recordings. A clear bottleneck of the MTE-MART technique has been its computational cost. For large datasets comprising time-resolved sequences, MTE-MART becomes unaffordable and has been barely applied even for the analysis of densely seeded tomographic PIV datasets. A novel implementation is proposed for tomographic PIV image sequences, which strongly reduces the computational burden of MTE-MART, possibly below that of regular MART. The method is a sequential algorithm that produces a time-marching estimation of the object intensity field based on an enhanced guess, which is built upon the object reconstructed at the previous time instant. As the method becomes effective after a number of snapshots (typically 5-10), the sequential MTE-MART (SMTE) is most suited for time-resolved sequences. The computational cost reduction due to SMTE simply stems from the fewer MART iterations required for each time instant. Moreover, the method yields superior reconstruction quality and higher velocity field measurement precision when compared with both MART and MTE-MART. The working principle is assessed in terms of computational effort, reconstruction quality and velocity field accuracy with both synthetic time-resolved tomographic images of a turbulent boundary layer and two experimental databases documented in the literature. The first is the time-resolved data of flow past an airfoil trailing edge used in the study of Novara and Scarano (Exp Fluids 52:1027-1041, 2012); the second is a swirling jet in a water flow. In both cases, the effective elimination of ghost particles is demonstrated in number and intensity within a short temporal transient of 5-10 frames, depending on the seeding density. The increased value of the velocity space
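
    The multiplicative update that SMTE warm-starts from the previous time step, shown on a deliberately tiny 2D problem with two orthogonal line-of-sight "cameras" (row and column sums); real tomographic PIV uses full ray-weight matrices.

```python
import numpy as np

rng = np.random.default_rng(10)
obj = np.zeros((32, 32))
obj[rng.integers(0, 32, 15), rng.integers(0, 32, 15)] = 1.0
proj_rows, proj_cols = obj.sum(axis=1), obj.sum(axis=0)

recon = np.ones_like(obj)     # SMTE would start from the previous reconstruction
mu = 0.5                      # MART relaxation exponent
for _ in range(20):
    recon *= (proj_rows / np.maximum(recon.sum(axis=1), 1e-9))[:, None] ** mu
    recon *= (proj_cols / np.maximum(recon.sum(axis=0), 1e-9))[None, :] ** mu
print("mean reconstruction error:", np.abs(recon - obj).mean())
```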

  12. An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).

    PubMed

    Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert

    2015-08-01

    The active site of mammalian purple acid phosphatases (PAPs) contains a dinuclear iron center with two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)); the heterovalent form is the catalytically active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Catalytically competent model systems are therefore believed to require two sites with different coordination geometries, to stabilize the heterovalent active form, and, in addition, hydrogen-bond donors to enable fixation of the substrate and release of the product. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate-concentration-dependent experiments, leading to pH profiles, catalytic efficiencies, and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255

  13. Accurate Waveforms for Non-spinning Binary Black Holes using the Effective-one-body Approach

    NASA Technical Reports Server (NTRS)

    Buonanno, Alessandra; Pan, Yi; Baker, John G.; Centrella, Joan; Kelly, Bernard J.; McWilliams, Sean T.; vanMeter, James R.

    2007-01-01

    Using numerical relativity as guidance and the natural flexibility of the effective-one-body (EOB) model, we extend the latter so that it can successfully match the numerical relativity waveforms of non-spinning binary black holes during the last stages of inspiral, merger, and ringdown. Here, by successfully, we mean with phase differences ≲ 8% of a gravitational-wave cycle accumulated until the end of the ringdown phase. We obtain this result by simply adding a 4th post-Newtonian-order correction to the EOB radial potential and determining the (constant) coefficient by imposing high matching performance with numerical waveforms of mass ratios m1/m2 = 1, 2/3, 1/2, and 1/4, m1 and m2 being the individual black-hole masses. The final black-hole mass and spin predicted by the numerical simulations are used to determine the ringdown frequency and decay time of three quasi-normal-mode damped sinusoids that are attached to the EOB inspiral-(plunge) waveform at the light ring. The accurate EOB waveforms may be employed for coherent searches of gravitational waves emitted by non-spinning coalescing binary black holes with ground-based laser-interferometer detectors.

  14. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very-large-scale practical computations are increasingly used in poorly understood multiscale, complex, nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability, and safety of our next-generation air and space transportation systems, and of systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data-limited problems. Employing theory from nonlinear dynamical systems, some building blocks for ensuring a higher level of confidence in the PAR of numerical simulations have been revealed by the author and world-expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize the knowledge gained by including nonlinear dynamics, bifurcation, and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes, but is not limited to, the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution-wavelets approach for adaptive numerical dissipation/filter controls for high-speed turbulence, acoustics, and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but not solutions of the underlying governing equations.

  15. A new approach for highly accurate, remote temperature probing using magnetic nanoparticles.

    PubMed

    Zhong, Jing; Liu, Wenzhong; Kong, Li; Morais, Paulo Cesar

    2014-01-01

    In this study, we report on a new approach for remote temperature probing that provides accuracy as good as 0.017°C (0.0055% accuracy) by measuring the magnetisation curve of magnetic nanoparticles. We include the construction of the theoretical model and the inverse calculation method, and explore the impact of the temperature dependence of the saturation magnetisation and of the applied magnetic field range. The reported results are of great significance for establishing safer protocols for hyperthermia therapy and for thermally assisted drug delivery. Likewise, our approach potentially impacts basic science, as it provides a robust thermodynamic tool for noninvasive investigation of cell metabolism. PMID:25315470

  16. Ring polymer molecular dynamics fast computation of rate coefficients on accurate potential energy surfaces in local configuration space: Application to the abstraction of hydrogen from methane

    NASA Astrophysics Data System (ADS)

    Meng, Qingyong; Chen, Jun; Zhang, Dong H.

    2016-04-01

    To compute rate coefficients of the H/D + CH4 → H2/HD + CH3 reactions quickly and accurately, we propose a segmented strategy for fitting a suitable potential energy surface (PES), on which ring-polymer molecular dynamics (RPMD) simulations are performed. On the basis of the recently developed permutation-invariant polynomial neural-network approach [J. Li et al., J. Chem. Phys. 142, 204302 (2015)], PESs in local configuration spaces are constructed. In this strategy, the global PES is divided into three parts, comprising the asymptotic, intermediate, and interaction regions along the reaction coordinate. Since fewer fitting parameters are involved in the local PESs, the computational efficiency of evaluating the PES routine is enhanced by a factor of ~20 compared with the global PES. On the interaction part, the RPMD computational time for the transmission coefficient can be further reduced by cutting off the redundant part of the child trajectories. For H + CH4, good agreement is found among the present RPMD rates, previous simulations, and experimental results. For D + CH4, on the other hand, qualitative agreement between the present RPMD and experimental results is predicted.

  17. Identification of fidgety movements and prediction of CP by the use of computer-based video analysis is more accurate when based on two video recordings.

    PubMed

    Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild

    2013-08-01

    This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software, and movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and areas under the curve were estimated for the first recording, the second recording, and the mean of both. FMs were classified based on the Prechtl approach of general movement assessment, and CP status was reported at 2 years. Nine children developed CP; all of their recordings showed absent FMs. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than that from a single recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or the prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies. PMID:23343036

  18. Alternative Computational Approaches for Probabilistic Fatigue Analysis

    NASA Technical Reports Server (NTRS)

    Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Moore, N. R.; Grigoriu, M.

    1995-01-01

    The feasibility of alternatives to direct Monte Carlo simulation for failure-probability computations is discussed. First- and second-order reliability methods are applied to fatigue crack growth and low-cycle fatigue structural failure modes to illustrate typical problems.
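
    As a pointer to what direct Monte Carlo simulation means here, the sketch below estimates a failure probability P(g < 0) for an assumed toy limit state (capacity minus demand); the distributions are illustrative only, not taken from the report.

      # Illustrative sketch: direct Monte Carlo estimation of a failure
      # probability. The limit-state function g and the distributions are
      # assumptions for the demo.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 200_000
      strength = rng.normal(10.0, 1.0, n)   # capacity
      load     = rng.normal(7.0, 1.5, n)    # demand
      g = strength - load                   # failure when g < 0
      p_fail = np.mean(g < 0.0)
      # standard error of the estimator, to judge how many samples are needed
      se = np.sqrt(p_fail * (1.0 - p_fail) / n)
      print(f"P_f ~ {p_fail:.4e} +/- {se:.1e}")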

  19. Accurate multi-source forest species mapping using the multiple spectral-spatial classification approach

    NASA Astrophysics Data System (ADS)

    Stavrakoudis, Dimitris; Gitas, Ioannis; Karydas, Christos; Kolokoussis, Polychronis; Karathanassi, Vassilia

    2015-10-01

    This paper proposes an efficient methodology for combining multiple remotely sensed images in order to increase the classification accuracy in complex forest species mapping tasks. The proposed scheme follows a decision fusion approach, whereby each image is first classified separately by means of a pixel-wise Fuzzy-Output Support Vector Machine (FO-SVM) classifier. Subsequently, the multiple results are fused according to the so-called multiple spectral-spatial classifier using the minimum spanning forest (MSSC-MSF) approach, which constitutes an effective post-regularization procedure for enhancing the result of a single pixel-based classification. For this purpose, the original MSSC-MSF has been extended to handle multiple classifications. In particular, the fuzzy outputs of the pixel-based classifiers are stacked and used to grow the MSF, whereas the markers are also determined considering both classifications. The proposed methodology has been tested on a challenging forest species mapping task in northern Greece, considering a multispectral (GeoEye) and a hyperspectral (CASI) image. The pixel-wise classifications resulted in overall accuracies (OA) of 68.71% for the GeoEye and 77.95% for the CASI image, both characterized by high levels of speckle noise. Applying the proposed multi-source MSSC-MSF fusion, the OA climbs to 90.86%, which is attributed both to the ability of MSSC-MSF to tackle the salt-and-pepper effect and to the fact that the fusion approach exploits the relative advantages of both information sources.

  20. A Proposed Frequency Synthesis Approach to Accurately Measure the Angular Position of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Bagri, D. S.

    2005-01-01

    This article describes an approach for measuring the angular position of a spacecraft with reference to a nearby calibration source (quasar) with an accuracy of a few tenths of a nanoradian using a very long baseline interferometer of two antennas that measures the interferometer phase with a modest accuracy. It employs (1) radio frequency phase to determine the spacecraft position with high precision and (2) multiple delay measurements using either frequency tones or telemetry signals at different frequency spacings to resolve ambiguity of the location of the fringe (cycle) containing the direction of the spacecraft.
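
    The ambiguity-resolution idea in (2) can be illustrated numerically: the phase of one tone gives delay only modulo a cycle, but the phase difference across a pair of tones spaced by Δf yields a coarse, unambiguous delay that selects the correct cycle for the precise single-tone phase. The frequencies and true delay below are assumed toy values, not an actual DSN configuration.

      # Illustrative sketch of cycle-ambiguity resolution from two tones.
      import numpy as np

      f1, f2 = 8.40e9, 8.44e9          # two assumed tone frequencies (Hz)
      tau_true = 3.7e-9                # assumed true geometric delay (s)

      phi1 = (2*np.pi*f1*tau_true) % (2*np.pi)   # measured phases (mod 2*pi)
      phi2 = (2*np.pi*f2*tau_true) % (2*np.pi)

      dphi = (phi2 - phi1) % (2*np.pi)
      tau_coarse = dphi / (2*np.pi*(f2 - f1))    # unambiguous if tau < 1/df = 25 ns

      # use the coarse delay to pick the integer cycle for the precise f1 phase
      n_cyc = np.round(f1*tau_coarse - phi1/(2*np.pi))
      tau_fine = (phi1/(2*np.pi) + n_cyc) / f1
      print(tau_coarse, tau_fine)      # tau_fine recovers tau_true to high precision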

  1. Computational dynamics for robotics systems using a non-strict computational approach

    NASA Technical Reports Server (NTRS)

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.

  2. Machine learning and synthetic aperture refocusing approach for more accurate masking of fish bodies in 3D PIV data

    NASA Astrophysics Data System (ADS)

    Ford, Logan; Bajpayee, Abhishek; Techet, Alexandra

    2015-11-01

    3D particle image velocimetry (PIV) is becoming a popular technique for studying biological flows. PIV images that contain fish or other animals around which flow is being studied need to be appropriately masked in order to remove the animal body from the 3D reconstructed volumes prior to calculating particle displacement vectors. Presented here is a machine learning and synthetic aperture (SA) refocusing based approach for more accurate masking of fish from reconstructed intensity fields for 3D PIV purposes. Using prior knowledge about the 3D shape and appearance of the fish, along with SA refocused images at arbitrarily oriented focal planes, the location and orientation of a fish in a reconstructed volume can be accurately determined. Once the location and orientation of the fish are determined, it can be masked out.

  3. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    NASA Astrophysics Data System (ADS)

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

    Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.

  4. A false sense of security? Can tiered approach be trusted to accurately classify immunogenicity samples?

    PubMed

    Jaki, Thomas; Allacher, Peter; Horling, Frank

    2016-09-01

    Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic is crucially important for monitoring the unwanted immune response. Usually a multi-tiered approach, which initially screens rapidly for positive samples that are subsequently confirmed in a separate assay, is employed for testing patient samples for ADA activity. In this manuscript we evaluate the ability of different methods to classify subjects with screening and competition-based confirmatory assays. We find that the method used for confirmation is most important for the overall performance of the multi-stage process, and that a t-test performs best when differences are moderate to large. Moreover, we find that, when the differences between positive and negative samples are not sufficiently large, using a competition-based confirmation step yields poor classification of positive samples. PMID:27262992
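
    A minimal sketch of a t-test-based confirmatory step of the kind compared in this study: a sample is confirmed positive when adding excess free drug significantly suppresses the assay signal. Replicate counts, signal levels, and the cut-off are assumed toy values, not the assay design evaluated in the record.

      # Illustrative sketch of a competition-based confirmation via t-test.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      unspiked = rng.normal(1.00, 0.05, 6)   # screening signal, 6 replicates
      spiked   = rng.normal(0.80, 0.05, 6)   # signal with excess free drug

      t, p = stats.ttest_ind(unspiked, spiked, alternative="greater")
      confirmed = p < 0.01                   # assumed significance cut-off
      print(f"t = {t:.2f}, p = {p:.4f}, confirmed positive: {confirmed}")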

  5. Human brain mapping: Experimental and computational approaches

    SciTech Connect

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J.; Sanders, J.; Belliveau, J.

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high-performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of the spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized the development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  6. An Approach to Developing Computer Catalogs

    ERIC Educational Resources Information Center

    MacDonald, Robin W.; Elrod, J. McRee

    1973-01-01

    A method of developing computer catalogs is proposed which does not require unit card conversion but rather the accumulation of data from operating programs. It is proposed that the bibliographic and finding functions of the catalog be separated, with the latter being the first automated. (8 references) (Author)

  7. Computed tomography in trauma: An atlas approach

    SciTech Connect

    Toombs, B.D.; Sandler, C.

    1986-01-01

    This book discusses computed tomography (CT) in trauma. The text is organized according to mechanism of injury and site of injury. In addition to CT, some correlation with other imaging modalities is included. Blunt trauma, penetrating trauma, complications and sequelae of trauma, and the use of other modalities are covered.

  8. Designing Your Computer Curriculum: A Process Approach.

    ERIC Educational Resources Information Center

    Wepner, Shelley; Kramer, Steven

    Four essential steps for integrating computer technology into a school district's reading curriculum--needs assessment, planning, implementation, and evaluation--are described in terms of what educators can do at the district and building level to facilitate optimal instructional conditions for students. With regard to needs assessment,…

  9. A novel approach for accurate identification of splice junctions based on hybrid algorithms.

    PubMed

    Mandal, Indrajit

    2015-01-01

    The precise prediction of splice junctions as 'exon-intron' or 'intron-exon' boundaries in a given DNA sequence is an important task in bioinformatics, and the main challenge is to determine the splice sites in the coding region. Due to the intrinsic complexity of, and uncertainty in, gene sequences, the adoption of data mining methods is becoming increasingly popular, and various methods based on different strategies have been developed. This article focuses on the construction of new hybrid machine learning ensembles that solve the splice junction task more effectively. A novel supervised feature reduction technique is developed using entropy-based fuzzy rough set theory optimized by a greedy hill-climbing algorithm. The average prediction accuracy achieved is above 98% with a 95% confidence interval. The performance of the proposed methods is evaluated using various metrics to establish the statistical significance of the results. The experiments are conducted using various schemes with human DNA sequence data. The obtained results are highly promising as compared with the state-of-the-art approaches in the literature. PMID:25203504

  10. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.

    PubMed

    Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L

    2015-01-01

    Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are almost useless for prediction, and they cannot be used to advance the intake of drugs early enough to be effective in neutralizing the pain. To address this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103

  11. PSI: a comprehensive and integrative approach for accurate plant subcellular localization prediction.

    PubMed

    Liu, Lili; Zhang, Zijun; Mei, Qian; Chen, Ming

    2013-01-01

    Predicting the subcellular localization of proteins overcomes the major drawbacks of high-throughput localization experiments, which are costly and time-consuming. However, current subcellular localization predictors are limited in scope and accuracy; in particular, most predictors perform well on certain locations or data sets and poorly on others. Here, we present PSI, a novel high-accuracy web server for plant subcellular localization prediction. PSI derives the wisdom of multiple specialized predictors via a joint approach of a group decision-making strategy and machine learning methods to give an integrated best result. The overall accuracy obtained (up to 93.4%) was higher than that of the best individual predictor (CELLO) by ~10.7%, and the precision of each predictable subcellular location (more than 80%) far exceeds that of the individual predictors. It can also deal with multi-localization proteins. PSI is expected to be a powerful tool in protein location engineering as well as in the plant sciences, while the strategy employed could be applied to other integrative problems. A user-friendly web server, PSI, has been developed for free access at http://bis.zju.edu.cn/psi/. PMID:24194827

  12. A binding free energy decomposition approach for accurate calculations of the fidelity of DNA polymerases

    PubMed Central

    Rucker, Robert; Oelschlaeger, Peter; Warshel, Arieh

    2010-01-01

    DNA polymerase β (pol β) is a small eukaryotic enzyme with the ability to repair short single-stranded DNA gaps that has found use as a model system for larger replicative DNA polymerases. For all DNA polymerases, the factors determining their catalytic power and fidelity are the interactions between the bases of the base pair, amino acids near the active site, and the two magnesium ions. In this report, we study effects of all three aspects on human pol β transition state (TS) binding free energies by reproducing a consistent set of experimentally determined data for different structures. Our calculations comprise the combination of four different base pairs (incoming pyrimidine nucleotides incorporated opposite both matched and mismatched purines) with four different pol β structures (wild type and three separate mutations of ionized residues to alanine). We decompose the incoming deoxynucleoside 5′-triphosphate-TS, and run separate calculations for the neutral base part and the highly charged triphosphate part, using different dielectric constants in order to account for the specific electrostatic environments. This new approach improves our ability to predict the effect of matched and mismatched base pairing and of mutations in DNA polymerases on fidelity and may be a useful tool in studying the potential of DNA polymerase mutations in the development of cancer. It also supports our point of view with regards to the origin of the structural control of fidelity, allowing for a quantified description of the fidelity of DNA polymerases. PMID:19842163

  13. LOCUSTRA: accurate prediction of local protein structure using a two-layer support vector machine approach.

    PubMed

    Zimmermann, Olav; Hansmann, Ulrich H E

    2008-09-01

    Constraint generation for 3d structure prediction and structure-based database searches benefit from fine-grained prediction of local structure. In this work, we present LOCUSTRA, a novel scheme for the multiclass prediction of local structure that uses two layers of support vector machines (SVM). Using a 16-letter structural alphabet from de Brevern et al. (Proteins: Struct., Funct., Bioinf. 2000, 41, 271-287), we assess its prediction ability for an independent test set of 222 proteins and compare our method to three-class secondary structure prediction and direct prediction of dihedral angles. The prediction accuracy is Q16=61.0% for the 16 classes of the structural alphabet and Q3=79.2% for a simple mapping to the three secondary classes helix, sheet, and coil. We achieve a mean phi(psi) error of 24.74 degrees (38.35 degrees) and a median RMSDA (root-mean-square deviation of the (dihedral) angles) per protein chain of 52.1 degrees. These results compare favorably with related approaches. The LOCUSTRA web server is freely available to researchers at http://www.fz-juelich.de/nic/cbb/service/service.php. PMID:18763837

  14. Neural network approach to quantum-chemistry data: Accurate prediction of density functional theory energies

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.; Lomakina, Ekaterina I.

    2009-08-01

    An artificial neural network (ANN) approach has been applied to estimate density functional theory (DFT) energies with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for the ANN training, cross-validation, and testing, applying the BLYP, B3LYP, and BMK density functionals; Hartree-Fock results are reported for comparison. Furthermore, constitutional molecular descriptors (CD) and quantum-chemical molecular descriptors (QD) were used to build the calibration model. The neural network structure optimization, leading to four to five hidden neurons, was also carried out. The use of several low-level energy values was found to greatly reduce the prediction error. The expected error (mean absolute deviation) of the ANN approximation to DFT energies was 0.6±0.2 kcal mol⁻¹. In addition, a comparison of the different density functionals and basis sets and a comparison with multiple linear regression results are also provided; the CDs were found to overcome the limitations of the QDs. Furthermore, an effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and benchmark results were provided.
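
    The calibration idea (learn the correction from a low-level energy to the high-level target from the low-level energy plus descriptors) can be sketched as follows. The data are synthetic and scikit-learn stands in for the authors' setup; only the four-hidden-neuron choice is taken from the record.

      # Illustrative sketch: a small network learns the low-to-high-level
      # energy correction from synthetic energies and descriptors.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n = 208                                  # same size as the study's data set
      e_low = rng.normal(-200.0, 50.0, n)      # synthetic low-level energies
      desc = rng.normal(0.0, 1.0, (n, 3))      # synthetic molecular descriptors
      e_high = e_low + 0.5*desc[:, 0] - 0.2*desc[:, 1] + rng.normal(0.0, 0.1, n)

      X = np.column_stack([e_low, desc])
      y = e_high - e_low                       # target: correction to the low level
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # four hidden neurons, as the record reports for the optimized network
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(4,),
                                         max_iter=5000, random_state=0))
      model.fit(X_tr, y_tr)
      mad = np.mean(np.abs(model.predict(X_te) - y_te))   # == error in final energy
      print(f"mean absolute deviation: {mad:.3f}")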

  15. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    PubMed Central

    Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.

    2015-01-01

    Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are almost useless for prediction, and they cannot be used to advance the intake of drugs early enough to be effective in neutralizing the pain. To address this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103

  16. High order accurate and low dissipation method for unsteady compressible viscous flow computation on helicopter rotor in forward flight

    NASA Astrophysics Data System (ADS)

    Xu, Li; Weng, Peifen

    2014-02-01

    An improved fifth-order weighted essentially non-oscillatory (WENO-Z) scheme combined with the moving overset grid technique has been developed to compute unsteady compressible viscous flows on a helicopter rotor in forward flight. In order to enforce periodic rotation and pitching of the rotor and relative motion between rotor blades, the moving overset grid technique is extended, and a special judgement standard is introduced near the odd surface of the blade grid during the search for donor cells using the Inverse Map method. The WENO-Z scheme is adopted for reconstructing the left and right state values, with the Roe Riemann solver updating the inviscid fluxes, and is compared with the monotone upwind scheme for scalar conservation laws (MUSCL) and the classical WENO scheme. Since the WENO schemes require a six-point stencil to build the fifth-order flux, a method of three layers of fringes for hole boundaries and artificial external boundaries is proposed to carry out flow-information exchange between chimera grids. The time advance of the unsteady solution is performed by a fully implicit dual time stepping method with Newton-type LU-SGS subiteration, where the solutions of a pseudo-steady computation serve as the initial fields for the unsteady flow computation. Numerical results for a non-variable-pitch rotor and a periodic variable-pitch rotor in forward flight reveal that the approach can effectively capture the vortex wake with low dissipation and reach periodic solutions very quickly.
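
    For context, the standard WENO-Z weight construction (our summary of the Borges et al. formulation, not a formula quoted from this paper) replaces the classical smoothness-based weights by

    $$
    \omega_k = \frac{\alpha_k}{\sum_{j=0}^{2}\alpha_j},
    \qquad
    \alpha_k = d_k\!\left(1 + \left(\frac{\tau_5}{\beta_k + \epsilon}\right)^{\!q}\right),
    \qquad
    \tau_5 = \lvert \beta_0 - \beta_2 \rvert,
    $$

    where the d_k are the ideal linear weights, the β_k are the smoothness indicators of the three candidate stencils, ε avoids division by zero, and the global indicator τ₅ is what lowers the dissipation relative to classical WENO.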

  17. A simple, efficient, and high-order accurate curved sliding-mesh interface approach to spectral difference method on coupled rotating and stationary domains

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Liang, Chunlei

    2015-08-01

    This paper presents a simple, efficient, and high-order accurate sliding-mesh interface approach to the spectral difference (SD) method. We demonstrate the approach by solving the two-dimensional compressible Navier-Stokes equations on quadrilateral grids. This approach is an extension of the straight mortar method originally designed for stationary domains [7,8]. Our sliding method creates curved dynamic mortars on sliding-mesh interfaces to couple rotating and stationary domains. On the nonconforming sliding-mesh interfaces, the related variables are first projected from cell faces to mortars to compute common fluxes, and then the common fluxes are projected back from the mortars to the cell faces to ensure conservation. To verify the spatial order of accuracy of the sliding-mesh spectral difference (SSD) method, both inviscid and viscous flow cases are tested. It is shown that the SSD method preserves the high-order accuracy of the SD method. Meanwhile, the SSD method is found to be very efficient in terms of computational cost. This novel sliding-mesh interface method is very suitable for parallel processing with domain decomposition. It can be applied to a wide range of problems, such as the hydrodynamics of marine propellers, the aerodynamics of rotorcraft, wind turbines, and oscillating wing power generators, etc.

  18. A declarative approach to visualizing concurrent computations

    SciTech Connect

    Roman, G.C.; Cox, K.C. )

    1989-10-01

    That visualization can play a key role in the exploration of concurrent computations is central to the ideas presented. Equally important, although given less emphasis, is concern that the full potential of visualization may not be reached unless the art of generating beautiful pictures is rooted in a solid, formally technical foundation. The authors show that program verification provides a formal framework around which such a foundation can be built. Making these ideas a practical reality will require both research and experimentation.

  19. A fourth-order accurate curvature computation in a level set framework for two-phase flows subjected to surface tension forces

    NASA Astrophysics Data System (ADS)

    Coquerelle, Mathieu; Glockner, Stéphane

    2016-01-01

    We propose an accurate and robust fourth-order curvature extension algorithm in a level set framework for the transport of the interface. The method is based on the Continuum Surface Force approach and is shown to efficiently calculate surface tension forces for two-phase flows. In this framework, the accuracy of the algorithms relies mostly on the precise computation of the surface curvature, which we propose to accomplish using a two-step algorithm: first computing a reliable fourth-order curvature estimation from the level set function, and then extending this curvature rigorously in the vicinity of the surface, following the Closest Point principle. The algorithm is easy to implement and to integrate into existing solvers, and can easily be extended to 3D. We propose a detailed analysis of the geometrical and numerical criteria responsible for the appearance of spurious currents, a well-known phenomenon observed in various numerical frameworks. We study the effectiveness of this novel numerical method on state-of-the-art test cases, showing that the resulting curvature estimate significantly reduces parasitic currents. In addition, the proposed approach converges at fourth order with respect to the spatial discretization, which is two orders of magnitude better than currently available algorithms. We also show the necessity of high-order transport methods for the surface by studying the case of the 2D advection of a column at equilibrium, thereby proving the robustness of the proposed approach. The algorithm is further validated on more complex test cases such as a rising bubble.
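
    The quantity at the heart of this record is the interface curvature of the level-set field, κ = ∇·(∇φ/|∇φ|). The sketch below evaluates it with plain second-order central differences on an assumed 2D grid; the paper's contribution is a fourth-order estimate plus the Closest Point extension, which this does not reproduce.

      # Illustrative sketch: curvature kappa = div(grad(phi)/|grad(phi)|)
      # via second-order central differences (not the record's 4th-order scheme).
      import numpy as np

      def curvature(phi, h):
          px, py = np.gradient(phi, h)           # grad(phi)
          norm = np.sqrt(px**2 + py**2) + 1e-12  # avoid division by zero
          nx, ny = px/norm, py/norm              # unit normal field
          return np.gradient(nx, h, axis=0) + np.gradient(ny, h, axis=1)

      # test on a circle of radius 0.25: exact curvature is 1/0.25 = 4
      h = 1.0/256
      x, y = np.meshgrid(np.arange(-0.5, 0.5, h),
                         np.arange(-0.5, 0.5, h), indexing="ij")
      phi = np.sqrt(x**2 + y**2) - 0.25          # signed distance to the circle
      kappa = curvature(phi, h)
      print(kappa[np.abs(phi) < h].mean())       # ~ 4 near the interface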

  20. An efficient and accurate technique to compute the absorption, emission, and transmission of radiation by the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.

    1990-01-01

    CO2 comprises 95% of the composition of the Martian atmosphere, but the Martian atmosphere also has a high aerosol content, with dust particle sizes varying from less than 0.2 to greater than 3.0 microns. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15-micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult: aerosol radiative transfer requires a multiple-scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment in most models, which causes inaccuracies, a treatment called the exponential-sum or k-distribution approximation was developed. The chief advantage of the exponential-sum approach is that the integration of f(k) over k space can be computed more quickly than the integration of k_ν over frequency. The exponential-sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
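
    The exponential-sum (k-distribution) idea can be made concrete in a few lines: the band transmittance, an average of exp(-k_ν u) over thousands of frequencies, is re-expressed as a short weighted sum over the sorted absorption coefficients. The synthetic spectrum and bin count below are assumptions for the demo.

      # Illustrative sketch of the exponential-sum / k-distribution approximation.
      import numpy as np

      rng = np.random.default_rng(4)
      k_nu = 10.0**rng.normal(0.0, 1.5, 20000)   # synthetic absorption spectrum
      u = 0.05                                   # absorber amount

      T_lbl = np.exp(-k_nu*u).mean()             # "line-by-line" reference

      k_sorted = np.sort(k_nu)                   # k-distribution: sort, then bin
      bins = np.array_split(k_sorted, 8)
      wgt = np.array([len(b) for b in bins]) / k_nu.size
      k_rep = np.array([b.mean() for b in bins])
      T_esa = np.sum(wgt*np.exp(-k_rep*u))       # 8-term exponential sum

      print(T_lbl, T_esa)                        # the short sum tracks the integral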

  1. Fast and accurate resonance assignment of small-to-large proteins by combining automated and manual approaches.

    PubMed

    Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A; Lundström, Patrik

    2015-01-01

    The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available. PMID:25569628

  2. Information theoretic approaches to multidimensional neural computations

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Jeffrey D.

    Many systems in nature process information by transforming inputs from their environments into observable output states. These systems are often difficult to study because they are performing computations on multidimensional inputs with many degrees of freedom using highly nonlinear functions. The work presented in this dissertation deals with some of the issues involved with characterizing real-world input/output systems and understanding the properties of idealized systems using information theoretic methods. Using the principle of maximum entropy, a family of models are created that are consistent with certain measurable correlations from an input/output dataset but are maximally unbiased in all other respects, thereby eliminating all unjustified assumptions about the computation. In certain cases, including spiking neurons, we show that these models also minimize the mutual information. This property gives one the advantage of being able to identify the relevant input/output statistics by calculating their information content. We argue that these maximum entropy models provide a much needed quantitative framework for characterizing and understanding sensory processing neurons that are selective for multiple stimulus features. To demonstrate their usefulness, these ideas are applied to neural recordings from macaque retina and thalamus. These neurons, which primarily respond to two stimulus features, are shown to be well described using only first and second order statistics, indicating that their firing rates encode information about stimulus correlations. In addition to modeling multi-feature computations in the relevant feature space, we also show that maximum entropy models are capable of discovering the relevant feature space themselves. This technique overcomes the disadvantages of two commonly used dimensionality reduction methods and is explored using several simulated neurons, as well as retinal and thalamic recordings. Finally, we ask how neurons in a
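
    For reference (a standard result, stated in our notation rather than quoted from the dissertation), the maximum entropy distribution consistent with measured expectation values ⟨f_i⟩ has the exponential-family form

    $$
    p(\mathbf{s}) = \frac{1}{Z(\boldsymbol{\lambda})}
    \exp\!\Big(\sum_i \lambda_i f_i(\mathbf{s})\Big),
    \qquad
    Z(\boldsymbol{\lambda}) = \sum_{\mathbf{s}}
    \exp\!\Big(\sum_i \lambda_i f_i(\mathbf{s})\Big),
    $$

    with the multipliers λ_i fitted so that the model expectations reproduce the measured ones. Constraining first- and second-order statistics, as in the retinal and thalamic analyses described above, makes the f_i linear and quadratic features of the input.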

  3. Acoustic gravity waves: A computational approach

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Dutt, P. K.

    1987-01-01

    This paper discusses numerical solutions of a hyperbolic initial boundary value problem that arises from acoustic wave propagation in the atmosphere. Field equations are derived from the atmospheric fluid flow governed by the Euler equations. The resulting original problem is nonlinear. A first order linearized version of the problem is used for computational purposes. The main difficulty in the problem as with any open boundary problem is in obtaining stable boundary conditions. Approximate boundary conditions are derived and shown to be stable. Numerical results are presented to verify the effectiveness of these boundary conditions.

  4. A Computational Approach to Competitive Range Expansions

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Poxleitner, Gabriele; Hebisch, Elke; Frey, Erwin; Opitz, Madeleine

    2014-03-01

    Bacterial communities represent complex and dynamic ecological systems. Environmental conditions and microbial interactions determine whether a bacterial strain survives an expansion to new territory. In our work, we studied competitive range expansions in a model system of three Escherichia coli strains. In this system, a colicin producing strain competed with a colicin resistant, and with a colicin sensitive strain for new territory. Genetic engineering allowed us to tune the strains' growth rates and to study their expansion in distinct ecological scenarios (with either cyclic or hierarchical dominance). The control over growth rates also enabled us to construct and to validate a predictive computational model of the bacterial dynamics. The model rested on an agent-based, coarse-grained description of the expansion process and we conducted independent experiments on the growth of single-strain colonies for its parametrization. Furthermore, the model considered the long-range nature of the toxin interaction between strains. The integration of experimental analysis with computational modeling made it possible to quantify how the level of biodiversity depends on the interplay between bacterial growth rates, the initial composition of the inoculum, and the toxin range.

  5. A unified approach to computational drug discovery.

    PubMed

    Tseng, Chih-Yuan; Tuszynski, Jack

    2015-11-01

    It has been reported that a slowdown in the development of new medical therapies is affecting clinical outcomes. The FDA has thus initiated the Critical Path Initiative project investigating better approaches. We review the current strategies in drug discovery and focus on the advantages of the maximum entropy method being introduced in this area. The maximum entropy principle is derived from statistical thermodynamics and has been demonstrated to be an inductive inference tool. We propose a unified method to drug discovery that hinges on robust information processing using entropic inductive inference. Increasingly, applications of maximum entropy in drug discovery employ this unified approach and demonstrate the usefulness of the concept in the area of pharmaceutical sciences. PMID:26189935

  6. Unbiased QM/MM approach using accurate multipoles from a linear scaling DFT calculation with a systematic basis set

    NASA Astrophysics Data System (ADS)

    Mohr, Stephan; Genovese, Luigi; Ratcliff, Laura; Masella, Michel

    The quantum mechanics/molecular mechanics (QM/MM) method is a popular approach that allows one to perform atomistic simulations using different levels of accuracy. Since only the essential part of the simulation domain is treated with a highly precise (but also expensive) QM method, whereas the remaining parts are handled using a less accurate level of theory, this approach considerably extends the total system size that can be simulated without a notable loss of accuracy. In order to couple the QM and MM regions we use an approximation of the electrostatic potential based on a multipole expansion. The multipoles of the QM region are determined from the results of a linear-scaling Density Functional Theory (DFT) calculation using a set of adaptive, localized basis functions, as implemented within the BigDFT software package. As this determination comes at virtually no extra cost compared to the QM calculation, the coupling between the QM and MM regions can be done very efficiently. In this presentation I will demonstrate the accuracy of both the linear-scaling DFT approach itself and the multipole-expansion approximation of the electrostatic potential, and show some first QM/MM applications using the aforementioned approach.

  7. Computational approaches to natural product discovery

    PubMed Central

    Medema, Marnix H.; Fischbach, Michael A.

    2016-01-01

    From the earliest Streptomyces genome sequences, the promise of natural product genome mining has been captivating: genomics and bioinformatics would transform compound discovery from an ad hoc pursuit to a high-throughput endeavor. Until recently, however, genome mining has advanced natural product discovery only modestly. Here, we argue that the development of algorithms to mine the continuously increasing amounts of (meta)genomic data will enable the promise of genome mining to be realized. We review computational strategies that have been developed to identify biosynthetic gene clusters in genome sequences and predict the chemical structures of their products. We then discuss networking strategies that can systematize large volumes of genetic and chemical data, and connect genomic information to metabolomic and phenotypic data. Finally, we provide a vision of what natural product discovery might look like in the future, specifically considering long-standing questions in microbial ecology regarding the roles of metabolites in interspecies interactions. PMID:26284671

  8. Metabolomics and Diabetes: Analytical and Computational Approaches

    PubMed Central

    Sas, Kelli M.; Karnovsky, Alla; Michailidis, George

    2015-01-01

    Diabetes is characterized by altered metabolism of key molecules and regulatory pathways. The phenotypic expression of diabetes and associated complications encompasses complex interactions between genetic, environmental, and tissue-specific factors that require an integrated understanding of perturbations in the network of genes, proteins, and metabolites. Metabolomics attempts to systematically identify and quantitate small molecule metabolites from biological systems. The recent rapid development of a variety of analytical platforms based on mass spectrometry and nuclear magnetic resonance have enabled identification of complex metabolic phenotypes. Continued development of bioinformatics and analytical strategies has facilitated the discovery of causal links in understanding the pathophysiology of diabetes and its complications. Here, we summarize the metabolomics workflow, including analytical, statistical, and computational tools, highlight recent applications of metabolomics in diabetes research, and discuss the challenges in the field. PMID:25713200

  9. Computational Approaches for Understanding Energy Metabolism

    PubMed Central

    Shestov, Alexander A; Barker, Brandon; Gu, Zhenglong; Locasale, Jason W

    2013-01-01

    There has been a surge of interest in understanding the regulation of metabolic networks involved in disease in recent years. Quantitative models are increasingly being used to interrogate the metabolic pathways that are contained within this complex disease biology. At the core of this effort is the mathematical modeling of central carbon metabolism involving glycolysis and the citric acid cycle (referred to as energy metabolism). Here we discuss several approaches used to quantitatively model metabolic pathways relating to energy metabolism and discuss their formalisms, successes, and limitations. PMID:23897661

  10. Hyperdimensional computing approach to word sense disambiguation.

    PubMed

    Berster, Bjoern-Toby; Goodwin, J Caleb; Cohen, Trevor

    2012-01-01

    Coping with the ambiguous meanings of words has long been a hurdle for information retrieval and natural language processing systems. This paper presents a new word sense disambiguation approach using high-dimensional binary vectors, which encode meanings of words based on the different contexts in which they occur. In our approach, a randomly constructed vector is assigned to each ambiguous term, and another to each sense of this term. In the context of a sense-annotated training set, a reversible vector transformation is used to combine these vectors, such that both the term and the sense assigned to a context in which the term occurs are encoded into vectors representing the surrounding terms in this context. When a new context is encountered, the information required to disambiguate this term is extracted from the trained semantic vectors for the terms in this context by reversing the vector transformation to recover the correct sense of the term. On repeated experiments using ten-fold cross-validation and a standard test set, we obtained results comparable to the best obtained in previous studies. These results demonstrate the potential of our methodology, and suggest directions for future research. PMID:23304389
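
    A minimal sketch of the reversible vector transformation described here, using XOR binding on binary hypervectors; the dimension, vocabulary, and training scheme are assumed toy choices, not the paper's sense-annotated setup.

      # Illustrative sketch: XOR binding/unbinding of binary hypervectors.
      import numpy as np

      D = 10_000
      rng = np.random.default_rng(6)
      rand_vec = lambda: rng.integers(0, 2, D, dtype=np.int8)

      term = rand_vec()                               # vector for the ambiguous term
      senses = {"bank/river": rand_vec(), "bank/money": rand_vec()}

      # training: store term XOR sense in the vector of each context word
      context_mem = {"water": term ^ senses["bank/river"],
                     "loan":  term ^ senses["bank/money"]}

      def disambiguate(context_word):
          probe = context_mem[context_word] ^ term    # unbind: recover sense vector
          dist = {s: np.sum(probe ^ v) for s, v in senses.items()}
          return min(dist, key=dist.get)              # nearest sense (Hamming)

      print(disambiguate("water"))   # -> bank/river
      print(disambiguate("loan"))    # -> bank/money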

  11. Assessment of the extended Koopmans' theorem for the chemical reactivity: Accurate computations of chemical potentials, chemical hardnesses, and electrophilicity indices.

    PubMed

    Yildiz, Dilan; Bozkaya, Uğur

    2016-01-30

    The extended Koopmans' theorem (EKT) provides a straightforward way to compute ionization potentials and electron affinities from any level of theory. Although it is widely applied to ionization potentials, the EKT approach has not previously been applied to the evaluation of chemical reactivity. We present the first benchmarking study to investigate the performance of the EKT methods for predictions of chemical potentials (μ) (hence electronegativities), chemical hardnesses (η), and electrophilicity indices (ω). We assess the performance of the EKT approaches for post-Hartree-Fock methods, such as Møller-Plesset perturbation theory, the coupled-electron pair theory, and their orbital-optimized counterparts, for the evaluation of chemical reactivity. In particular, the results of the orbital-optimized coupled-electron pair theory method (with the aug-cc-pVQZ basis set) for predictions of chemical reactivity are very promising; the corresponding mean absolute errors are 0.16, 0.28, and 0.09 eV for μ, η, and ω, respectively. PMID:26458329

  12. Numerical Computation of Sensitivities and the Adjoint Approach

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.
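
    In compact form (our notation, not necessarily the report's), the adjoint recipe discussed here reads: for an objective J(u, p) constrained by a state equation R(u, p) = 0,

    $$
    \frac{\mathrm{d}J}{\mathrm{d}p}
    = \frac{\partial J}{\partial p} - \lambda^{\mathsf{T}}\frac{\partial R}{\partial p},
    \qquad\text{where}\qquad
    \left(\frac{\partial R}{\partial u}\right)^{\mathsf{T}}\lambda
    = \left(\frac{\partial J}{\partial u}\right)^{\mathsf{T}}.
    $$

    A single linear adjoint solve for λ then yields dJ/dp for every parameter at once, which is what makes the approach attractive for high-dimensional design problems.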

  13. Starting Computer Science Using C++ with Objects: A Workable Approach.

    ERIC Educational Resources Information Center

    Connolly, Mary V.

    Saint Mary's College (Indiana) offers a minor program in computer science. The program's introductory computer science class traditionally taught Pascal. The decision to change the introductory programming language to C++ with an object oriented approach was made when it became clear that there were good texts available for beginning students.…

  14. A Social Constructivist Approach to Computer-Mediated Instruction.

    ERIC Educational Resources Information Center

    Pear, Joseph J.; Crone-Todd, Darlene E.

    2002-01-01

    Describes a computer-mediated teaching system called computer-aided personalized system of instruction (CAPSI) that incorporates a social constructivist approach, maintaining that learning occurs primarily through a socially interactive process. Discusses use of CAPSI in an undergraduate course at the University of Manitoba that showed students…

  15. Computational fluid dynamics in ventilation: Practical approach

    NASA Astrophysics Data System (ADS)

    Fontaine, J. R.

    The potential of computational fluid dynamics (CFD) for designing ventilation systems is shown through the simulation of five practical cases. The following examples are considered: capture of pollutants at a surface-treating tank equipped with a unilateral suction slot in the presence of a disturbing air draft opposed to the suction; dispersion of solid aerosols inside fume cupboards; a performance comparison of two general ventilation systems in a silkscreen printing workshop; ventilation of a large open painting area; and oil fog removal inside a mechanical engineering workshop. Whereas the first two problems are analyzed through two-dimensional numerical simulations, the three other cases require three-dimensional modeling. For the surface-treating tank case, numerical results are compared with laboratory experiment data. All simulations are carried out using EOL, a CFD software package specially devised to deal with air quality problems in industrial ventilated premises. It contains many analysis tools for interpreting the results in terms familiar to the industrial hygienist. Extensive experimental work has been undertaken to validate the predictions of EOL for ventilation flows.

  16. Multivariate analysis: A statistical approach for computations

    NASA Astrophysics Data System (ADS)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, evaluating clusters in finance, etc., and more recently in the health-related professions. The objective of this paper is to provide a detailed exploratory discussion of factor analysis (FA) in an image retrieval method and of correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database based on their content; the problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.

  17. Aluminium in Biological Environments: A Computational Approach

    PubMed Central

    Mujika, Jon I; Rezabal, Elixabete; Mercero, Jose M; Ruipérez, Fernando; Costa, Dominique; Ugalde, Jesus M; Lopez, Xabier

    2014-01-01

    The increased availability of aluminium in biological environments, due to human intervention in the last century, raises concerns about the effects that this so far “excluded from biology” metal might have on living organisms. Consequently, the bioinorganic chemistry of aluminium has emerged as a very active field of research. This review will focus on our contributions to this field, based on computational studies that can yield an understanding of aluminium biochemistry at the molecular level. Aluminium can interact with and be stabilized in biological environments by complexing with both low-molecular-mass chelants and high-molecular-mass peptides. The speciation of the metal is, nonetheless, dictated by the hydrolytic species dominant in each case, which vary according to the pH of the medium. In blood, citrate and serum transferrin are identified as the main low-molecular-mass and high-molecular-mass molecules interacting with aluminium. The complexation of aluminium by citrate and the subsequent changes exerted on the deprotonation pathways of its titratable groups will be discussed, along with the mechanisms for the uptake and release of aluminium in serum transferrin at two pH conditions, physiological neutral and endosomal acidic. Aluminium can substitute for other metals, in particular magnesium, in buried protein sites and trigger conformational disorder and alteration of the protonation states of the protein's side chains. A detailed account of the interaction of aluminium with protein side chains will be given. Finally, it will be described how aluminium can exert oxidative stress by stabilizing superoxide radicals, either as mononuclear aluminium or clustered in boehmite. The possibility of the promotion of the Fenton reaction and the production of hydroxyl radicals will also be discussed. PMID:24757505

  18. A new approach based on embedding Green's functions into fixed-point iterations for highly accurate solution to Troesch's problem

    NASA Astrophysics Data System (ADS)

    Kafri, H. Q.; Khuri, S. A.; Sayfy, A.

    2016-03-01

    In this paper, a novel approach is introduced for the solution of the non-linear Troesch's boundary value problem. The underlying strategy is based on Green's functions and fixed-point iterations, including Picard's and Krasnoselskii-Mann's schemes. The resulting numerical solutions are compared with both the analytical solutions and the numerical solutions that exist in the literature. Convergence of the iterative schemes is proved via manipulation of the contraction principle. It is observed that the method handles the boundary layer very efficiently, reduces lengthy calculations, provides rapid convergence, and yields accurate results, particularly for large eigenvalues. Indeed, to our knowledge, this is the first time this problem has been solved successfully for very large eigenvalues; in fact, the rate of convergence increases as the magnitude of the eigenvalue increases.
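
    To make the strategy concrete, the sketch below implements a Green's-function-based Picard iteration for Troesch's problem y'' = λ sinh(λ y), y(0) = 0, y(1) = 1, on a uniform grid. The grid size, eigenvalue, and tolerance are illustrative choices, not the authors' settings, and the Krasnoselskii-Mann variant is indicated in a comment.

      import numpy as np

      lam, n = 1.0, 201                      # eigenvalue and grid size (illustrative)
      x = np.linspace(0.0, 1.0, n)
      dx = x[1] - x[0]
      w = np.full(n, dx); w[0] = w[-1] = dx / 2   # trapezoidal quadrature weights

      # Green's function of y'' = f with y(0) = y(1) = 0:
      # G(x, s) = s (x - 1) for s <= x, and x (s - 1) for s >= x.
      S, X = np.meshgrid(x, x)               # S[i, j] = s_j, X[i, j] = x_i
      G = np.where(S <= X, S * (X - 1.0), X * (S - 1.0))

      y = x.copy()                           # initial guess satisfying the BCs
      for _ in range(500):
          Ty = x + G @ (lam * np.sinh(lam * y) * w)   # Picard update
          # Krasnoselskii-Mann variant: Ty = (1 - alpha) * y + alpha * Ty
          if np.max(np.abs(Ty - y)) < 1e-10:
              y = Ty
              break
          y = Ty
      print(y[n // 2])                       # midpoint value (slightly below 0.5 for lam = 1)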

  19. Mutations that Cause Human Disease: A Computational/Experimental Approach

    SciTech Connect

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein-coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single-locus (monogenic) diseases, and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis, and cancer. The ability to accurately predict the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date several attempts have been made toward predicting the effects of such mutations. The most successful of these is a computational approach called "Sorting Intolerant From Tolerant" (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which can be used to
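
    As a rough illustration of the conservation idea behind SIFT (a simplified stand-in, not the tool's actual scoring function), the sketch below scores each alignment column by one minus its normalized Shannon entropy, so highly conserved positions, the ones most likely to be intolerant of substitutions, score near 1. The toy alignment and the 0.8 threshold are invented.

      import math
      from collections import Counter

      def conservation_scores(alignment):
          """Per-column conservation from a list of equal-length aligned
          sequences: 1 - normalized Shannon entropy (1 = fully conserved)."""
          scores = []
          for i in range(len(alignment[0])):
              column = [seq[i] for seq in alignment]
              counts = Counter(column)
              total = len(column)
              entropy = -sum((c / total) * math.log2(c / total)
                             for c in counts.values())
              max_entropy = math.log2(min(total, 20))   # 20-letter amino-acid alphabet
              scores.append(1.0 - entropy / max_entropy)
          return scores

      # Toy alignment of homologous fragments (hypothetical data)
      alignment = ["MKTAYIA", "MKTAHIA", "MKSAYLA", "MKTAYIA"]
      for pos, s in enumerate(conservation_scores(alignment)):
          flag = "likely intolerant to mutation" if s > 0.8 else ""
          print(pos, round(s, 2), flag)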

  20. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    PubMed

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the densest octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability of MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results from docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross-docking trials for 24 ligands, each docked into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docked into a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. PMID:22658682
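
    A minimal sketch of the encoding and clustering steps under stated assumptions: each pose is already reduced to one representative 3D point, octant identifiers are computed by interleaving per-axis bisection bits to a fixed depth, and the densest octant is found with a map/reduce-style count. The bounding box, depth, and synthetic points are illustrative; the paper's Hadoop implementation is not reproduced.

      import numpy as np
      from collections import Counter

      def octant_id(point, lo, hi, depth):
          """Interleave per-axis bisection choices into an octree cell id."""
          lo, hi = np.array(lo, float), np.array(hi, float)
          oid = 0
          for _ in range(depth):
              mid = (lo + hi) / 2.0
              bits = (point >= mid).astype(int)       # 3 bits: one per axis
              oid = (oid << 3) | (bits[0] << 2) | (bits[1] << 1) | bits[2]
              lo = np.where(bits, mid, lo)            # descend into chosen octant
              hi = np.where(bits, hi, mid)
          return oid

      # Map: encode each pose's representative point; Reduce: count per octant.
      rng = np.random.default_rng(1)
      points = rng.normal(loc=0.3, scale=0.1, size=(1000, 3))   # clustered poses
      ids = [octant_id(p, lo=(-1, -1, -1), hi=(1, 1, 1), depth=4) for p in points]
      densest, count = Counter(ids).most_common(1)[0]
      print(densest, count)   # the most populated octant holds the near-native cluster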

  1. What is Intrinsic Motivation? A Typology of Computational Approaches

    PubMed Central

    Oudeyer, Pierre-Yves; Kaplan, Frederic

    2007-01-01

    Intrinsic motivation, centrally involved in spontaneous exploration and curiosity, is a crucial concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has attracted growing interest from developmental roboticists in recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches to intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and are sometimes even inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics. PMID:18958277
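
    One family of computational approaches covered by such typologies derives intrinsic reward from the learner's own prediction performance, for instance rewarding "learning progress" (the recent decrease in prediction error). The toy sketch below contrasts a learnable region with an unlearnable noisy one under this rule; the dynamics, learning rate, and window sizes are invented for illustration and do not reproduce the paper's formal definitions.

      import numpy as np

      rng = np.random.default_rng(0)

      class RegionModel:
          """Running linear predictor; intrinsic reward = learning progress,
          i.e. the recent decrease of its prediction error."""
          def __init__(self):
              self.w, self.errors = 0.0, []
          def update(self, x, y):
              pred = self.w * x
              self.w += 0.1 * (y - pred) * x          # simple gradient step
              self.errors.append((y - pred) ** 2)
              if len(self.errors) < 10:
                  return 0.0
              recent = np.mean(self.errors[-5:])
              older = np.mean(self.errors[-10:-5])
              return older - recent                    # progress: error going down

      learnable, noisy = RegionModel(), RegionModel()
      total_learnable = total_noisy = 0.0
      for _ in range(200):
          x = rng.uniform(-1, 1)
          total_learnable += learnable.update(x, 2.0 * x)   # predictable relation
          total_noisy += noisy.update(x, rng.normal())      # pure noise, unlearnable
      # Cumulative progress favors the learnable region over the noisy one:
      print(round(total_learnable, 3), round(total_noisy, 3))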

  2. A new approach for fault identification in computer networks

    NASA Astrophysics Data System (ADS)

    Zhao, Dong; Wang, Tao

    2004-04-01

    Effective management of computer networks has become an increasingly difficult job because of the rapid development of network systems. Fault identification means finding where the problem in the network is and what it is. Data mining generally refers to the process of extracting models from large stores of data, and data mining techniques can assist in the fault identification task. Existing approaches to fault identification are reviewed and a new approach is proposed. The new approach improves on the MSDD algorithm but requires more computation, so additional techniques are used to increase its efficiency.

  3. Atmospheric transmittance of an absorbing gas. 4. OPTRAN: a computationally fast and accurate transmittance model for absorbing gases with fixed and with variable mixing ratios at variable viewing angles

    NASA Astrophysics Data System (ADS)

    McMillin, L. M.; Crone, L. J.; Goldberg, M. D.; Kleespies, T. J.

    1995-09-01

    A fast and accurate method for the generation of atmospheric transmittances, optical path transmittance (OPTRAN), is described. Results from OPTRAN are compared with those produced by other currently used methods. OPTRAN produces transmittances that can be used to generate brightness temperatures that are accurate to better than 0.2 K, well over 10 times as accurate as current methods. This is significant because it brings the accuracy of transmittance computation to a level at which it will not adversely affect atmospheric retrievals. OPTRAN is the product of an evolution of approaches developed earlier at the National Environmental Satellite, Data, and Information Service. A major feature of OPTRAN that contributes to its accuracy is that transmittance is obtained as a function of the absorber amount rather than the pressure.
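
    The design point named in the abstract, predicting transmittance in absorber-amount space rather than pressure space, can be illustrated with a toy sketch: layer transmittances are tabulated offline on a fixed grid of cumulative absorber amounts and then interpolated for an arbitrary path. The exponential "truth" model and the grid are invented; OPTRAN's actual regression predictors are more elaborate.

      import numpy as np

      # Toy "line-by-line" truth: transmittance as a function of cumulative
      # absorber amount u (invented model, for illustration only).
      def lbl_transmittance(u):
          return np.exp(-0.8 * u**0.9)

      # Offline step: tabulate on fixed absorber-amount levels (not pressure levels).
      u_grid = np.linspace(0.0, 5.0, 30)
      tau_grid = lbl_transmittance(u_grid)

      # Online step: for any viewing geometry / profile, compute the cumulative
      # absorber amount along the path and interpolate in u-space.
      def fast_transmittance(u_path):
          return np.interp(u_path, u_grid, tau_grid)

      u_profile = np.cumsum(np.full(40, 0.1))      # toy slant-path absorber amounts
      err = np.abs(fast_transmittance(u_profile) - lbl_transmittance(u_profile))
      print(np.max(err))                            # interpolation error of the fast model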

  4. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly infected ones) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
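
    A minimal sketch of the degree-dependence claim, assuming a discrete-time SIS-style process on a scale-free (Barabasi-Albert) graph rather than the paper's exact model: nodes with higher degree should spend a larger fraction of time infected. Rates, sizes, and seeds are arbitrary.

      import random
      from collections import defaultdict
      import networkx as nx

      random.seed(0)
      G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)   # scale-free topology
      beta, gamma = 0.05, 0.1                             # infection / cure rates
      infected = set(random.sample(list(G.nodes), 20))

      time_infected = defaultdict(int)
      for _ in range(500):                                # discrete-time SIS sweep
          new_infected = set()
          for v in G.nodes:
              if v in infected:
                  if random.random() > gamma:             # fails to be cured
                      new_infected.add(v)
              else:
                  k = sum(1 for u in G.neighbors(v) if u in infected)
                  if k and random.random() < 1 - (1 - beta) ** k:
                      new_infected.add(v)
          infected = new_infected
          for v in infected:
              time_infected[v] += 1

      # Average fraction of time spent infected, grouped by node degree:
      by_degree = defaultdict(list)
      for v in G.nodes:
          by_degree[G.degree(v)].append(time_infected[v] / 500)
      for k in sorted(by_degree)[:10]:
          print(k, round(sum(by_degree[k]) / len(by_degree[k]), 3))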

  5. An improved approach for accurate and efficient measurement of common carotid artery intima-media thickness in ultrasound images.

    PubMed

    Li, Qiang; Zhang, Wei; Guan, Xin; Bai, Yu; Jia, Jing

    2014-01-01

    The intima-media thickness (IMT) of the common carotid artery (CCA) can serve as an important indicator for the assessment of cardiovascular diseases (CVDs). In this paper an improved approach for automatic IMT measurement with low complexity and high accuracy is presented. 100 ultrasound images from 100 patients were tested with the proposed approach. The ground truth (GT) of the IMT was manually measured six times and averaged, while the automatically segmented (AS) IMT was computed by the algorithm proposed in this paper. The mean difference ± standard deviation between the AS and GT IMT is 0.0231 ± 0.0348 mm, and the correlation coefficient between them is 0.9629. The computational time is 0.3223 s per image with MATLAB under Windows XP on an Intel Core 2 Duo CPU E7500 @ 2.93 GHz. The proposed algorithm has the potential to achieve real-time measurement under Visual Studio. PMID:25215292

  6. DisoMCS: Accurately Predicting Protein Intrinsically Disordered Regions Using a Multi-Class Conservative Score Approach

    PubMed Central

    Wang, Zhiheng; Yang, Qianqian; Li, Tonghua; Cong, Peisheng

    2015-01-01

    The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to furthering the understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which is a more accurate predictor of protein intrinsically disordered regions. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined on fragments located at either terminus of an ordered region adjoining a disordered region. Then the multi-class conservative score is generated by sequence alignment against a known structure database and represented as order, near-disorder, and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder, and disorder profiles. Finally, the MCS is exploited as features to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions, a residue is assigned as ordered or disordered according to an optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS is very competitive in terms of prediction accuracy when compared with well-established publicly available disordered-region predictors. The results also indicate that our approach is more accurate when a query has higher homology with the knowledge database. Availability: DisoMCS is available at http://cal.tongji.edu.cn/disorder/. PMID:26090958
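
    To illustrate the three-element profile idea (a simplified stand-in, not the published pipeline), the sketch below turns per-residue class labels collected from alignment hits into order/near-disorder/disorder frequency profiles and makes a naive per-residue call; the real predictor feeds such profiles, together with predicted secondary structure, into a conditional random field. The labels here are hypothetical.

      import numpy as np

      CLASSES = ("order", "near_disorder", "disorder")

      def mcs_profile(per_residue_labels):
          """Turn per-residue class labels collected from alignment hits into a
          (length x 3) frequency profile: order / near-disorder / disorder."""
          profile = np.zeros((len(per_residue_labels), 3))
          for i, labels in enumerate(per_residue_labels):
              for lab in labels:
                  profile[i, CLASSES.index(lab)] += 1
              if labels:
                  profile[i] /= len(labels)
          return profile

      # Hypothetical labels from aligning a 5-residue query against
      # structures with known order/disorder annotations:
      labels = [
          ["order", "order", "order"],
          ["order", "near_disorder"],
          ["near_disorder", "disorder"],
          ["disorder", "disorder", "disorder"],
          ["disorder", "near_disorder"],
      ]
      profile = mcs_profile(labels)
      calls = [CLASSES[k] for k in profile.argmax(axis=1)]   # naive per-residue call
      print(calls)   # a CRF would replace this argmax in the real predictor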

  7. Towards Accurate Microscopic Calculation of Solvation Entropies: Extending the Restraint Release Approach to Studies of Solvation Effects

    PubMed Central

    Singh, Nidhi; Warshel, Arieh

    2009-01-01

    effectively captures the physics of these entropic effects. The success of the current approach indicates that it should be applicable to studies of solvation entropies in proteins and also to examining hydrophobic effects. Thus, we believe that the RR approach provides a powerful tool for evaluating the corresponding contributions to the binding entropies and, eventually, to the binding free energies. This holds promise for extending the information theory modeling to proteins and protein-ligand complexes in aqueous solutions and, consequently, facilitating computer-aided drug design. PMID:19402609

  8. Highly Accurate Infrared Line Lists of SO2 Isotopologues Computed for Atmospheric Modeling on Venus and Exoplanets

    NASA Astrophysics Data System (ADS)

    Huang, X.; Schwenke, D.; Lee, T. J.

    2014-12-01

    Last year we reported a semi-empirical 32S16O2 spectroscopic line list (denoted Ames-296K) for atmospheric characterization of Venus and other exoplanetary environments. To facilitate the determination of sulfur isotopic ratios and sulfur chemistry models, we now present Ames-296K line lists for both the upgraded 626 isotopologue and four other symmetric isotopologues: 636, 646, 666, and 828. The line lists are computed on an ab initio potential energy surface refined with the most reliable high-resolution experimental data, using a high-quality CCSD(T)/aug-cc-pV(Q+d)Z dipole moment surface. The most valuable part of our approach is that it provides "truly reliable" predictions (and alternatives) for spectra that are unknown or hard to measure and analyze. This strategy guarantees that the lists are the best available alternative for those wide spectral regions missing from spectroscopic databases such as HITRAN and GEISA, where only very limited data exist for 626/646 and no infrared data at all for 636/666 or other minor isotopologues. Our general line-position accuracy up to 5000 cm-1 is 0.01-0.02 cm-1 or better. Most transition intensity deviations are less than 5% compared to experimentally measured quantities. Note that we have solved a convergence issue and further improved the quality and completeness of the main isotopologue 626 list at 296 K. We will compare the lists to available models in CDMS/JPL/HITRAN and discuss future mutually beneficial interactions between theoretical and experimental efforts.

  9. An approach to computing direction relations between separated object groups

    NASA Astrophysics Data System (ADS)

    Yan, H.; Wang, Z.; Li, J.

    2013-06-01

    Direction relations between object groups play an important role in qualitative spatial reasoning, spatial computation, and spatial recognition. However, none of the existing models can be used to compute direction relations between object groups. To fill this gap, an approach to computing direction relations between separated object groups is proposed in this paper, which is theoretically based on Gestalt principles and the idea of multi-directions. The approach first triangulates the two object groups; then it constructs the Voronoi diagram between the two groups using the triangular network; after this, the normal of each Voronoi edge is calculated, and the quantitative expression of the direction relations is constructed; finally, the quantitative direction relations are transformed into qualitative ones. Psychological experiments show that the proposed approach can obtain direction relations both between two single objects and between two object groups, and that the results are correct from the point of view of spatial cognition.
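
    A rough sketch of the geometric core using SciPy under stated assumptions: build the Voronoi diagram of the pooled points, keep the ridges whose generating points belong to different groups, use the generator-difference vectors (which are normal to those ridges) as direction samples, and map their mean to a qualitative sector. The triangulation refinement and the cognitive calibration from the paper are omitted; the point data are invented.

      import numpy as np
      from scipy.spatial import Voronoi

      rng = np.random.default_rng(2)
      group_a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(15, 2))
      group_b = rng.normal(loc=(2.0, 1.0), scale=0.3, size=(15, 2))
      points = np.vstack([group_a, group_b])
      labels = np.array([0] * 15 + [1] * 15)

      vor = Voronoi(points)
      normals = []
      for (p, q), (v1, v2) in zip(vor.ridge_points, vor.ridge_vertices):
          if labels[p] != labels[q] and v1 != -1 and v2 != -1:  # ridge between groups
              n = points[q] - points[p] if labels[p] == 0 else points[p] - points[q]
              normals.append(n / np.linalg.norm(n))             # normal to the ridge

      mean_dir = np.mean(normals, axis=0)
      angle = np.degrees(np.arctan2(mean_dir[1], mean_dir[0])) % 360
      sectors = ["east", "northeast", "north", "northwest",
                 "west", "southwest", "south", "southeast"]
      print(sectors[int(((angle + 22.5) % 360) // 45)])   # qualitative relation of B w.r.t. A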
