Science.gov

Sample records for accurate structural models

  1. 5D model for accurate representation and visualization of dynamic cardiac structures

    NASA Astrophysics Data System (ADS)

    Lin, Wei-te; Robb, Richard A.

    2000-05-01

    Accurate cardiac modeling is challenging due to the intricate structure and complex contraction patterns of myocardial tissues. Fast imaging techniques can provide 4D structural information acquired as a sequence of 3D images throughout the cardiac cycle. To model the beating heart, we created a physics-based surface model that deforms between successive time points in the cardiac cycle. 3D images of canine hearts were acquired during one complete cardiac cycle using the DSR and the EBCT. The left ventricle at the first time point is reconstructed as a triangular mesh. A mass-spring physics-based deformable model, which can expand and shrink with local contraction and stretching forces distributed in an anatomically accurate simulation of cardiac motion, is applied to the initial mesh and allows it to deform to fit the left ventricle at successive time increments of the sequence. The resulting 4D model can be interactively transformed and displayed with associated regional electrical activity mapped onto anatomic surfaces, producing a 5D model which faithfully exhibits regional cardiac contraction and relaxation patterns over the entire heart. The model faithfully represents structural changes throughout the cardiac cycle. Such models provide the framework for minimizing the number of time points required to usefully depict regional motion of the myocardium and allow quantitative assessment of regional myocardial motion. The electrical activation mapping provides spatial and temporal correlation within the cardiac cycle. In procedures such as intra-cardiac catheter ablation, visualization of the dynamic model can be used to accurately localize the foci of myocardial arrhythmias and guide positioning of catheters for optimal ablation.
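
    The mass-spring deformation described above can be illustrated with a minimal sketch: each mesh vertex is pulled toward the target surface of the next time point while springs along mesh edges resist local stretching and contraction. The snippet below is illustrative only, not the authors' DSR/EBCT pipeline; the stiffness values, the time step and the closest_target_point helper are assumptions.

```python
import numpy as np

def deform_mesh(vertices, edges, closest_target_point,
                k_spring=1.0, k_image=0.5, damping=0.9, dt=0.05, n_iters=500):
    """Relax a triangular surface mesh toward a target surface.

    vertices             : (N, 3) array of vertex positions
    edges                : iterable of (i, j) vertex-index pairs (mesh edges)
    closest_target_point : callable mapping a position to the nearest point on
                           the target surface (hypothetical helper)
    """
    x = np.asarray(vertices, dtype=float).copy()
    v = np.zeros_like(x)
    rest = {(i, j): np.linalg.norm(x[i] - x[j]) for i, j in edges}

    for _ in range(n_iters):
        f = np.zeros_like(x)
        # Springs along edges resist local stretching/contraction.
        for (i, j), length0 in rest.items():
            d = x[j] - x[i]
            length = np.linalg.norm(d) + 1e-12
            f_ij = k_spring * (length - length0) * d / length
            f[i] += f_ij
            f[j] -= f_ij
        # External force pulls vertices toward the target surface
        # (e.g. the left ventricle at the next time point).
        for i in range(len(x)):
            f[i] += k_image * (closest_target_point(x[i]) - x[i])
        # Damped explicit Euler step.
        v = damping * (v + dt * f)
        x = x + dt * v
    return x
```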

  2. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-'one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.
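
    The core of a dead-end elimination step, on which the abstract says the algorithm is based, can be sketched as a Goldstein-style pruning loop. This is a generic illustration, not Fitmunk's actual implementation; the energy-table layout below is an assumption made for the example.

```python
def dead_end_elimination(rotamers, E_self, E_pair):
    """Prune rotamers that can never belong to the global minimum-energy assignment.

    rotamers : dict residue -> list of candidate rotamer ids
    E_self   : E_self[i][r] = self energy of rotamer r at residue i
    E_pair   : E_pair[(i, j)][(r, s)] = pairwise energy for i < j
    Goldstein criterion: rotamer r at residue i is dead-ending if some
    alternative t satisfies
        E_self[i][r] - E_self[i][t]
          + sum_j min_s (E_pair[i,j][r,s] - E_pair[i,j][t,s]) > 0.
    """
    def pair(i, j, r, s):
        return E_pair[(i, j)][(r, s)] if i < j else E_pair[(j, i)][(s, r)]

    changed = True
    while changed:
        changed = False
        for i, cands in rotamers.items():
            for r in list(cands):
                for t in cands:
                    if t == r:
                        continue
                    gap = E_self[i][r] - E_self[i][t]
                    gap += sum(
                        min(pair(i, j, r, s) - pair(i, j, t, s) for s in rotamers[j])
                        for j in rotamers if j != i
                    )
                    if gap > 0:          # r can never beat t: eliminate it
                        cands.remove(r)
                        changed = True
                        break
    return rotamers
```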

  3. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations

    PubMed Central

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-01-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-‘one-click’ experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674

  5. Accurate calculation of control-augmented structural eigenvalue sensitivities using reduced-order models

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1989-01-01

    A method is presented for generating mode shapes for model order reduction in a way that leads to accurate calculation of eigenvalue derivatives and eigenvalues for a class of control augmented structures. The method is based on treating degrees of freedom where control forces act or masses are changed in a manner analogous to that used for boundary degrees of freedom in component mode synthesis. It is especially suited for structures controlled by a small number of actuators and/or tuned by a small number of concentrated masses whose positions are predetermined. A control augmented multispan beam with closely spaced natural frequencies is used for numerical experimentation. A comparison with reduced-order eigenvalue sensitivity calculations based on the normal modes of the structure shows that the method presented produces significant improvements in accuracy.
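
    For reference, the eigenvalue sensitivities whose accuracy is at stake here follow the standard eigenvalue-derivative relation (Fox and Kapoor) for a mass-normalized mode of the undamped, symmetric problem; the control-augmented case generalizes this with left and right eigenvectors. This is background notation, not taken from the record above.

    $$ (K - \lambda_i M)\,\phi_i = 0, \qquad \frac{\partial \lambda_i}{\partial p} \;=\; \phi_i^{T}\!\left(\frac{\partial K}{\partial p} - \lambda_i\,\frac{\partial M}{\partial p}\right)\!\phi_i, \qquad \phi_i^{T} M\, \phi_i = 1. $$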

  6. Beyond Ellipse(s): Accurately Modelling the Isophotal Structure of Galaxies with ISOFIT and CMODEL

    NASA Astrophysics Data System (ADS)

    Ciambur, B. C.

    2015-09-01

    This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxiness/diskiness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of overlapping sources such as globular clusters and the optical counterparts of X-ray sources.
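
    Schematically, the isophote shape in such a scheme is an ellipse plus Fourier harmonics evaluated in the eccentric anomaly ψ rather than the polar angle. The notation below is mine, and the exact parameterization used by Isofit/Cmodel may differ in detail:

    $$ R(\psi) \;=\; R_{\mathrm{ell}}(\psi) \,+\, \sum_{n} \big[ A_n \cos(n\psi) + B_n \sin(n\psi) \big], $$

    where R_ell is the underlying ellipse and the n = 4 terms quantify boxy/disky deviations.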

  7. Accurate Fabrication of Hydroxyapatite Bone Models with Porous Scaffold Structures by Using Stereolithography

    NASA Astrophysics Data System (ADS)

    Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu

    2011-05-01

    Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using computer-aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated by using stereolithography, a computer-aided manufacturing technique. After dewaxing and sintering heat-treatment processes, ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using computer-aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote regeneration of new bone.

  8. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    SciTech Connect

    Dunn, Nicholas J. H.; Noid, W. G.

    2015-12-28

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.

  9. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    NASA Astrophysics Data System (ADS)

    Dunn, Nicholas J. H.; Noid, W. G.

    2015-12-01

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed "pressure-matching" variational principle to determine a volume-dependent contribution to the potential, UV(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing UV, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that UV accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the "simplicity" of the model.
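
    The role of the volume-dependent term in these two records can be summarized schematically (my notation, not the authors' equations): the CG potential is augmented as U_CG = U_pair(r^N) + U_V(V), so the instantaneous CG pressure acquires a configuration-independent contribution, and U_V is tuned self-consistently until the average CG pressure matches the all-atom target.

    $$ P_{\mathrm{CG}} \;=\; P_{\mathrm{vir}}(\mathbf{r}^{N}, V) \;-\; \frac{\partial U_{V}}{\partial V}, \qquad \langle P_{\mathrm{CG}} \rangle \;\approx\; \langle P_{\mathrm{AA}} \rangle . $$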

  10. Accurate flexible fitting of high-resolution protein structures to small-angle x-ray scattering data using a coarse-grained model with implicit hydration shell.

    PubMed

    Zheng, Wenjun; Tekpinar, Mustafa

    2011-12-21

    Small-angle x-ray scattering (SAXS) is a powerful technique widely used to explore conformational states and transitions of biomolecular assemblies in solution. For accurate model reconstruction from SAXS data, one promising approach is to flexibly fit a known high-resolution protein structure to low-resolution SAXS data by computer simulations. This is a highly challenging task due to low information content in SAXS data. To meet this challenge, we have developed what we believe to be a novel method based on a coarse-grained (one-bead-per-residue) protein representation and a modified form of the elastic network model that allows large-scale conformational changes while maintaining pseudobonds and secondary structures. Our method optimizes a pseudoenergy that combines the modified elastic-network model energy with a SAXS-fitting score and a collision energy that penalizes steric collisions. Our method uses what we consider a new implicit hydration shell model that accounts for the contribution of hydration shell to SAXS data accurately without explicitly adding waters to the system. We have rigorously validated our method using five test cases with simulated SAXS data and three test cases with experimental SAXS data. Our method has successfully generated high-quality structural models with root mean-squared deviation of 1 ∼ 3 Å from the target structures.
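
    The pseudoenergy described above can be written schematically as a weighted sum; the weight w_SAXS is not given in the abstract and appears here only as a placeholder:

    $$ E_{\mathrm{total}} \;=\; E_{\mathrm{ENM}}^{\mathrm{modified}} \;+\; w_{\mathrm{SAXS}}\,\chi^{2}_{\mathrm{SAXS}} \;+\; E_{\mathrm{collision}}, $$

    where χ²_SAXS measures the misfit between the profile computed from the model (with its implicit hydration shell) and the experimental scattering curve.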

  11. Accurate SHAPE-directed RNA structure determination

    PubMed Central

    Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.

    2009-01-01

    Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
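
    The pseudo-free energy change term has the form below, added to the nearest-neighbor free energy for each nucleotide i during folding; the slope m and intercept b are empirical (values around m ≈ 2.6 kcal/mol and b ≈ -0.8 kcal/mol are commonly quoted for this approach, but should be checked against the paper):

    $$ \Delta G_{\mathrm{SHAPE}}(i) \;=\; m \,\ln\!\big[\mathrm{SHAPE}(i) + 1\big] \;+\; b . $$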

  12. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the imprecision of the optical model in addition to modeling resist development. The optical model imprecision may result from mask topography effects and from real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  13. Pre-Modeling Ensures Accurate Solid Models

    ERIC Educational Resources Information Center

    Gow, George

    2010-01-01

    Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…

  14. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
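
    A partition-level performance model of the kind described can be sketched as below: the predicted time of one iteration is the maximum over processors of local compute plus communication cost, and remapping is worthwhile only if the accumulated predicted saving exceeds the remapping cost. The cost coefficients and function names are illustrative assumptions, not the paper's model.

```python
def predict_step_time(cells_per_proc, halo_cells_per_proc,
                      t_cell=1.0e-6, t_word=2.0e-7, t_latency=1.0e-4):
    """Bulk-synchronous estimate: every processor waits for the slowest one.

    cells_per_proc      : interior grid cells owned by each processor
    halo_cells_per_proc : cells whose values must be exchanged with neighbors
    """
    return max(n * t_cell + t_latency + h * t_word
               for n, h in zip(cells_per_proc, halo_cells_per_proc))

def remap_pays_off(current_partition, candidate_partition,
                   remap_cost, steps_until_next_remap):
    """Remap only if the predicted per-step saving repays the remapping cost."""
    saving = (predict_step_time(*current_partition)
              - predict_step_time(*candidate_partition))
    return saving * steps_until_next_remap > remap_cost
```

    For example, a candidate repartition that saves 2 ms per iteration repays a 1 s remapping cost after 500 iterations.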

  15. Universality: Accurate Checks in Dyson's Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near β_c for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.

  16. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper focuses on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s, and subsequent releases of LOWTRAN and MODTRAN® have continued to the present. The paper demonstrates the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.

  17. Methods for accurate homology modeling by global optimization.

    PubMed

    Joo, Keehyoung; Lee, Jinwoo; Lee, Jooyoung

    2012-01-01

    High-accuracy modeling of a protein from its sequence information is an important step toward revealing the sequence-structure-function relationship of proteins, and it is becoming increasingly useful for practical purposes such as drug discovery and protein design. We have developed a protocol for protein structure prediction that can generate highly accurate protein models in terms of backbone structure, side-chain orientation, hydrogen bonding, and binding sites of ligands. To obtain accurate protein models, we have combined a powerful global optimization method with traditional homology modeling procedures such as multiple sequence alignment, chain building, and side-chain remodeling. We have built a series of specific score functions for these steps and optimized them by utilizing conformational space annealing, which is one of the most successful combinatorial optimization algorithms currently available.

  18. An articulated statistical shape model for accurate hip joint segmentation.

    PubMed

    Kainmueller, Dagmar; Lamecker, Hans; Zachow, Stefan; Hege, Hans-Christian

    2009-01-01

    In this paper we propose a framework for fully automatic, robust and accurate segmentation of the human pelvis and proximal femur in CT data. We propose a composite statistical shape model of femur and pelvis with a flexible hip joint, for which we extend the common definition of statistical shape models as well as the common strategy for their adaptation. We do not analyze the joint flexibility statistically, but model it explicitly by rotational parameters describing the bending of a ball-and-socket joint. A leave-one-out evaluation on 50 CT volumes shows that image-driven adaptation of our composite shape model robustly produces accurate segmentations of both proximal femur and pelvis. As a second contribution, we evaluate a fine-grained multi-object segmentation method based on graph optimization. It relies on accurate initializations of femur and pelvis, which our composite shape model can generate. Simultaneous optimization of both femur and pelvis yields more accurate results than separate optimizations of each structure. Shape model adaptation and graph-based optimization are embedded in a fully automatic framework. PMID:19964159

  19. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  20. A quick accurate model of nozzle backflow

    NASA Technical Reports Server (NTRS)

    Kuharski, R. A.

    1991-01-01

    Backflow from nozzles is a major source of contamination on spacecraft. If the craft contains any exposed high voltages, the neutral density produced by the nozzles in the vicinity of the craft needs to be known in order to assess the possibility of Paschen breakdown or the probability of sheath ionization around a region of the craft that collects electrons for the plasma. A model for backflow has been developed for incorporation into the Environment-Power System Analysis Tool (EPSAT) which quickly estimates both the magnitude of the backflow and the species makeup of the flow. By combining the backflow model with the Simons (1972) model for continuum flow it is possible to quickly estimate the density of each species from a nozzle at any position in space. The model requires only a few physical parameters of the nozzle and the gas as inputs and is therefore ideal for engineering applications.

  1. Accurate Drawbead Modeling in Stamping Simulations

    NASA Astrophysics Data System (ADS)

    Sester, M.; Burchitz, I.; Saenz de Argandona, E.; Estalayo, F.; Carleer, B.

    2016-08-01

    An adaptive line bead model that continually updates according to the changing conditions during the forming process has been developed. In these calculations, the adaptive line bead's geometry is treated as a 3D object in which relevant phenomena such as the hardening curve, yield surface, through-thickness stress effects and contact description are incorporated. The effectiveness of the adaptive drawbead model will be illustrated by an industrial example.

  2. Accurate spectral modeling for infrared radiation

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Gupta, S. K.

    1977-01-01

    Direct line-by-line integration and quasi-random band model techniques are employed to calculate the spectral transmittance and total band absorptance of 4.7 micron CO, 4.3 micron CO2, 15 micron CO2, and 5.35 micron NO bands. Results are obtained for different pressures, temperatures, and path lengths. These are compared with available theoretical and experimental investigations. For each gas, extensive tabulations of results are presented for comparative purposes. In almost all cases, line-by-line results are found to be in excellent agreement with the experimental values. The range of validity of other models and correlations are discussed.

  3. Towards Accurate Molecular Modeling of Plastic Bonded Explosives

    NASA Astrophysics Data System (ADS)

    Chantawansri, T. L.; Andzelm, J.; Taylor, D.; Byrd, E.; Rice, B.

    2010-03-01

    There is substantial interest in identifying the controlling factors that influence the susceptibility of polymer bonded explosives (PBXs) to accidental initiation. Numerous molecular dynamics (MD) simulations of PBXs using the COMPASS force field have been reported in recent years, where the validity of the force field in modeling the solid EM fill has been judged solely on its ability to reproduce lattice parameters, which is an insufficient metric. Performance of the COMPASS force field in modeling EMs and the polymeric binder has been assessed by calculating structural, thermal, and mechanical properties, where only fair agreement with experimental data is obtained. We performed MD simulations using the COMPASS force field for the polymer binder hydroxyl-terminated polybutadiene and five EMs: cyclotrimethylenetrinitramine, 1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane, 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane, 2,4,6-trinitro-1,3,5-benzenetriamine, and pentaerythritol tetranitrate. Predicted EM crystallographic and molecular structural parameters, as well as calculated properties for the binder, will be compared with experimental results for different simulation conditions. We also present novel simulation protocols, which improve agreement between experimental and computational results, thus leading to accurate modeling of PBXs.

  4. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    PubMed

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

    This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh, are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.

  5. Accurate Classification of RNA Structures Using Topological Fingerprints

    PubMed Central

    Li, Kejie; Gribskov, Michael

    2016-01-01

    While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571

  6. Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.

    PubMed

    Puzzarini, Cristina

    2015-11-25

    The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed.
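
    The composite scheme referred to above follows the usual additivity pattern sketched below (generic form; the particular basis sets, extrapolation exponents and higher-order corrections used in the paper may differ):

    $$ E \;\simeq\; E_{\mathrm{HF}}^{\mathrm{CBS}} \;+\; \Delta E_{\mathrm{corr}}^{\mathrm{CBS}} \;+\; \Delta E_{\mathrm{core}} \;+\; \Delta E_{\mathrm{ZPE}}, \qquad E_{\mathrm{corr}}(X) \;=\; E_{\mathrm{corr}}^{\mathrm{CBS}} \;+\; A\,X^{-3}, $$

    where X is the cardinal number of the correlation-consistent basis set and the two-point X^{-3} formula extrapolates the correlation energy to the complete-basis-set (CBS) limit.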

  7. SPECTROPOLARIMETRICALLY ACCURATE MAGNETOHYDROSTATIC SUNSPOT MODEL FOR FORWARD MODELING IN HELIOSEISMOLOGY

    SciTech Connect

    Przybylski, D.; Shelyag, S.; Cally, P. S.

    2015-07-01

    We present a technique to construct a spectropolarimetrically accurate magnetohydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion, and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions at the solar disk, and analyze the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterized. An increase in acoustic power in the simulated observations of the sunspot umbra away from the solar disk center was confirmed and attributed to the slow magnetoacoustic wave.

  8. Towards accurate observation and modelling of Antarctic glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    King, M.

    2012-04-01

    The response of the solid Earth to glacial mass changes, known as glacial isostatic adjustment (GIA), has received renewed attention in the recent decade thanks to the Gravity Recovery and Climate Experiment (GRACE) satellite mission. GRACE measures Earth's gravity field every 30 days, but cannot partition surface mass changes, such as present-day cryospheric or hydrological change, from changes within the solid Earth, notably due to GIA. If GIA cannot be accurately modelled in a particular region the accuracy of GRACE estimates of ice mass balance for that region is compromised. This lecture will focus on Antarctica, where models of GIA are hugely uncertain due to weak constraints on ice loading history and Earth structure. Over the last years, however, there has been a step-change in our ability to measure GIA uplift with the Global Positioning System (GPS), including widespread deployments of permanent GPS receivers as part of the International Polar Year (IPY) POLENET project. I will particularly focus on the Antarctic GPS velocity field and the confounding effect of elastic rebound due to present-day ice mass changes, and then describe the construction and calibration of a new Antarctic GIA model for application to GRACE data, as well as highlighting areas where further critical developments are required.

  9. Accurate, low-cost 3D-models of gullies

    NASA Astrophysics Data System (ADS)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

    Soil erosion is a widespread problem in arid and semi-arid areas. The most severe form is gully erosion. Gullies often cut into agricultural farmland and can make an area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in South Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded series of Full HD videos at 25 fps. Afterwards, we used Structure from Motion (SfM) to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, while the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Nearby pixels of a blurry image tend to have similar color values. That is why we used a MATLAB script to compare the derivatives of the images: the higher the sum of the derivatives, the sharper the image. MATLAB subdivides the video into image intervals, and from each interval the image with the highest sum is selected. For example, a 20 min video at 25 fps yields 30,000 single frames. The program inspects the first 20 images, saves the sharpest, moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. Then MeshLab was used to build a surface out of it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
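
    The frame-selection step described above (keep the sharpest frame from each interval, scoring sharpness by the sum of image derivatives) was done by the authors in MATLAB; the snippet below is an equivalent, purely illustrative sketch in Python, assuming OpenCV is available.

```python
import cv2
import numpy as np

def select_sharpest_frames(video_path, n_frames=1500):
    """Pick the sharpest frame from each of roughly n_frames equal intervals.

    Sharpness score: sum of absolute image gradients. Blurry frames score low
    because neighbouring pixels have similar values.
    """
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    interval = max(total // n_frames, 1)

    selected, best, best_score = [], None, -1.0
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        score = float(np.abs(gx).sum() + np.abs(gy).sum())
        if score > best_score:
            best, best_score = frame, score
        index += 1
        if index % interval == 0:          # end of interval: keep its sharpest frame
            selected.append(best)
            best, best_score = None, -1.0
    if best is not None:                   # flush the last partial interval
        selected.append(best)
    cap.release()
    return selected
```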

  10. An accurate and simple quantum model for liquid water.

    PubMed

    Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A

    2006-11-14

    The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in a significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics

  11. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  12. A versatile phenomenological model for the S-shaped temperature dependence of photoluminescence energy for an accurate determination of the exciton localization energy in bulk and quantum well structures

    NASA Astrophysics Data System (ADS)

    Dixit, V. K.; Porwal, S.; Singh, S. D.; Sharma, T. K.; Ghosh, Sandip; Oak, S. M.

    2014-02-01

    Temperature dependence of the photoluminescence (PL) peak energy of bulk and quantum well (QW) structures is studied by using a new phenomenological model for including the effect of localized states. In general an anomalous S-shaped temperature dependence of the PL peak energy is observed for many materials which is usually associated with the localization of excitons in band-tail states that are formed due to potential fluctuations. Under such conditions, the conventional models of Varshni, Viña and Passler fail to replicate the S-shaped temperature dependence of the PL peak energy and provide inconsistent and unrealistic values of the fitting parameters. The proposed formalism persuasively reproduces the S-shaped temperature dependence of the PL peak energy and provides an accurate determination of the exciton localization energy in bulk and QW structures along with the appropriate values of material parameters. An example of a strained InAs0.38P0.62/InP QW is presented by performing detailed temperature and excitation intensity dependent PL measurements and subsequent in-depth analysis using the proposed model. Versatility of the new formalism is tested on a few other semiconductor materials, e.g. GaN, nanotextured GaN, AlGaN and InGaN, which are known to have a significant contribution from the localized states. A quantitative evaluation of the fractional contribution of the localized states is essential for understanding the temperature dependence of the PL peak energy of bulk and QW structures having a large contribution of the band-tail states.
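
    For reference, the conventional Varshni relation mentioned above, and the widely used Gaussian band-tail (localization) correction often applied to it, are shown below; the paper's own phenomenological model is a different, more general expression and is not reproduced here.

    $$ E_g(T) \;=\; E_g(0) \;-\; \frac{\alpha T^{2}}{T + \beta}, \qquad E_{\mathrm{PL}}(T) \;\approx\; E_g(T) \;-\; \frac{\sigma^{2}}{k_B T}, $$

    where σ characterizes the width of the Gaussian distribution of localized band-tail states; the second term produces the low-temperature red-shift part of the S-shape.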

  13. An accurate, efficient algorithm for calculation of quantum transport in extended structures

    SciTech Connect

    Godin, T.J.; Haydock, R.

    1994-05-01

    In device structures with dimensions comparable to carrier inelastic scattering lengths, the quantum nature of carriers will cause interference effects that cannot be modeled by conventional techniques. The basic equations that govern these "quantum" circuit elements present significant numerical challenges. The authors describe the block recursion method, an accurate, efficient method for solving the quantum circuit problem. They demonstrate this method by modeling dirty inversion layers.

  14. More-Accurate Model of Flows in Rocket Injectors

    NASA Technical Reports Server (NTRS)

    Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford

    2011-01-01

    An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.
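
    For context, the parameters in question are the turbulent Prandtl and Schmidt numbers, which relate the turbulent (eddy) viscosity to the turbulent diffusivities of heat and species; in the improved model they are obtained locally as part of the solution instead of being fixed constants.

    $$ \mathrm{Pr}_t \;=\; \frac{\nu_t}{\alpha_t}, \qquad \mathrm{Sc}_t \;=\; \frac{\nu_t}{D_t}, $$

    where ν_t is the eddy viscosity, α_t the turbulent thermal diffusivity and D_t the turbulent mass diffusivity.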

  15. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  16. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques.In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration.The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.

  18. On the importance of having accurate data for astrophysical modelling

    NASA Astrophysics Data System (ADS)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from far infrared to sub-millimeter with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  19. Accurate method of modeling cluster scaling relations in modified gravity

    NASA Astrophysics Data System (ADS)

    He, Jian-hua; Li, Baojiu

    2016-06-01

    We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the x-ray Compton-y parameter, can be accurately predicted using the known results in the Λ CDM model with a precision of ˜3 % . This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.

  20. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
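
    The two-step strategy described above can be pictured generically: a cheap, low-fidelity model screens candidate designs (the paper prunes branches with branch-and-bound; the sketch below simply keeps a top fraction), and only the survivors are re-evaluated with an expensive, more accurate model. The scoring functions are placeholders, not the authors' circuit models.

        # Generic sketch of hierarchical model switching: screen with a cheap
        # model, refine with an expensive one. Placeholder scoring functions.
        def cheap_score(design):        # fast, low-fidelity surrogate (assumed)
            return sum(design)

        def expensive_score(design):    # slow, high-fidelity model (assumed)
            return sum(x * x for x in design)

        def hierarchical_search(candidates, keep_fraction=0.1):
            # Step 1: coarse screening of the full solution space.
            ranked = sorted(candidates, key=cheap_score)
            survivors = ranked[:max(1, int(len(ranked) * keep_fraction))]
            # Step 2: fine-grained search of the reduced space.
            return min(survivors, key=expensive_score)

        designs = [(a, b) for a in range(10) for b in range(10)]
        print(hierarchical_search(designs))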

  1. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10 h Mpc^-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
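
    Schematically, a halo-model power spectrum is the sum of a two-halo term (close to the linear spectrum on large scales) and a one-halo term built from the halo mass function and halo density profiles; the fitted parameters of the paper enter through such ingredients. The fragment below is only a structural sketch of that decomposition with toy placeholder inputs; it is neither the calibrated model of the paper nor the HMcode interface.

        import numpy as np

        # Structural sketch of a halo model: P(k) = P_2h(k) + P_1h(k).
        # Linear spectrum, mass function and profile are toy placeholders.
        def P_lin(k):                       # placeholder linear power spectrum
            return k / (1.0 + (k / 0.02) ** 2.5)

        def mass_function(M):               # placeholder dn/dlnM
            return np.exp(-(np.log(M / 1e13)) ** 2)

        def u_profile(k, M):                # placeholder normalized profile u(k|M)
            R = (M / 1e13) ** (1.0 / 3.0)   # crude size-mass scaling
            return np.exp(-0.5 * (k * R) ** 2)

        def P_halo_model(k, masses, rho_mean=1e13):
            lnM = np.log(masses)
            one_halo = np.trapz(mass_function(masses) * (masses / rho_mean) ** 2
                                * u_profile(k, masses) ** 2, lnM)
            two_halo = P_lin(k)             # large-scale limit; bias factors omitted
            return one_halo + two_halo

        masses = np.logspace(11, 16, 200)
        for k in (0.01, 0.1, 1.0, 10.0):
            print(k, P_halo_model(k, masses))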

  2. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.

  3. Mouse models of human AML accurately predict chemotherapy response

    PubMed Central

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  4. Accurate macromolecular structures using minimal measurements from X-ray free-electron lasers.

    PubMed

    Hattne, Johan; Echols, Nathaniel; Tran, Rosalie; Kern, Jan; Gildea, Richard J; Brewster, Aaron S; Alonso-Mori, Roberto; Glöckner, Carina; Hellmich, Julia; Laksmono, Hartawan; Sierra, Raymond G; Lassalle-Kaiser, Benedikt; Lampe, Alyssa; Han, Guangye; Gul, Sheraz; DiFiore, Dörte; Milathianaki, Despina; Fry, Alan R; Miahnahri, Alan; White, William E; Schafer, Donald W; Seibert, M Marvin; Koglin, Jason E; Sokaras, Dimosthenis; Weng, Tsu-Chien; Sellberg, Jonas; Latimer, Matthew J; Glatzel, Pieter; Zwart, Petrus H; Grosse-Kunstleve, Ralf W; Bogan, Michael J; Messerschmidt, Marc; Williams, Garth J; Boutet, Sébastien; Messinger, Johannes; Zouni, Athina; Yano, Junko; Bergmann, Uwe; Yachandra, Vittal K; Adams, Paul D; Sauter, Nicholas K

    2014-05-01

    X-ray free-electron laser (XFEL) sources enable the use of crystallography to solve three-dimensional macromolecular structures under native conditions and without radiation damage. Results to date, however, have been limited by the challenge of deriving accurate Bragg intensities from a heterogeneous population of microcrystals, while at the same time modeling the X-ray spectrum and detector geometry. Here we present a computational approach designed to extract meaningful high-resolution signals from fewer diffraction measurements.

  5. Chewing simulation with a physically accurate deformable model.

    PubMed

    Pascale, Andra Maria; Ruge, Sebastian; Hauth, Steffen; Kordaß, Bernd; Linsen, Lars

    2015-01-01

    Nowadays, CAD/CAM software is being used to compute the optimal shape and position of a new tooth model meant for a patient. With this possible future application in mind, we present in this article an independent and stand-alone interactive application that simulates the human chewing process and the deformation it produces in the food substrate. Chewing motion sensors are used to produce an accurate representation of the jaw movement. The substrate is represented by a deformable elastic model based on the finite linear elements method, which preserves physical accuracy. Collision detection based on spatial partitioning is used to calculate the forces that are acting on the deformable model. Based on the calculated information, geometry elements are added to the scene to enhance the information available for the user. The goal of the simulation is to present a complete scene to the dentist, highlighting the points where the teeth came into contact with the substrate and giving information about how much force acted at these points, which therefore makes it possible to indicate whether the tooth is being used incorrectly in the mastication process. Real-time interactivity is desired and achieved within limits, depending on the complexity of the employed geometric models. The presented simulation is a first step towards the overall project goal of interactively optimizing tooth position and shape under the investigation of a virtual chewing process using real patient data (Fig 1). PMID:26389135
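
    Collision detection by spatial partitioning, as mentioned above, typically hashes positions into grid cells so that only objects in the same or neighbouring cells are tested pairwise. The sketch below illustrates that broad phase for point-like vertices; it is a generic illustration with an assumed cell size, not the simulator's actual code.

        from collections import defaultdict
        from itertools import combinations, product

        # Uniform-grid broad phase: only points mapped to the same or
        # neighbouring cells become candidate collision pairs.
        def cell_of(p, h):
            return tuple(int(c // h) for c in p)

        def candidate_pairs(points, h):
            grid = defaultdict(list)
            for i, p in enumerate(points):
                grid[cell_of(p, h)].append(i)
            pairs = set()
            for cell, members in grid.items():
                pairs.update(combinations(sorted(members), 2))   # same cell
                for offset in product((-1, 0, 1), repeat=3):     # 26 neighbours
                    if offset == (0, 0, 0):
                        continue
                    neighbour = tuple(c + o for c, o in zip(cell, offset))
                    for i in members:
                        for j in grid.get(neighbour, ()):
                            if i < j:
                                pairs.add((i, j))
            return pairs

        pts = [(0.1, 0.2, 0.0), (0.15, 0.22, 0.05), (2.0, 2.0, 2.0)]
        print(candidate_pairs(pts, h=0.5))   # only the first two points pair up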

  6. Models in biology: 'accurate descriptions of our pathetic thinking'.

    PubMed

    Gunawardena, Jeremy

    2014-01-01

    In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as 'predictive', in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484

  7. Magnetic field models of nine CP stars from "accurate" measurements

    NASA Astrophysics Data System (ADS)

    Glagolevskij, Yu. V.

    2013-01-01

    The dipole models of magnetic fields in nine CP stars are constructed based on the measurements of metal lines taken from the literature, and performed by the LSD method with an accuracy of 10-80 G. The model parameters are compared with the parameters obtained for the same stars from the hydrogen line measurements. For six out of nine stars the same type of structure was obtained. Some parameters, such as the field strength at the poles B_p and the average surface magnetic field B_s, differ considerably in some stars due to differences in the amplitudes of the phase dependences B_e(Φ) and B_s(Φ) obtained by different authors. It is noted that a significant increase in the measurement accuracy has little effect on the modelling of the large-scale structures of the field. By contrast, it is more important to construct the shape of the phase dependence based on a fairly large number of field measurements, evenly distributed over the rotation period phases. It is concluded that the Zeeman component measurement methods have a strong effect on the shape of the phase dependence, and that measurements of the magnetic field based on the lines of hydrogen are preferable for modelling the large-scale structures of the field.

  8. Personalized Orthodontic Accurate Tooth Arrangement System with Complete Teeth Model.

    PubMed

    Cheng, Cheng; Cheng, Xiaosheng; Dai, Ning; Liu, Yi; Fan, Qilei; Hou, Yulin; Jiang, Xiaotong

    2015-09-01

    Accuracy, validity and the lack of information about the relation between dental root and jaw are key problems in tooth arrangement technology. This paper describes a newly developed virtual, personalized and accurate tooth arrangement system based on complete information about the dental root and skull. Firstly, a feature constraint database of a 3D teeth model is established. Secondly, for computed simulation of tooth movement, the reference planes and lines are defined from the anatomical reference points. A mathematical model for matching tooth patterns and the principle of rigid-body pose transformation are fully utilized, and the positional relation between dental root and alveolar bone is considered during the design process. Finally, the relative pose relationships among the teeth are optimized using the object mover, and a personalized therapeutic schedule is formulated. Experimental results show that the virtual tooth arrangement system can arrange abnormal teeth very well and is sufficiently flexible, and that the positional relation between root and jaw is handled favorably. This newly developed system is characterized by high-speed processing and quantitative evaluation of the amount of 3D movement of an individual tooth.

  9. Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle

    2016-04-01

    In a previous work, we demonstrated that the current optical proximity correction model, which assumes the mask pattern to be analogous to the designed data, is no longer valid. An extreme case of line-end shortening shows a gap of up to 10 nm (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in the current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular.

  10. Accurate mask model implementation in OPC model for 14nm nodes and beyond

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle

    2015-10-01

    In a previous work [1] we demonstrated that the current OPC model, which assumes the mask pattern to be analogous to the designed data, is no longer valid. Indeed, as depicted in figure 1, an extreme case of line-end shortening shows a gap of up to 10 nm (at mask level). For that reason, an accurate mask model for a 14 nm logic gate level has been calibrated. A model with a total RMS of 1.38 nm at mask level was obtained. 2D structures such as line-end shortening and corner rounding were well predicted using SEM pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in the current flow. The improved model consists of a mask model capturing mask process and writing effects and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular, as depicted in figure 2.

  11. Screened exchange hybrid density functional for accurate and efficient structures and interaction energies.

    PubMed

    Brandenburg, Jan Gerit; Caldeweyher, Eike; Grimme, Stefan

    2016-06-21

    We extend the recently introduced PBEh-3c global hybrid density functional [S. Grimme et al., J. Chem. Phys., 2015, 143, 054107] by a screened Fock exchange variant based on the Henderson-Janesko-Scuseria exchange hole model. While the excellent performance of the global hybrid is maintained for small covalently bound molecules, its performance for computed condensed phase mass densities is further improved. Most importantly, a speed up of 30 to 50% can be achieved and especially for small orbital energy gap cases, the method is numerically much more robust. The latter point is important for many applications, e.g., for metal-organic frameworks, organic semiconductors, or protein structures. This enables an accurate density functional based electronic structure calculation of a full DNA helix structure on a single core desktop computer which is presented as an example in addition to comprehensive benchmark results. PMID:27240749

  12. The crystallization of biological macromolecules under microgravity: a way to more accurate three-dimensional structures?

    PubMed

    Lorber, Bernard

    2002-09-23

    The crystallization of proteins and other biological particles (including nucleic acids, nucleo-protein complexes and large assemblies such as nucleosomes, ribosomal subunits or viruses) in a microgravity environment can produce crystals with fewer defects than crystals prepared under normal gravity on Earth. Such microgravity-grown crystals can diffract X-rays to a higher resolution and have a lower mosaic spread. The inferred electron density maps can be richer in detail, allowing more accurate three-dimensional structure models to be built. Major results reported in this field of research are reviewed, and novel ones obtained with the Advanced Protein Crystallization Facility are presented. For structural biology, practical applications and implications associated with crystallization and crystallography onboard the International Space Station are discussed.

  13. Fast and accurate search for non-coding RNA pseudoknot structures in genomes

    PubMed Central

    Huang, Zhibin; Wu, Yong; Robertson, Joseph; Feng, Liang; Malmberg, Russell L.; Cai, Liming

    2008-01-01

    Motivation: Searching genomes for non-coding RNAs (ncRNAs) by their secondary structure has become an important goal for bioinformatics. For pseudoknot-free structures, ncRNA search can be effective based on the covariance model and CYK-type dynamic programming. However, the computational difficulty in aligning an RNA sequence to a pseudoknot has prohibited fast and accurate search of arbitrary RNA structures. Our previous work introduced a graph model for RNA pseudoknots and proposed to solve the structure–sequence alignment by graph optimization. Given k candidate regions in the target sequence for each of the n stems in the structure, we could compute a best alignment in time O(k^t n) based upon a tree width t decomposition of the structure graph. However, to implement this method into programs that can routinely perform fast yet accurate RNA pseudoknot searches, we need novel heuristics to ensure that, without degrading the accuracy, only a small number of stem candidates need to be examined and a tree decomposition of a small tree width can always be found for the structure graph. Results: The current work builds on the previous one with newly developed preprocessing algorithms to reduce the values for parameters k and t and to implement the search method into a practical program, called RNATOPS, for RNA pseudoknot search. In particular, we introduce techniques, based on probabilistic profiling and distance penalty functions, which can identify for every stem just a small number k (e.g. k ≤ 10) of plausible regions in the target sequence to which the stem needs to align. We also devised a specialized tree decomposition algorithm that can yield tree decomposition of small tree width t (e.g. t ≤ 4) for almost all RNA structure graphs. Our experiments show that with RNATOPS it is possible to routinely search prokaryotic and eukaryotic genomes for specific RNA structures of medium to large sizes, including pseudoknots, with high sensitivity and high

  14. 3ARM: A Fast, Accurate Radiative Transfer Model for Use in Climate Models

    NASA Technical Reports Server (NTRS)

    Bergstrom, R. W.; Kinne, S.; Sokolik, I. N.; Toon, O. B.; Mlawer, E. J.; Clough, S. A.; Ackerman, T. P.; Mather, J.

    1996-01-01

    A new radiative transfer model combining the efforts of three groups of researchers is discussed. The model accurately computes radiative transfer in inhomogeneous absorbing, scattering and emitting atmospheres. As an illustration of the model, results are shown for the effects of dust on the thermal radiation.

  15. 3ARM: A Fast, Accurate Radiative Transfer Model for use in Climate Models

    NASA Technical Reports Server (NTRS)

    Bergstrom, R. W.; Kinne, S.; Sokolik, I. N.; Toon, O. B.; Mlawer, E. J.; Clough, S. A.; Ackerman, T. P.; Mather, J.

    1996-01-01

    A new radiative transfer model combining the efforts of three groups of researchers is discussed. The model accurately computes radiative transfer in inhomogeneous absorbing, scattering and emitting atmospheres. As an illustration of the model, results are shown for the effects of dust on the thermal radiation.

  16. The R-factor gap in macromolecular crystallography: an untapped potential for insights on accurate structures

    PubMed Central

    Holton, James M; Classen, Scott; Frankel, Kenneth A; Tainer, John A

    2014-01-01

    In macromolecular crystallography, the agreement between observed and predicted structure factors (Rcryst and Rfree) is seldom better than 20%. This is much larger than the estimate of experimental error (Rmerge). The difference between Rcryst and Rmerge is the R-factor gap. There is no such gap in small-molecule crystallography, for which calculated structure factors are generally considered more accurate than the experimental measurements. Perhaps the true noise level of macromolecular data is higher than expected? Or is the gap caused by inaccurate phases that trap refined models in local minima? By generating simulated diffraction patterns using the program MLFSOM, and including every conceivable source of experimental error, we show that neither is the case. Processing our simulated data yielded values that were indistinguishable from those of real data for all crystallographic statistics except the final Rcryst and Rfree. These values decreased to 3.8% and 5.5% for simulated data, suggesting that the reason for high R-factors in macromolecular crystallography is neither experimental error nor phase bias, but rather an underlying inadequacy in the models used to explain our observations. The present inability to accurately represent the entire macromolecule with both its flexibility and its protein-solvent interface may be improved by synergies between small-angle X-ray scattering, computational chemistry and crystallography. The exciting implication of our finding is that macromolecular data contain substantial hidden and untapped potential to resolve ambiguities in the true nature of the nanoscale, a task that the second century of crystallography promises to fulfill. Database Coordinates and structure factors for the real data have been submitted to the Protein Data Bank under accession 4tws. PMID:25040949

  17. DR-TAMAS: Diffeomorphic Registration for Tensor Accurate Alignment of Anatomical Structures.

    PubMed

    Irfanoglu, M Okan; Nayak, Amritha; Jenkins, Jeffrey; Hutchinson, Elizabeth B; Sadeghi, Neda; Thomas, Cibu P; Pierpaoli, Carlo

    2016-05-15

    In this work, we propose DR-TAMAS (Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures), a novel framework for intersubject registration of Diffusion Tensor Imaging (DTI) data sets. This framework is optimized for brain data and its main goal is to achieve an accurate alignment of all brain structures, including white matter (WM), gray matter (GM), and spaces containing cerebrospinal fluid (CSF). Currently most DTI-based spatial normalization algorithms emphasize alignment of anisotropic structures. While some diffusion-derived metrics, such as diffusion anisotropy and tensor eigenvector orientation, are highly informative for proper alignment of WM, other tensor metrics such as the trace or mean diffusivity (MD) are fundamental for a proper alignment of GM and CSF boundaries. Moreover, it is desirable to include information from structural MRI data, e.g., T1-weighted or T2-weighted images, which are usually available together with the diffusion data. The fundamental property of DR-TAMAS is to achieve global anatomical accuracy by incorporating in its cost function the most informative metrics locally. Another important feature of DR-TAMAS is a symmetric time-varying velocity-based transformation model, which enables it to account for potentially large anatomical variability in healthy subjects and patients. The performance of DR-TAMAS is evaluated with several data sets and compared with other widely-used diffeomorphic image registration techniques employing both full tensor information and/or DTI-derived scalar maps. Our results show that the proposed method has excellent overall performance in the entire brain, while being equivalent to the best existing methods in WM.

  18. Accurate and interpretable nanoSAR models from genetic programming-based decision tree construction approaches.

    PubMed

    Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z

    2016-09-01

    The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models.

  19. Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation

    NASA Astrophysics Data System (ADS)

    Poddar, Banibrata; Giurgiutiu, Victor

    2016-04-01

    Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is because Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation, which brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties, and it may also be ill posed. Because of all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible; an indirect method using the solution of the "forward problem" is therefore popular for solving the "inverse problem". This requires a fast forward-problem solver. Owing to the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc., but these methods are too slow to be practical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate the scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.

  20. Clarifying types of uncertainty: when are models accurate, and uncertainties small?

    PubMed

    Cox, Louis Anthony Tony

    2011-10-01

    Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.

  1. Accurate response surface approximations for weight equations based on structural optimization

    NASA Astrophysics Data System (ADS)

    Papila, Melih

    Accurate weight prediction methods are vitally important for aircraft design optimization. Therefore, designers seek weight prediction techniques with low computational cost and high accuracy, and usually require a compromise between the two. The compromise can be achieved by combining stress analysis and response surface (RS) methodology. While stress analysis provides accurate weight information, RS techniques help to transmit effectively this information to the optimization procedure. The focus of this dissertation is structural weight equations in the form of RS approximations and their accuracy when fitted to results of structural optimizations that are based on finite element analyses. Use of RS methodology filters out the numerical noise in structural optimization results and provides a smooth weight function that can easily be used in gradient-based configuration optimization. In engineering applications RS approximations of low order polynomials are widely used, but the weight may not be modeled well by low-order polynomials, leading to bias errors. In addition, some structural optimization results may have high-amplitude errors (outliers) that may severely affect the accuracy of the weight equation. Statistical techniques associated with RS methodology are sought in order to deal with these two difficulties: (1) high-amplitude numerical noise (outliers) and (2) approximation model inadequacy. The investigation starts with reducing approximation error by identifying and repairing outliers. A potential reason for outliers in optimization results is premature convergence, and outliers of such nature may be corrected by employing different convergence settings. It is demonstrated that outlier repair can lead to accuracy improvements over the more standard approach of removing outliers. The adequacy of approximation is then studied by a modified lack-of-fit approach, and RS errors due to the approximation model are reduced by using higher order polynomials. In
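
    As a generic illustration of the response-surface idea discussed above (not the dissertation's actual procedure), the sketch below fits a low-order polynomial weight surface to noisy optimization results by least squares and flags candidate outliers from their residuals; the data, noise level and threshold are invented.

        import numpy as np

        # Toy response-surface fit: quadratic polynomial in one design variable,
        # with a crude residual-based outlier flag. Illustrative only.
        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 30)                   # design variable
        w_true = 100.0 + 40.0 * x + 25.0 * x ** 2       # underlying weight trend
        w = w_true + rng.normal(0.0, 2.0, x.size)       # numerical noise
        w[7] += 30.0                                    # one "premature convergence" outlier

        X = np.vander(x, 3)                             # quadratic response surface
        coef, *_ = np.linalg.lstsq(X, w, rcond=None)
        resid = w - X @ coef
        flag = np.abs(resid) > 3.0 * resid.std(ddof=3)  # crude outlier criterion

        print("RS coefficients:", coef)
        print("flagged points:", np.where(flag)[0])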

  2. Accurate Model Selection of Relaxed Molecular Clocks in Bayesian Phylogenetics

    PubMed Central

    Baele, Guy; Li, Wai Lok Sibon; Drummond, Alexei J.; Suchard, Marc A.; Lemey, Philippe

    2013-01-01

    Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike’s information criterion through Markov chain Monte Carlo (AICM), in Bayesian model selection of demographic and molecular clock models. Almost simultaneously, a Bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, stabilized/smoothed HME (sHME), and AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets. PMID:23090976

  3. Accurate modelling of anisotropic effects in austenitic stainless steel welds

    SciTech Connect

    Nowers, O. D.; Duxbury, D. J.; Drinkwater, B. W.

    2014-02-18

    The ultrasonic inspection of austenitic steel welds is challenging due to the formation of highly anisotropic and heterogeneous structures post-welding. This is due to the intrinsic crystallographic structure of austenitic steel, driving the formation of dendritic grain structures on cooling. The anisotropy is manifested as both a ‘steering’ of the ultrasonic beam and the back-scatter of energy due to the macroscopic granular structure of the weld. However, the quantitative effects and relative impacts of these phenomena are not well-understood. A semi-analytical simulation framework has been developed to allow the study of anisotropic effects in austenitic stainless steel welds. Frequency-dependent scatterers are allocated to a weld-region to approximate the coarse grain-structures observed within austenitic welds and imaged using a simulated array. The simulated A-scans are compared against an equivalent experimental setup demonstrating excellent agreement of the Signal to Noise (S/N) ratio. Comparison of images of the simulated and experimental data generated using the Total Focusing Method (TFM) indicate a prominent layered effect in the simulated data. A superior grain allocation routine is required to improve upon this.

  4. Toward Hamiltonian Adaptive QM/MM: Accurate Solvent Structures Using Many-Body Potentials.

    PubMed

    Boereboom, Jelle M; Potestio, Raffaello; Donadio, Davide; Bulo, Rosa E

    2016-08-01

    Adaptive quantum mechanical (QM)/molecular mechanical (MM) methods enable efficient molecular simulations of chemistry in solution. Reactive subregions are modeled with an accurate QM potential energy expression while the rest of the system is described in a more approximate manner (MM). As solvent molecules diffuse in and out of the reactive region, they are gradually included into (and excluded from) the QM expression. It would be desirable to model such a system with a single adaptive Hamiltonian, but thus far this has resulted in distorted structures at the boundary between the two regions. Solving this long outstanding problem will allow microcanonical adaptive QM/MM simulations that can be used to obtain vibrational spectra and dynamical properties. The difficulty lies in the complex QM potential energy expression, with a many-body expansion that contains higher order terms. Here, we outline a Hamiltonian adaptive multiscale scheme within the framework of many-body potentials. The adaptive expressions are entirely general, and complementary to all standard (nonadaptive) QM/MM embedding schemes available. We demonstrate the merit of our approach on a molecular system defined by two different MM potentials (MM/MM'). For the long-range interactions a numerical scheme is used (particle mesh Ewald), which yields energy expressions that are many-body in nature. Our Hamiltonian approach is the first to provide both energy conservation and the correct solvent structure everywhere in this system. PMID:27332140

  5. Towards an Accurate Performance Modeling of Parallel Sparse Factorization

    SciTech Connect

    Grigori, Laura; Li, Xiaoye S.

    2006-05-26

    We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.
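
    Performance models of this kind are often written as a sum of computation and communication terms. The sketch below shows such a generic model; the machine parameters and operation counts are stated placeholders, not the calibrated SuperLU_DIST model of the paper.

        # Generic computation + communication performance model:
        # T = flops/flop_rate + words_moved/bandwidth + messages * latency.
        def predicted_time(flops, words_moved, messages,
                           flop_rate=1.5e9,     # sustained flop/s per process (assumed)
                           bandwidth=5.0e8,     # words/s on the interconnect (assumed)
                           latency=1.0e-5):     # seconds per message (assumed)
            return flops / flop_rate + words_moved / bandwidth + messages * latency

        # Hypothetical factorization workload split over 16 processes.
        print(predicted_time(flops=2.0e11 / 16, words_moved=3.0e8, messages=4.0e4))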

  6. Low-energy structures of benzene clusters with a novel accurate potential surface.

    PubMed

    Bartolomei, M; Pirani, F; Marques, J M C

    2015-12-01

    The benzene-benzene (Bz-Bz) interaction is present in several chemical systems and it is known to be crucial in understanding the specificity of important biological phenomena. In this work, we propose a novel Bz-Bz analytical potential energy surface which is fine-tuned on accurate ab initio calculations in order to improve its reliability. Once the Bz-Bz interaction is modeled, an analytical function for the energy of the Bzn clusters may be obtained by summing up over all pair potentials. We apply an evolutionary algorithm (EA) to discover the lowest-energy structures of Bzn clusters (for n=2-25), and the results are compared with previous global optimization studies where different potential functions were employed. Besides the global minimum, the EA also gives the structures of other low-lying isomers ranked by the corresponding energy. Additional ab initio calculations are carried out for the low-lying isomers of Bz3 and Bz4 clusters, and the global minimum is confirmed as the most stable structure for both sizes. Finally, a detailed analysis of the low-energy isomers of the n = 13 and 19 magic-number clusters is performed. The two lowest-energy Bz13 isomers show S6 and C3 symmetry, respectively, which is compatible with the experimental results available in the literature. The Bz19 structures reported here are all non-symmetric, showing two central Bz molecules surrounded by 12 nearest-neighbor monomers in the case of the five lowest-energy structures.

  7. Accurate Low-mass Stellar Models of KOI-126

    NASA Astrophysics Data System (ADS)

    Feiden, Gregory A.; Chaboyer, Brian; Dotter, Aaron

    2011-10-01

    The recent discovery of an eclipsing hierarchical triple system with two low-mass stars in a close orbit (KOI-126) by Carter et al. appeared to reinforce the evidence that theoretical stellar evolution models are not able to reproduce the observational mass-radius relation for low-mass stars. We present a set of stellar models for the three stars in the KOI-126 system that show excellent agreement with the observed radii. This agreement appears to be due to the equation of state implemented by our code. A significant dispersion in the observed mass-radius relation for fully convective stars is demonstrated; indicative of the influence of physics currently not incorporated in standard stellar evolution models. We also predict apsidal motion constants for the two M dwarf companions. These values should be observationally determined to within 1% by the end of the Kepler mission.

  8. Inflation model building with an accurate measure of e-folding

    NASA Astrophysics Data System (ADS)

    Chongchitnan, Sirichai

    2016-08-01

    It has become standard practice to take the logarithmic growth of the scale factor as a measure of the amount of inflation, despite the well-known fact that this is only an approximation for the true amount of inflation required to solve the horizon and flatness problems. The aim of this work is to show how this approximation can be completely avoided using an alternative framework for inflation model building. We show that using the inverse Hubble radius, ℋ = aH, as the key dynamical parameter, the correct number of e-foldings arises naturally as a measure of inflation. As an application, we present an interesting model in which the entire inflationary dynamics can be solved analytically and exactly, and, in special cases, reduces to the familiar class of power-law models.

  9. Comparative Protein Structure Modeling Using MODELLER.

    PubMed

    Webb, Benjamin; Sali, Andrej

    2014-09-08

    Functional characterization of a protein sequence is one of the most frequent problems in biology. This task is usually facilitated by accurate three-dimensional (3-D) structure of the studied protein. In the absence of an experimentally determined structure, comparative or homology modeling can sometimes provide a useful 3-D model for a protein that is related to at least one known protein structure. Comparative modeling predicts the 3-D structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described.
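
    The comparative-modeling steps listed above map onto a short MODELLER script; a minimal sketch is shown below, assuming a hypothetical alignment file 'target-template.ali' and template code '1abcA'. The class and attribute names follow standard MODELLER automodel usage, but exact spellings differ slightly between MODELLER versions, so the distribution's own examples should be consulted.

        # Minimal comparative-modeling sketch in the style of the MODELLER tutorials.
        # File names and the template code are hypothetical placeholders.
        from modeller import environ
        from modeller.automodel import automodel

        env = environ()
        env.io.atom_files_directory = ['.']           # where template PDB files live

        a = automodel(env,
                      alnfile='target-template.ali',  # target-template alignment
                      knowns='1abcA',                 # template structure code
                      sequence='target')              # target name in the alignment
        a.starting_model = 1
        a.ending_model = 5                            # build five candidate models
        a.make()                                      # model building and evaluation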

  10. Towards more accurate numerical modeling of impedance based high frequency harmonic vibration

    NASA Astrophysics Data System (ADS)

    Lim, Yee Yan; Kiong Soh, Chee

    2014-03-01

    The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.

  11. Accurate first principles model potentials for intermolecular interactions.

    PubMed

    Gordon, Mark S; Smith, Quentin A; Xu, Peng; Slipchenko, Lyudmila V

    2013-01-01

    The general effective fragment potential (EFP) method provides model potentials for any molecule that is derived from first principles, with no empirically fitted parameters. The EFP method has been interfaced with most currently used ab initio single-reference and multireference quantum mechanics (QM) methods, ranging from Hartree-Fock and coupled cluster theory to multireference perturbation theory. The most recent innovations in the EFP model have been to make the computationally expensive charge transfer term much more efficient and to interface the general EFP dispersion and exchange repulsion interactions with QM methods. Following a summary of the method and its implementation in generally available computer programs, these most recent new developments are discussed.

  12. Simulation model accurately estimates total dietary iodine intake.

    PubMed

    Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C

    2009-07-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both former and current legislation, iodine intake was adequate for a large part of the Dutch population, but some young children (<5%) were at risk of intakes that were too low. In the scenario of a potential future situation using lower salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased, unless many more foods will contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.
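
    The combination of deterministic and probabilistic elements described above can be illustrated with a tiny Monte Carlo sketch: a dietary iodine contribution plus uncertain contributions from discretionary iodized salt. All distributions and numbers below are invented for illustration and are not the model's actual parameters.

        import numpy as np

        # Toy simulation of habitual iodine intake (ug/day); all inputs invented.
        rng = np.random.default_rng(42)
        n = 100_000

        diet_iodine = rng.lognormal(mean=np.log(80.0), sigma=0.4, size=n)  # from foods
        salt_grams = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n)    # discretionary salt
        uses_iodized = rng.random(n) < 0.65            # assumed fraction using iodized salt
        iodine_per_gram = rng.uniform(20.0, 30.0, n)   # assumed ug iodine per g iodized salt

        intake = diet_iodine + uses_iodized * salt_grams * iodine_per_gram
        print("median intake (ug/day):", np.median(intake))
        print("fraction below 70 ug/day:", np.mean(intake < 70.0))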

  13. Structural system identification: Structural dynamics model validation

    SciTech Connect

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  14. Accurate two-dimensional model of an arrayed-waveguide grating demultiplexer and optimal design based on the reciprocity theory.

    PubMed

    Dai, Daoxin; He, Sailing

    2004-12-01

    An accurate two-dimensional (2D) model is introduced for the simulation of an arrayed-waveguide grating (AWG) demultiplexer by integrating the field distribution along the vertical direction. The equivalent 2D model has almost the same accuracy as the original three-dimensional model and is more accurate for the AWG considered here than the conventional 2D model based on the effective-index method. To further improve the computational efficiency, the reciprocity theory is applied to the optimal design of a flat-top AWG demultiplexer with a special input structure.

  15. Accurate numerical solutions for elastic-plastic models. [LMFBR

    SciTech Connect

    Schreyer, H. L.; Kulak, R. F.; Kramer, J. M.

    1980-03-01

    The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated.
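
    The radial-return idea referred to above can be sketched for the simpler three-dimensional (or plane-strain) J2 case with linear isotropic hardening; the plane-stress case treated in the report additionally enforces a zero normal-stress constraint, which is not shown here. Material constants are placeholders.

        import numpy as np

        # Elastic-predictor / radial-return step for von Mises (J2) plasticity
        # with linear isotropic hardening. 3D form only; placeholder constants.
        def radial_return(strain_inc, stress, eps_p_bar,
                          E=200e9, nu=0.3, sigma_y0=250e6, H=1e9):
            mu = E / (2 * (1 + nu))
            lam = E * nu / ((1 + nu) * (1 - 2 * nu))
            I = np.eye(3)

            # Elastic predictor.
            stress_trial = stress + lam * np.trace(strain_inc) * I + 2 * mu * strain_inc
            s_trial = stress_trial - np.trace(stress_trial) / 3.0 * I  # deviator
            q_trial = np.sqrt(1.5 * np.sum(s_trial * s_trial))         # von Mises stress
            f_trial = q_trial - (sigma_y0 + H * eps_p_bar)

            if f_trial <= 0.0:                  # elastic step
                return stress_trial, eps_p_bar

            # Radial return: scale the deviator back onto the updated yield surface.
            dgamma = f_trial / (3 * mu + H)     # plastic multiplier increment
            n = s_trial / q_trial               # return direction
            return stress_trial - 3 * mu * dgamma * n, eps_p_bar + dgamma

        stress = np.zeros((3, 3))
        d_eps = np.diag([2e-3, -0.6e-3, -0.6e-3])   # strain increment beyond yield
        print(radial_return(d_eps, stress, 0.0))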

  16. Accurate Modeling of X-ray Extinction by Interstellar Grains

    NASA Astrophysics Data System (ADS)

    Hoffman, John; Draine, B. T.

    2016-02-01

    Interstellar abundance determinations from fits to X-ray absorption edges often rely on the incorrect assumption that scattering is insignificant and can be ignored. We show instead that scattering contributes significantly to the attenuation of X-rays for realistic dust grain size distributions and substantially modifies the spectrum near absorption edges of elements present in grains. The dust attenuation modules used in major X-ray spectral fitting programs do not take this into account. We show that the consequences of neglecting scattering on the determination of interstellar elemental abundances are modest; however, scattering (along with uncertainties in the grain size distribution) must be taken into account when near-edge extinction fine structure is used to infer dust mineralogy. We advertise the benefits and accuracy of anomalous diffraction theory for both X-ray halo analysis and near edge absorption studies. We present an open source Fortran suite, General Geometry Anomalous Diffraction Theory (GGADT), that calculates X-ray absorption, scattering, and differential scattering cross sections for grains of arbitrary geometry and composition.
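
    For a homogeneous, non-absorbing sphere, anomalous diffraction theory gives the classic van de Hulst extinction efficiency Q_ext = 2 - (4/rho) sin(rho) + (4/rho^2)(1 - cos(rho)), with phase-shift parameter rho = 2x|m - 1| and size parameter x = 2*pi*a/lambda. The snippet below evaluates that closed form as a minimal illustration of the ADT approach; GGADT itself handles arbitrary geometries, absorption and composition, and the numbers used here are illustrative only.

        import numpy as np

        # Van de Hulst ADT extinction efficiency for a non-absorbing sphere.
        def q_ext_adt(radius_um, wavelength_um, m_minus_one):
            x = 2.0 * np.pi * radius_um / wavelength_um    # size parameter
            rho = 2.0 * x * abs(m_minus_one)               # phase-shift parameter
            return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho ** 2) * (1.0 - np.cos(rho))

        # Illustrative values only: 0.1 um grain, ~1 keV X-ray (1.24 nm), small |m - 1|.
        print(q_ext_adt(radius_um=0.1, wavelength_um=1.24e-3, m_minus_one=1e-4))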

  17. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples.

    PubMed

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-05-01

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis.

  18. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples

    PubMed Central

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-01-01

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. PMID:26833260

  19. Accurate determination of atomic structure of multiwalled carbon nanotubes by nondestructive nanobeam electron diffraction

    SciTech Connect

    Liu Zejian; Zhang Qi; Qin Luchang

    2005-05-09

    We report a method that allows direct, systematic, and accurate determination of the atomic structure of multiwalled carbon nanotubes by analyzing the scattering intensities on the nonequatorial layer lines in the electron diffraction pattern. Complete structure determination of a quadruple-walled carbon nanotube is described as an example, and it was found that the intertubular distance varied from 0.36 nm to 0.5 nm with a mean value of 0.42 nm.

  20. Accurate description of aqueous carbonate ions: an effective polarization model verified by neutron scattering.

    PubMed

    Mason, Philip E; Wernersson, Erik; Jungwirth, Pavel

    2012-07-19

    The carbonate ion plays a central role in the biochemical formation of the shells of aquatic life, which is an important path for carbon dioxide sequestration. Given the vital role of carbonate in this and other contexts, it is imperative to develop accurate models for such a high-charge-density ion. As a divalent ion, carbonate has a strong polarizing effect on surrounding water molecules. This raises the question of whether it is possible to describe such systems accurately without including polarization. It has recently been suggested that the lack of electronic polarization in nonpolarizable water models can be effectively compensated by introducing an electronic dielectric continuum, which, with respect to the forces between atoms, is equivalent to rescaling the ionic charges. Given how widely nonpolarizable models are used to model electrolyte solutions, establishing the experimental validity of this suggestion is imperative. Here, we examine a stringent test for such models: a comparison of the difference between the neutron scattering structure factors of K2CO3 vs. KNO3 solutions and that predicted by molecular dynamics simulations for various models of the same systems. We compare standard nonpolarizable simulations in SPC/E water to analogous simulations with effective ion charges, as well as simulations in explicitly polarizable POL3 water (which, however, has only about half the experimental polarizability). It is found that the simulation with rescaled charges is in very good agreement with the experimental data, significantly better than the nonpolarizable simulation and even better than the explicitly polarizable POL3 model.
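
    The electronic-dielectric-continuum prescription mentioned above amounts to scaling each ionic charge by 1/sqrt(eps_el), where eps_el ~ 1.78 is the high-frequency (electronic) dielectric constant of water, so a formal charge of -2 e on carbonate becomes roughly -1.5 e. A minimal sketch of that rescaling:

        # Electronic continuum correction: scale ionic charges by 1/sqrt(eps_el).
        eps_el = 1.78                 # electronic dielectric constant of water
        scale = eps_el ** -0.5        # ~0.75
        for name, q in [("K+", +1.0), ("CO3 2-", -2.0), ("NO3-", -1.0)]:
            print(name, round(q * scale, 3))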

  1. Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Dayton, James A., Jr.

    1997-01-01

    Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.

  2. Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Dayton, J. A., Jr.

    1998-01-01

    Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.

  3. Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Dayton, James A., Jr.

    1998-01-01

    Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code MAFIA (solution of Maxwell's equations by the Finite Integration Algorithm). Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes, making it possible, for the first time, to design a complete TWT via computer simulation.

  4. Accurate Electron Affinity of Iron and Fine Structures of Negative Iron ions

    PubMed Central

    Chen, Xiaolin; Luo, Zhihong; Li, Jiaming; Ning, Chuangang

    2016-01-01

    Ionization potential (IP) is defined as the amount of energy required to remove the most loosely bound electron of an atom, while electron affinity (EA) is defined as the amount of energy released when an electron is attached to a neutral atom. Both IP and EA are critical for understanding chemical properties of an element. In contrast to accurate IPs and structures of neutral atoms, EAs and structures of negative ions are relatively unexplored, especially for the transition metal anions. Here, we report the accurate EA value of Fe and fine structures of Fe− using the slow electron velocity imaging method. These measurements yield a very accurate EA value of Fe, 1235.93(28) cm−1 or 153.236(34) meV. The fine structures of Fe− were also successfully resolved. The present work provides a reliable benchmark for theoretical calculations, and also paves the way for improving the EA measurements of other transition metal atoms to the sub cm−1 accuracy. PMID:27138292

  5. Accurate Electron Affinity of Iron and Fine Structures of Negative Iron ions

    NASA Astrophysics Data System (ADS)

    Chen, Xiaolin; Luo, Zhihong; Li, Jiaming; Ning, Chuangang

    2016-05-01

    Ionization potential (IP) is defined as the amount of energy required to remove the most loosely bound electron of an atom, while electron affinity (EA) is defined as the amount of energy released when an electron is attached to a neutral atom. Both IP and EA are critical for understanding chemical properties of an element. In contrast to accurate IPs and structures of neutral atoms, EAs and structures of negative ions are relatively unexplored, especially for the transition metal anions. Here, we report the accurate EA value of Fe and fine structures of Fe‑ using the slow electron velocity imaging method. These measurements yield a very accurate EA value of Fe, 1235.93(28) cm‑1 or 153.236(34) meV. The fine structures of Fe‑ were also successfully resolved. The present work provides a reliable benchmark for theoretical calculations, and also paves the way for improving the EA measurements of other transition metal atoms to the sub cm‑1 accuracy.

  6. Accurate Electron Affinity of Iron and Fine Structures of Negative Iron ions.

    PubMed

    Chen, Xiaolin; Luo, Zhihong; Li, Jiaming; Ning, Chuangang

    2016-01-01

    Ionization potential (IP) is defined as the amount of energy required to remove the most loosely bound electron of an atom, while electron affinity (EA) is defined as the amount of energy released when an electron is attached to a neutral atom. Both IP and EA are critical for understanding chemical properties of an element. In contrast to accurate IPs and structures of neutral atoms, EAs and structures of negative ions are relatively unexplored, especially for the transition metal anions. Here, we report the accurate EA value of Fe and fine structures of Fe(-) using the slow electron velocity imaging method. These measurements yield a very accurate EA value of Fe, 1235.93(28) cm(-1) or 153.236(34) meV. The fine structures of Fe(-) were also successfully resolved. The present work provides a reliable benchmark for theoretical calculations, and also paves the way for improving the EA measurements of other transition metal atoms to the sub cm(-1) accuracy. PMID:27138292

  7. Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach.

    PubMed

    Saa, Pedro A; Nielsen, Lars K

    2016-01-01

    Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285
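
    For readers unfamiliar with Approximate Bayesian Computation, a toy rejection-ABC sketch in Python conveys the core idea (the paper uses a far more sophisticated sampler and realistic kinetic models; the one-parameter "pathway" below is purely illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate(k):
            """Toy 'kinetic model': steady-state flux of a one-parameter pathway."""
            return k / (1.0 + k)

        observed = simulate(2.0) + rng.normal(0.0, 0.01)   # synthetic data

        def abc_rejection(n_samples=5000, tol=0.02):
            """Rejection ABC: keep prior draws whose simulated data lie within
            `tol` of the observation.  Returns an approximate posterior sample of k."""
            accepted = []
            for _ in range(n_samples):
                k = rng.uniform(0.1, 10.0)              # prior draw
                if abs(simulate(k) - observed) < tol:   # distance criterion
                    accepted.append(k)
            return np.array(accepted)

        posterior = abc_rejection()
        print(posterior.mean(), posterior.std())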

  8. Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

    Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285

  9. Development of a New Model for Accurate Prediction of Cloud Water Deposition on Vegetation

    NASA Astrophysics Data System (ADS)

    Katata, G.; Nagai, H.; Wrzesinsky, T.; Klemm, O.; Eugster, W.; Burkard, R.

    2006-12-01

    Scarcity of water resources in arid and semi-arid areas is of great concern in the light of population growth and food shortages. Several experiments focusing on cloud (fog) water deposition on the land surface suggest that cloud water plays an important role in the water resources of such regions. A one-dimensional vegetation model that includes the process of cloud water deposition has been developed to better predict deposition on vegetation. New schemes to calculate the capture efficiency of leaves, the cloud droplet size distribution, and the gravitational flux of cloud water were incorporated in the model. Model calculations were compared with data acquired at the Norway spruce forest at the Waldstein site, Germany. The high performance of the model was confirmed by comparing calculated net radiation, sensible and latent heat, and cloud water fluxes over the forest with measurements. The present model provided a better prediction of measured turbulent and gravitational fluxes of cloud water over the canopy than the Lovett model, a commonly used cloud water deposition model. Detailed calculations of evapotranspiration and of the turbulent exchange of heat and water vapor within the canopy are necessary for accurate prediction of cloud water deposition. Numerical experiments examining the dependence of cloud water deposition on vegetation species (coniferous and broad-leaved trees, flat and cylindrical grasses) and structure (Leaf Area Index (LAI) and canopy height) were performed using the presented model. The results indicate that differences in leaf shape and size have a large impact on cloud water deposition. Cloud water deposition also varies with the growth of vegetation and the seasonal change of LAI. We found that the coniferous trees whose height and LAI are 24 m and 2.0 m2 m-2, respectively, produce the largest amount of cloud water deposition in all combinations of vegetation species and structures in the

  10. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    NASA Astrophysics Data System (ADS)

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

    Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.

  11. An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).

    PubMed

    Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert

    2015-08-01

    The active site of mammalian purple acid phosphatases (PAPs) has a dinuclear iron site in two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)); the heterovalent form is the catalytically active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Therefore, two sites with different coordination geometries to stabilize the heterovalent active form and, in addition, with hydrogen bond donors to enable the fixation of the substrate and release of the product, are believed to be required for catalytically competent model systems. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate-concentration-dependent studies, leading to pH profiles, catalytic efficiencies and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255

  12. Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1997-01-01

    The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary rendering the previous models inaccurate. In the course of this research it has been demonstrated that using the simulation code, MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters including dispersion, on-axis interaction impedance and attenuation have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various

  13. Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.

    PubMed

    Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M

    2016-06-21

    We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy.
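
    For reference, the quasi-harmonic approximation mentioned above evaluates the Helmholtz free energy of a crystal as a function of its cell volume; a standard textbook form (not specific to this paper) is

        F(V,T) = E_{\mathrm{latt}}(V) + \sum_{\mathbf{k},i}\left[\tfrac{1}{2}\hbar\omega_{\mathbf{k},i}(V) + k_B T \ln\!\left(1 - e^{-\hbar\omega_{\mathbf{k},i}(V)/k_B T}\right)\right]

    where the sum runs over phonon modes; the finite-temperature structure is then obtained by minimizing F with respect to the cell parameters.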

  14. CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval

    PubMed Central

    Karim, Rezaul; Aziz, Mohd. Momin Al; Shatabda, Swakkhar; Rahman, M. Sohel; Mia, Md. Abul Kashem; Zaman, Farhana; Rakin, Salman

    2015-01-01

    The number of entries in structural databases of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such large databases have turned out to be key to the comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for retrieving proteins whose tertiary structures are similar to a query protein from a large database. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of oriented gradients (CoMOGrad) and the pyramid histogram of oriented gradients (PHOG), and from the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use from this website: http://research.buet.ac.bd:8080/Comograd/. PMID:26293226
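
    A minimal sketch of the retrieval step described above, ranking precomputed descriptors by Euclidean distance to the query (the CoMOGrad/PHOG feature extraction itself is not reproduced; identifiers and vectors are placeholders):

        import numpy as np

        def rank_by_euclidean(query_descriptor, database_descriptors, top_k=5):
            """Rank database entries by Euclidean distance to the query descriptor.

            `database_descriptors` is a dict {protein_id: 1-D feature vector};
            the descriptors (e.g. CoMOGrad/PHOG vectors) are assumed precomputed.
            """
            q = np.asarray(query_descriptor, dtype=float)
            dists = {pid: float(np.linalg.norm(np.asarray(v, dtype=float) - q))
                     for pid, v in database_descriptors.items()}
            return sorted(dists.items(), key=lambda item: item[1])[:top_k]

        db = {"1abc": [0.1, 0.9, 0.3], "2xyz": [0.8, 0.2, 0.5], "3pqr": [0.1, 0.8, 0.4]}
        print(rank_by_euclidean([0.1, 0.85, 0.35], db, top_k=2))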

  15. MONA: An accurate two-phase well flow model based on phase slippage

    SciTech Connect

    Asheim, H.

    1984-10-01

    In two-phase flow, holdup and pressure loss are related to interfacial slippage. A model based on the slippage concept has been developed and tested using production well data from Forties and the Ekofisk area, and flowline data from Prudhoe Bay. The developed model turned out to be considerably more accurate than the standard models used for comparison.

  16. Accurate assessment of mass, models and resolution by small-angle scattering

    PubMed Central

    Rambo, Robert P.; Tainer, John A.

    2013-01-01

    Modern small-angle scattering (SAS) experiments with X-rays or neutrons provide a comprehensive, resolution-limited observation of the thermodynamic state. However, methods for evaluating mass and validating SAS-based models and resolution have been inadequate. Here, we define the volume-of-correlation, Vc: a SAS invariant derived from the scattered intensities that is specific to the structural state of the particle, yet independent of concentration and of the requirement of a compact, folded particle. We show that Vc defines a ratio, Qr, that determines the molecular mass of proteins or RNA ranging from 10 to 1,000 kDa. Furthermore, we propose a statistically robust method for assessing model-data agreement (χ2free) akin to cross-validation. Our approach prevents over-fitting of the SAS data and can be used with a newly defined metric, Rsas, for quantitative evaluation of resolution. Together, these metrics (Vc, Qr, χ2free, and Rsas) provide analytical tools for unbiased and accurate macromolecular structural characterization in solution. PMID:23619693
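
    As we read the definitions, Vc is the ratio of the forward scattering I(0) to the integral of q·I(q), and Qr = Vc²/Rg; a numerical sketch is given below (the empirical calibration constants that convert Qr to molecular mass are not reproduced here, and the toy profile is purely illustrative):

        import numpy as np

        def volume_of_correlation(q, intensity, i0, rg):
            """Sketch of the SAS invariants described above (our reading of the
            definitions; the empirical mass-calibration constants are omitted).

            q         : scattering vector magnitudes (1/Angstrom)
            intensity : background-subtracted I(q)
            i0, rg    : forward scattering and radius of gyration (e.g. Guinier fit)
            """
            total = np.trapz(q * intensity, q)   # integral of q*I(q)
            vc = i0 / total                      # volume of correlation
            qr = vc ** 2 / rg                    # ratio used for mass estimation
            return vc, qr

        q = np.linspace(0.005, 0.3, 300)
        i0, rg = 100.0, 20.0
        intensity = i0 * np.exp(-(q * rg) ** 2 / 3.0)   # toy Guinier-like profile
        print(volume_of_correlation(q, intensity, i0, rg))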

  17. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  18. Accurate structure and dynamics of the metal-site of paramagnetic metalloproteins from NMR parameters using natural bond orbitals.

    PubMed

    Hansen, D Flemming; Westler, William M; Kunze, Micha B A; Markley, John L; Weinhold, Frank; Led, Jens J

    2012-03-14

    A natural bond orbital (NBO) analysis of unpaired electron spin density in metalloproteins is presented, which allows a fast and robust calculation of paramagnetic NMR parameters. Approximately 90% of the unpaired electron spin density occupies metal-ligand NBOs, allowing the majority of the density to be modeled by only a few NBOs that reflect the chemical bonding environment. We show that the paramagnetic relaxation rate of protons can be calculated accurately using only the metal-ligand NBOs and that these rates are in good agreement with corresponding rates measured experimentally. This holds, in particular, for protons of ligand residues where the point-dipole approximation breaks down. To describe the paramagnetic relaxation of heavy nuclei, also the electron spin density in the local orbitals must be taken into account. Geometric distance restraints for (15)N can be derived from the paramagnetic relaxation enhancement and the Fermi contact shift when local NBOs are included in the analysis. Thus, the NBO approach allows us to include experimental paramagnetic NMR parameters of (15)N nuclei as restraints in a structure optimization protocol. We performed a molecular dynamics simulation and structure determination of oxidized rubredoxin using the experimentally obtained paramagnetic NMR parameters of (15)N. The corresponding structures obtained are in good agreement with the crystal structure of rubredoxin. Thus, the NBO approach allows an accurate description of the geometric structure and the dynamics of metalloproteins, when NMR parameters are available of nuclei in the immediate vicinity of the metal-site.

  19. Accurate Structure and Dynamics of the Metal-Site of Paramagnetic Metalloproteins from NMR Parameters Using Natural Bond Orbitals

    PubMed Central

    2012-01-01

    A natural bond orbital (NBO) analysis of unpaired electron spin density in metalloproteins is presented, which allows a fast and robust calculation of paramagnetic NMR parameters. Approximately 90% of the unpaired electron spin density occupies metal–ligand NBOs, allowing the majority of the density to be modeled by only a few NBOs that reflect the chemical bonding environment. We show that the paramagnetic relaxation rate of protons can be calculated accurately using only the metal–ligand NBOs and that these rates are in good agreement with corresponding rates measured experimentally. This holds, in particular, for protons of ligand residues where the point-dipole approximation breaks down. To describe the paramagnetic relaxation of heavy nuclei, also the electron spin density in the local orbitals must be taken into account. Geometric distance restraints for 15N can be derived from the paramagnetic relaxation enhancement and the Fermi contact shift when local NBOs are included in the analysis. Thus, the NBO approach allows us to include experimental paramagnetic NMR parameters of 15N nuclei as restraints in a structure optimization protocol. We performed a molecular dynamics simulation and structure determination of oxidized rubredoxin using the experimentally obtained paramagnetic NMR parameters of 15N. The corresponding structures obtained are in good agreement with the crystal structure of rubredoxin. Thus, the NBO approach allows an accurate description of the geometric structure and the dynamics of metalloproteins, when NMR parameters are available of nuclei in the immediate vicinity of the metal-site. PMID:22329704

  20. Accurate Damage Location in Complex Composite Structures and Industrial Environments using Acoustic Emission

    NASA Astrophysics Data System (ADS)

    Eaton, M.; Pearson, M.; Lee, W.; Pullin, R.

    2015-07-01

    The ability to accurately locate damage in any given structure is a highly desirable attribute of an effective structural health monitoring (SHM) system and could help to reduce operating costs and improve safety. This becomes a far greater challenge in complex geometries and materials, such as modern composite airframes. The poor translation of promising laboratory-based SHM demonstrators to industrial environments forms a barrier to commercial uptake of the technology. The acoustic emission (AE) technique is a passive NDT method that detects elastic stress waves released by the growth of damage. It offers very sensitive damage detection, using a sparse array of sensors to detect and globally locate damage within a structure. However, its application to complex structures commonly yields poor accuracy due to anisotropic wave propagation and the interruption of wave propagation by structural features such as holes and thickness changes. This work adopts an empirical mapping technique for AE location, known as Delta T Mapping, which uses experimental training data to account for such structural complexities. The technique is applied to a complex-geometry composite aerospace structure undergoing certification testing. The component consists of a carbon fibre composite tube with varying wall thickness and multiple holes that was loaded under bending. The damage location was validated using X-ray CT scanning, and the Delta T Mapping technique was shown to improve location accuracy compared with commercial algorithms. The onset and progression of damage were monitored throughout the test and used to inform future design iterations.
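
    The Delta T Mapping idea can be sketched in a few lines: each training grid point stores the measured arrival-time differences between sensor pairs, and a new event is assigned to the grid point whose stored differences best match the observed ones (a simplified illustration; the published technique includes interpolation and data cleansing that are omitted here, and all sensor names and numbers below are placeholders):

        import numpy as np

        def locate_event(observed_dt, delta_t_maps, grid_points):
            """Minimal sketch of Delta-T-style source location.

            observed_dt  : dict {(sensor_i, sensor_j): measured arrival-time difference}
            delta_t_maps : dict {(sensor_i, sensor_j): array of training delta-t values,
                           one per grid point}
            grid_points  : array of (x, y) coordinates of the training grid
            Returns the grid point whose stored delta-t values best match the
            observation in a least-squares sense.
            """
            error = np.zeros(len(grid_points))
            for pair, dt in observed_dt.items():
                error += (delta_t_maps[pair] - dt) ** 2
            return grid_points[int(np.argmin(error))]

        grid = np.array([[0.0, 0.0], [50.0, 0.0], [100.0, 0.0]])        # mm, training points
        maps = {("S1", "S2"): np.array([-10.0, 0.0, 10.0]),             # microseconds
                ("S1", "S3"): np.array([-20.0, 0.0, 20.0])}
        print(locate_event({("S1", "S2"): 9.0, ("S1", "S3"): 18.5}, maps, grid))  # -> [100. 0.]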

  1. Global climate modeling of Saturn's atmosphere: fast and accurate radiative transfer and exploration of seasonal variability

    NASA Astrophysics Data System (ADS)

    Guerlet, Sandrine; Spiga, A.; Sylvestre, M.; Fouchet, T.; Millour, E.; Wordsworth, R.; Leconte, J.; Forget, F.

    2013-10-01

    Recent observations of Saturn's stratospheric thermal structure and composition revealed new phenomena: an equatorial oscillation in temperature, reminiscent of the Earth's Quasi-Biennial Oscillation; strong meridional contrasts of hydrocarbons; and a warm “beacon” associated with the powerful 2010 storm. These signatures cannot be reproduced by 1D photochemical and radiative models and suggest that atmospheric dynamics plays a key role. This motivated us to develop a complete 3D General Circulation Model (GCM) for Saturn, based on the LMDz hydrodynamical core, to explore the circulation, seasonal variability, and wave activity in Saturn's atmosphere. In order to closely reproduce Saturn's radiative forcing, particular emphasis was put on obtaining fast and accurate radiative transfer calculations. Our radiative model uses correlated-k distributions and a spectral discretization tailored for Saturn's atmosphere. We include internal heat flux, ring shadowing and aerosols. We will report on the sensitivity of the model to spectral discretization, spectroscopic databases, and aerosol scenarios (varying particle sizes, opacities and vertical structures). We will also discuss the radiative effect of the ring shadowing on Saturn's atmosphere. We will present a comparison of temperature fields obtained with this new radiative equilibrium model to those inferred from Cassini/CIRS observations. In the troposphere, our model reproduces the observed temperature knee caused by heating at the top of the tropospheric aerosol layer. In the lower stratosphere (around 20 mbar), the thermal structure is governed by the dynamical equatorial oscillation. In the upper stratosphere (p<0.1 mbar), our modeled temperature is 5-10 K too low compared to measurements. This suggests that processes other than radiative heating/cooling by trace

  2. PROMALS3D web server for accurate multiple protein sequence and structure alignments.

    PubMed

    Pei, Jimin; Tang, Ming; Grishin, Nick V

    2008-07-01

    Multiple sequence alignments are essential in computational sequence and structural analysis, with applications in homology detection, structure modeling, function prediction and phylogenetic analysis. We report PROMALS3D web server for constructing alignments for multiple protein sequences and/or structures using information from available 3D structures, database homologs and predicted secondary structures. PROMALS3D shows higher alignment accuracy than a number of other advanced methods. Input of PROMALS3D web server can be FASTA format protein sequences, PDB format protein structures and/or user-defined alignment constraints. The output page provides alignments with several formats, including a colored alignment augmented with useful information about sequence grouping, predicted secondary structures and consensus sequences. Intermediate results of sequence and structural database searches are also available. The PROMALS3D web server is available at: http://prodata.swmed.edu/promals3d/. PMID:18503087

  3. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    SciTech Connect

    Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  4. Getting a Picture that Is Both Accurate and Stable: Situation Models and Epistemic Validation

    ERIC Educational Resources Information Center

    Schroeder, Sascha; Richter, Tobias; Hoever, Inga

    2008-01-01

    Text comprehension entails the construction of a situation model that prepares individuals for situated action. In order to meet this function, situation model representations are required to be both accurate and stable. We propose a framework according to which comprehenders rely on epistemic validation to prevent inaccurate information from…

  5. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
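
    A generic sketch of Bayesian parameter calibration of this kind, using a random-walk Metropolis sampler against synthetic measurements, is given below (this is not the authors' surrogate-based procedure; the stand-in model, prior bounds and noise level are assumptions for illustration):

        import numpy as np

        rng = np.random.default_rng(1)

        def run_model(c_mu):
            """Stand-in for a RANS prediction at the measurement locations.
            A real calibration would call the CFD solver (or a surrogate) here."""
            x = np.linspace(0.0, 1.0, 20)
            return np.exp(-x / c_mu)

        data = run_model(0.09) + rng.normal(0.0, 0.02, 20)    # synthetic "measurements"
        sigma = 0.02                                          # assumed noise level

        def log_posterior(c_mu):
            if not (0.01 < c_mu < 0.3):                       # flat prior bounds (assumed)
                return -np.inf
            resid = data - run_model(c_mu)
            return -0.5 * np.sum((resid / sigma) ** 2)

        def metropolis(n_steps=5000, start=0.05, step=0.01):
            chain, current, logp = [], start, log_posterior(start)
            for _ in range(n_steps):
                prop = current + rng.normal(0.0, step)
                logp_prop = log_posterior(prop)
                if np.log(rng.uniform()) < logp_prop - logp:  # accept/reject
                    current, logp = prop, logp_prop
                chain.append(current)
            return np.array(chain)

        samples = metropolis()
        print(samples[1000:].mean())   # posterior mean of the calibrated coefficient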

  6. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  7. Accurate Universal Models for the Mass Accretion Histories and Concentrations of Dark Matter Halos

    NASA Astrophysics Data System (ADS)

    Zhao, D. H.; Jing, Y. P.; Mo, H. J.; Börner, G.

    2009-12-01

    A large amount of observations have constrained cosmological parameters and the initial density fluctuation spectrum to a very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum dramatically varies with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass, when

  8. ACCURATE UNIVERSAL MODELS FOR THE MASS ACCRETION HISTORIES AND CONCENTRATIONS OF DARK MATTER HALOS

    SciTech Connect

    Zhao, D. H.; Jing, Y. P.; Mo, H. J.; Boerner, G.

    2009-12-10

    A large amount of observations have constrained cosmological parameters and the initial density fluctuation spectrum to a very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum dramatically varies with mass scale in the so-called concordance LAMBDACDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and LAMBDACDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the LAMBDACDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass

  9. Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
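
    The Monte Carlo core of such light-propagation models can be illustrated with a drastically simplified photon random walk in a homogeneous slab (isotropic scattering only; a real skin model uses the Henyey-Greenstein phase function, layered 3D optical properties and refractive-index boundaries, and the coefficients below are placeholders):

        import numpy as np

        rng = np.random.default_rng(2)

        def random_walk_absorbed_depths(n_photons=10000, mu_a=0.1, mu_s=10.0, thickness=2.0):
            """Simplified Monte Carlo photon transport in a homogeneous slab (mm, 1/mm).
            Isotropic scattering only; returns the depths at which photons are absorbed."""
            mu_t = mu_a + mu_s
            absorbed = []
            for _ in range(n_photons):
                pos = np.zeros(3)
                direction = np.array([0.0, 0.0, 1.0])              # launched normally into the slab
                while True:
                    step = -np.log(1.0 - rng.uniform()) / mu_t      # free path length
                    pos = pos + step * direction
                    if pos[2] < 0.0 or pos[2] > thickness:          # escaped the slab
                        break
                    if rng.uniform() < mu_a / mu_t:                 # absorption event
                        absorbed.append(pos[2])
                        break
                    cos_t = 2.0 * rng.uniform() - 1.0               # isotropic new direction
                    sin_t = np.sqrt(1.0 - cos_t ** 2)
                    phi = 2.0 * np.pi * rng.uniform()
                    direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
            return np.array(absorbed)

        depths = random_walk_absorbed_depths()
        print(f"absorbed fraction: {depths.size / 10000:.3f}, mean depth: {depths.mean():.3f} mm")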

  10. Fuzzy Reasoning to More Accurately Determine Void Areas on Optical Micrographs of Composite Structures

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Tate, Lanetra C.; Wright, M. Clara; Caraccio, Anne

    2013-01-01

    Accomplishing the best-performing composite matrix (resin) requires that not only the processing method but also the cure cycle generate low-void-content structures. If voids are present, the performance of the composite matrix will be significantly reduced. This is usually noticed as significant reductions in matrix-dominated properties, such as compression and shear strength. Voids in composite materials are areas devoid of the composite components: matrix and fibers. Accurately characterizing and estimating the void content is critical for high-performance composite structures. One widely used method of performing void analysis on a composite structure sample is to acquire optical micrographs or Scanning Electron Microscope (SEM) images of lateral sides of the sample and retrieve the void areas within the micrographs/images using an image analysis technique. Segmentation for the retrieval and subsequent computation of void areas is challenging because the gray-scale values of the void areas are close to those of the matrix, so the segmentation typically has to be performed manually based on the histogram of the micrographs/images. An algorithm developed by NASA and based on Fuzzy Reasoning (FR) proved to overcome the difficulty of suitably differentiating void and matrix image areas with similar gray-scale values, leading not only to a more accurate estimation of void areas on composite matrix micrographs but also to a faster void analysis process, as the algorithm is fully autonomous.
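
    The idea of replacing a hard histogram threshold with a soft, fuzzy membership can be illustrated as follows (a toy sketch only; NASA's Fuzzy Reasoning algorithm is more elaborate, and the representative gray levels and fuzziness parameter here are placeholders):

        import numpy as np

        def void_membership(image, void_gray, matrix_gray, fuzziness=10.0):
            """Assign each pixel a fuzzy membership in the 'void' class.

            image                  : 2-D array of gray levels (0-255)
            void_gray, matrix_gray : representative gray levels for voids and matrix
                                     (assumed to be picked from the histogram)
            A sigmoid centred between the two representative levels gives a soft
            membership; pixels with membership > 0.5 are counted as void.
            """
            midpoint = 0.5 * (void_gray + matrix_gray)
            # voids are darker than matrix in this toy example
            return 1.0 / (1.0 + np.exp((image - midpoint) / fuzziness))

        img = np.array([[30, 32, 120], [35, 110, 125], [118, 122, 130]], dtype=float)
        m = void_membership(img, void_gray=30, matrix_gray=120)
        void_fraction = np.mean(m > 0.5)
        print(f"estimated void area fraction: {void_fraction:.2f}")   # ~0.33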

  11. Fuzzy Reasoning to More Accurately Determine Void Areas on Optical Micrographs of Composite Structures

    NASA Astrophysics Data System (ADS)

    Dominguez, Jesus A.; Tate, Lanetra C.; Wright, M. Clara; Caraccio, Anne

    2013-12-01

    Accomplishing the best-performing composite matrix (resin) requires that not only the processing method but also the cure cycle generate low-void-content structures. If voids are present, the performance of the composite matrix will be significantly reduced. This is usually noticed as significant reductions in matrix-dominated properties, such as compression and shear strength. Voids in composite materials are areas devoid of the composite components: matrix and fibers. Accurately characterizing and estimating the void content is critical for high-performance composite structures. One widely used method of performing void analysis on a composite structure sample is to acquire optical micrographs or Scanning Electron Microscope (SEM) images of lateral sides of the sample and retrieve the void areas within the micrographs/images using an image analysis technique. Segmentation for the retrieval and subsequent computation of void areas is challenging because the gray-scale values of the void areas are close to those of the matrix, so the segmentation typically has to be performed manually based on the histogram of the micrographs/images. An algorithm developed by NASA and based on Fuzzy Reasoning (FR) proved to overcome the difficulty of suitably differentiating void and matrix image areas with similar gray-scale values, leading not only to a more accurate estimation of void areas on composite matrix micrographs but also to a faster void analysis process, as the algorithm is fully autonomous.

  12. Accurate FDTD modelling for dispersive media using rational function and particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Chung, Haejun; Ha, Sang-Gyu; Choi, Jaehoon; Jung, Kyung-Young

    2015-07-01

    This article presents an accurate finite-difference time-domain (FDTD) dispersive modelling approach suitable for complex dispersive media. A quadratic complex rational function (QCRF) is used to characterise their dispersion relations. To obtain accurate QCRF coefficients, we use an analytical approach and particle swarm optimisation (PSO) together. Specifically, the analytical approach is used to obtain the QCRF matrix-solving equation, and PSO is applied to adjust a weighting function of this equation. Numerical examples illustrate the validity of the proposed FDTD dispersion model.
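
    A bare-bones particle swarm optimiser, minimising a least-squares misfit of a simple rational-function fit, illustrates the optimisation ingredient (the QCRF formulation and weighting function of the paper are not reproduced; the objective and all constants below are placeholders):

        import numpy as np

        rng = np.random.default_rng(3)

        def misfit(params):
            """Placeholder objective: squared error of a rational-function fit to
            sampled target data (a stand-in for the QCRF dispersion fit)."""
            a, b = params
            x = np.linspace(0.0, 1.0, 50)
            target = (1.0 + 2.0 * x) / (1.0 + 0.5 * x)
            model = (1.0 + a * x) / (1.0 + b * x)
            return np.sum((model - target) ** 2)

        def pso(objective, n_particles=30, n_iter=200, bounds=(-5.0, 5.0),
                w=0.7, c1=1.5, c2=1.5, dim=2):
            """Minimal particle swarm optimisation."""
            lo, hi = bounds
            pos = rng.uniform(lo, hi, (n_particles, dim))
            vel = np.zeros((n_particles, dim))
            pbest = pos.copy()
            pbest_val = np.array([objective(p) for p in pos])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.uniform(size=(2, n_particles, dim))
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                vals = np.array([objective(p) for p in pos])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, objective(gbest)

        print(pso(misfit))   # should approach a = 2.0, b = 0.5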

  13. Accurate airway segmentation based on intensity structure analysis and graph-cut

    NASA Astrophysics Data System (ADS)

    Meng, Qier; Kitsaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku

    2016-03-01

    This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms based mainly on region growing and machine learning techniques; however, these methods fail to detect the peripheral bronchial branches and cause a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of the complex bronchial airway region. Our method is composed of three steps. First, Hessian analysis is utilized to enhance line-like structures in CT volumes, and a multiscale cavity-enhancement filter is then employed to detect cavity-like structures in the enhanced result. In the second step, we utilize a support vector machine (SVM) to construct a classifier for removing the false-positive (FP) regions generated. Finally, the graph-cut algorithm is utilized to connect all of the candidate voxels to form an integrated airway tree. We applied this method to sixteen 3D chest CT volumes. The results showed that the branch detection rate of this method reaches about 77.7% without leaking into the lung parenchyma.

  14. Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers

    NASA Astrophysics Data System (ADS)

    Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas

    2016-10-01

    A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparisons of data produced by the developed model with experimental results show good agreement.

  15. Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers

    PubMed Central

    Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas

    2016-01-01

    A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparisons of data produced by the developed model with experimental results show good agreement. PMID:27713496

  16. Accurate electronic-structure description of Mn complexes: a GGA+U approach

    NASA Astrophysics Data System (ADS)

    Li, Elise Y.; Kulik, Heather; Marzari, Nicola

    2008-03-01

    Conventional density-functional approaches often fail to offer an accurate description of the spin-resolved energetics of transition-metal complexes. We focus here on Mn complexes, where many aspects of the molecular structure and the reaction mechanisms are still unresolved - most notably in the oxygen-evolving complex (OEC) of photosystem II and the manganese catalase (MC). We apply a self-consistent GGA+U approach [1], originally designed within the DFT framework for the treatment of strongly correlated materials, to describe the geometry and the electronic and magnetic properties of various manganese oxide complexes, finding very good agreement with higher-order ab initio calculations. In particular, the different oxidation states of dinuclear systems containing the [Mn2O2]^n+ (n = 2, 3, 4) core are investigated, in order to mimic the basic face unit of the OEC complex. [1] H. J. Kulik, M. Cococcioni, D. A. Scherlis, N. Marzari, Phys. Rev. Lett., 2006, 97, 103001

  17. Protein structure modeling with MODELLER.

    PubMed

    Webb, Benjamin; Sali, Andrej

    2014-01-01

    Genome sequencing projects have resulted in a rapid increase in the number of known protein sequences. In contrast, only about one-hundredth of these sequences have been characterized at atomic resolution using experimental structure determination methods. Computational protein structure modeling techniques have the potential to bridge this sequence-structure gap. In this chapter, we present an example that illustrates the use of MODELLER to construct a comparative model for a protein with unknown structure. Automation of a similar protocol has resulted in models of useful accuracy for domains in more than half of all known protein sequences.
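
    A minimal comparative-modelling script in the style of MODELLER's basic examples is shown below (the alignment file name, template code and target code are placeholders; check the MODELLER documentation for the installed version):

        # Minimal MODELLER comparative-modelling sketch (file names and codes are
        # placeholders), following the pattern of MODELLER's basic examples.
        from modeller import *
        from modeller.automodel import automodel

        env = environ()
        env.io.atom_files_directory = ['.']           # where template PDB files live

        a = automodel(env,
                      alnfile='target-template.ali',  # PIR alignment (placeholder name)
                      knowns='templateA',             # template structure code(s)
                      sequence='target')              # target sequence code
        a.starting_model = 1
        a.ending_model = 5                            # build five candidate models
        a.make()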

  18. A survey of factors contributing to accurate theoretical predictions of atomization energies and molecular structures

    NASA Astrophysics Data System (ADS)

    Feller, David; Peterson, Kirk A.; Dixon, David A.

    2008-11-01

    High level electronic structure predictions of thermochemical properties and molecular structure are capable of accuracy rivaling the very best experimental measurements as a result of rapid advances in hardware, software, and methodology. Despite the progress, real world limitations require practical approaches designed for handling general chemical systems that rely on composite strategies in which a single, intractable calculation is replaced by a series of smaller calculations. As typically implemented, these approaches produce a final, or "best," estimate that is constructed from one major component, fine-tuned by multiple corrections that are assumed to be additive. Though individually much smaller than the original, unmanageable computational problem, these corrections are nonetheless extremely costly. This study presents a survey of the widely varying magnitude of the most important components contributing to the atomization energies and structures of 106 small molecules. It combines large Gaussian basis sets and coupled cluster theory up to quadruple excitations for all systems. In selected cases, the effects of quintuple excitations and/or full configuration interaction were also considered. The availability of reliable experimental data for most of the molecules permits an expanded statistical analysis of the accuracy of the approach. In cases where reliable experimental information is currently unavailable, the present results are expected to provide some of the most accurate benchmark values available.
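
    Schematically, such composite estimates of the atomization energy take an additive form along the following lines (the exact set of corrections varies from study to study; this illustrates the strategy rather than the authors' precise recipe):

        \Sigma D_0 \approx \Delta E_{\mathrm{CCSD(T)/CBS}} + \Delta E_{\mathrm{CV}} + \Delta E_{\mathrm{SR}} + \Delta E_{\mathrm{SO}} + \Delta E_{\mathrm{HO}} - \Delta E_{\mathrm{ZPE}}

    with a frozen-core CCSD(T) complete-basis-set limit as the major component, fine-tuned by corrections for core-valence correlation (CV), scalar relativity (SR), atomic spin-orbit coupling (SO) and higher-order excitations (HO), and reduced by the zero-point vibrational energy (ZPE).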

  19. Built-in templates speed up process for making accurate models

    NASA Technical Reports Server (NTRS)

    1964-01-01

    From accurate scale drawings of a model, photographic negatives of the cross sections are printed on thin sheets of aluminum. These cross-section images are cut out and mounted, and mahogany blocks placed between them. The wood can be worked down using the aluminum as a built-in template.

  20. A Simple and Accurate Analysis of Conductivity Loss in Millimeter-Wave Helical Slow-Wave Structures

    NASA Astrophysics Data System (ADS)

    Datta, S. K.; Kumar, Lalit; Basu, B. N.

    2009-04-01

    Electromagnetic field analysis of a helix slow-wave structure was carried out and a closed-form expression was derived for the inductance per unit length of the transmission-line equivalent circuit of the structure, taking into account the actual helix tape dimensions and the surface current on the helix over the actual metallic area of the tape. The expression for the inductance per unit length thus obtained was used to estimate the increment in the inductance per unit length caused by penetration of the magnetic flux into the conducting surfaces following Wheeler’s incremental inductance rule, which was subsequently interpreted in terms of the attenuation constant of the propagating structure. The analysis is computationally simple and accurate, and inherits the accuracy of 3D electromagnetic analysis by allowing the use of dispersion characteristics obtainable from any standard electromagnetic model. The approach was benchmarked against measurements for two practical structures, and excellent agreement was observed. The analysis was subsequently applied to demonstrate the effects of conductivity on the attenuation constant of a typical broadband millimeter-wave helical slow-wave structure with respect to helix materials and copper plating on the helix, surface finish of the helix, dielectric loading, and high-temperature operation; a comparative study of these aspects is presented.

  1. Accurate molecular structure and spectroscopic properties for nucleobases: A combined computational - microwave investigation of 2-thiouracil as a case study

    PubMed Central

    Puzzarini, Cristina; Biczysko, Malgorzata; Barone, Vincenzo; Peña, Isabel; Cabezas, Carlos; Alonso, José L.

    2015-01-01

    The computational composite scheme purposely set up for accurately describing the electronic structure and spectroscopic properties of small biomolecules has been applied to the first study of the rotational spectrum of 2-thiouracil. The experimental investigation was made possible by combining the laser ablation technique with Fourier transform microwave spectrometers. The joint experimental-computational study allowed us to determine an accurate molecular structure and spectroscopic properties for the title molecule but, more importantly, it demonstrates a reliable approach for the accurate investigation of isolated small biomolecules. PMID:24002739

  2. Development of modified cable models to simulate accurate neuronal active behaviors.

    PubMed

    Elbasiouny, Sherif M

    2014-12-01

    In large network and single three-dimensional (3-D) neuron simulations, high computing speed dictates using reduced cable models to simulate neuronal firing behaviors. However, these models are unwarranted under active conditions and lack accurate representation of the dendritic active conductances that greatly shape neuronal firing. Here, realistic 3-D (R3D) models (which contain full anatomical details of dendrites) of spinal motoneurons were systematically compared with their reduced single unbranched cable (SUC, which reduces the dendrites to a single electrically equivalent cable) counterparts under passive and active conditions. The SUC models matched the R3D models' passive properties but failed to match key active properties, especially active behaviors originating from dendrites. For instance, persistent inward current (PIC) hysteresis, the secondary-range slope of the frequency-current (FI) relationship, firing hysteresis, partial deactivation of the plateau potential, staircase currents, the synaptic current transfer ratio, and regional FI relationships were not accurately reproduced by the SUC models. The oversimplification of dendritic morphology and the lack of spatial segregation of dendritic active conductances in the SUC models caused significant underestimation of those behaviors. Next, the SUC models were modified by adding key branching features in an attempt to restore their active behaviors. The addition of primary dendritic branching only partially restored some active behaviors, whereas the addition of secondary dendritic branching restored most behaviors. Importantly, the proposed modified models successfully replicated the active properties without sacrificing model simplicity, making them attractive candidates for running R3D single-neuron and network simulations with accurate firing behaviors. The present results indicate that using reduced models to examine PIC behaviors in spinal motoneurons is unwarranted.

  3. Fast, Accurate RF Propagation Modeling and Simulation Tool for Highly Cluttered Environments

    SciTech Connect

    Kuruganti, Phani Teja

    2007-01-01

    As network-centric warfare and distributed operations paradigms unfold, there is a need for robust, fast wireless network deployment tools. These tools must take into consideration the terrain of the operating theater and facilitate specific modeling of end-to-end network performance based on accurate RF propagation predictions. It is well known that empirical models cannot provide accurate, site-specific predictions of radio channel behavior. In this paper, an event-driven wave propagation simulation is proposed as a computationally efficient technique for predicting critical propagation characteristics of RF signals in cluttered environments. Convincing validation and simulator performance studies confirm the suitability of this method for indoor and urban-area RF channel modeling. By integrating our RF propagation prediction tool, RCSIM, with popular packet-level network simulators, we are able to construct an end-to-end network analysis tool for wireless networks operated in built-up urban areas.

  4. Particle Image Velocimetry Measurements in an Anatomically-Accurate Scaled Model of the Mammalian Nasal Cavity

    NASA Astrophysics Data System (ADS)

    Rumple, Christopher; Krane, Michael; Richter, Joseph; Craven, Brent

    2013-11-01

    The mammalian nose is a multi-purpose organ that houses a convoluted airway labyrinth responsible for respiratory air conditioning, filtering of environmental contaminants, and chemical sensing. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of respiratory airflow and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture an anatomically-accurate transparent model for stereoscopic particle image velocimetry (SPIV) measurements. Challenges in the design and manufacture of an index-matched anatomical model are addressed. PIV measurements are presented, which are used to validate concurrent computational fluid dynamics (CFD) simulations of mammalian nasal airflow. Supported by the National Science Foundation.

  5. Accurate identification of waveform of evoked potentials by component decomposition using discrete cosine transform modeling.

    PubMed

    Bai, O; Nakamura, M; Kanda, M; Nagamine, T; Shibasaki, H

    2001-11-01

    This study introduces a method for accurate identification of the waveform of the evoked potentials by decomposing the component responses. The decomposition was achieved by zero-pole modeling of the evoked potentials in the discrete cosine transform (DCT) domain. It was found that the DCT coefficients of a component response in the evoked potentials could be modeled sufficiently by a second order transfer function in the DCT domain. The decomposition of the component responses was approached by using partial expansion of the estimated model for the evoked potentials, and the effectiveness of the decomposition method was evaluated both qualitatively and quantitatively. Because of the overlap of the different component responses, the proposed method enables an accurate identification of the evoked potentials, which is useful for clinical and neurophysiological investigations.
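
    The sketch below illustrates the separation step that underlies the decomposition described above: when overlapping components can each be represented by a second-order rational model, a partial-fraction expansion of the combined model recovers the individual pole pairs. The filter coefficients and the use of scipy are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch (not the authors' code): separating two overlapping
    # second-order components by partial-fraction expansion, the step that
    # underlies the DCT-domain decomposition described above.
    import numpy as np
    from scipy import signal

    # Two hypothetical second-order models, each standing in for the
    # DCT-domain representation of one component response.
    b1, a1 = [1.0, 0.0], [1.0, -1.6, 0.81]   # poles ~0.9*exp(+/-0.48j)
    b2, a2 = [0.5, 0.0], [1.0, -0.9, 0.64]   # poles ~0.8*exp(+/-0.97j)

    # Overall model = H1 + H2 in common-denominator form, i.e. the "evoked
    # potential" whose components overlap and cannot be separated directly.
    b = np.polyadd(np.polymul(b1, a2), np.polymul(b2, a1))
    a = np.polymul(a1, a2)

    # Partial-fraction expansion recovers the poles of the individual
    # components, which is how the overlapping responses are pulled apart.
    r, p, k = signal.residuez(b, a)
    print("recovered poles:", np.sort_complex(p))
    print("original poles :", np.sort_complex(np.concatenate(
        [np.roots(a1), np.roots(a2)])))
    ```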

  6. Accurate coarse-grained models for mixtures of colloids and linear polymers under good-solvent conditions

    SciTech Connect

    D’Adamo, Giuseppe; Pelissetto, Andrea; Pierleoni, Carlo

    2014-12-28

    A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero-density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂_g/R_c, where R̂_g is the zero-density polymer radius of gyration and R_c is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
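
    For readers unfamiliar with the iterative Boltzmann inversion scheme mentioned above, the following sketch shows the core update step: the tabulated pair potential is corrected by the log-ratio of the current and target radial distribution functions. The grids, placeholder g(r) curves and damping factor are assumptions for illustration, not values from the paper.

    ```python
    # Minimal sketch (assumed, not the authors' code) of one iterative
    # Boltzmann inversion (IBI) step: the pair potential is corrected by the
    # log-ratio of the current and target radial distribution functions.
    import numpy as np

    def ibi_update(U, g_current, g_target, kT=1.0, alpha=1.0, eps=1e-12):
        """Return the updated tabulated potential U_{k+1}(r).

        U, g_current, g_target are arrays on the same r-grid;
        alpha < 1 damps the update for stability.
        """
        correction = kT * np.log((g_current + eps) / (g_target + eps))
        return U + alpha * correction

    # Usage sketch: start from the potential of mean force of the target
    # (full-monomer) correlation function and iterate, re-running the CG
    # simulation between updates (the simulation step is omitted here).
    r = np.linspace(0.1, 5.0, 200)
    g_target = 1.0 - np.exp(-r)            # placeholder target g(r)
    U = -1.0 * np.log(g_target + 1e-12)    # PMF initial guess (kT = 1)
    g_current = 1.0 - np.exp(-1.2 * r)     # placeholder from a CG "simulation"
    U = ibi_update(U, g_current, g_target, alpha=0.5)
    ```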

  7. The S-model: A highly accurate MOST model for CAD

    NASA Astrophysics Data System (ADS)

    Satter, J. H.

    1986-09-01

    A new MOST model which combines simplicity and a logical structure with a high accuracy of only 0.5-4.5% is presented. The model is suited for enhancement and depletion devices with either large or small dimensions. It includes the effects of scattering and carrier-velocity saturation as well as the influence of the intrinsic source and drain series resistance. The decrease of the drain current due to substrate bias is incorporated too. The model is in the first place intended for digital purposes. All necessary quantities are calculated in a straightforward manner without iteration. An almost entirely new way of determining the parameters is described and a new cluster parameter is introduced, which is responsible for the high accuracy of the model. The total number of parameters is 7. A still simpler β expression is derived, which is suitable for only one value of the substrate bias and contains only three parameters, while maintaining the accuracy. The way in which the parameters are determined is readily suited for automatic measurement. A simple linear regression procedure programmed in the computer, which controls the measurements, produces the parameter values.

  8. Accurate path integration in continuous attractor network models of grid cells.

    PubMed

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other. PMID:19229307
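
    As a simple numerical illustration of the error-accumulation question addressed above (not the attractor network itself), the sketch below integrates a noisy velocity signal and shows that dead-reckoning position error grows roughly as the square root of integration time. The noise level, speed and durations are arbitrary assumptions.

    ```python
    # Minimal illustration (not the attractor model itself): how noise in the
    # velocity signal makes dead-reckoning error grow with integration time,
    # the quantity the study bounds for continuous attractor networks.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, T, n_trials = 0.01, 600.0, 200           # time step (s), duration (s)
    steps = int(T / dt)
    sigma = 0.05                                 # assumed velocity noise (m/s)

    true_v = 0.2 * np.ones(steps)                # constant 0.2 m/s run
    errors = np.empty(n_trials)
    for k in range(n_trials):
        noisy_v = true_v + sigma * rng.standard_normal(steps)
        err = np.cumsum((noisy_v - true_v) * dt)  # integrated position error
        errors[k] = err[-1]

    # RMS error after T seconds grows ~ sigma * sqrt(T * dt) for white noise.
    print("RMS position error after %.0f s: %.3f m" % (T, errors.std()))
    print("white-noise prediction          : %.3f m" % (sigma * np.sqrt(T * dt)))
    ```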

  9. Can phenological models predict tree phenology accurately under climate change conditions?

    NASA Astrophysics Data System (ADS)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has advanced globally by 2.3 days per decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break that varies from year to year. So far, one-phase models have been able to predict tree budburst and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay or even compromise endodormancy break at the species' equatorward range limits.
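
    The sketch below contrasts the two model families described above with a minimal one-phase (forcing-only, growing-degree-day) model and a two-phase (chilling followed by forcing) model. The thresholds, chilling/forcing requirements and synthetic temperature series are illustrative assumptions, not the calibrated models evaluated in the study.

    ```python
    # Minimal sketch (illustrative parameter values, not the calibrated models
    # discussed above) of the two model families: a one-phase forcing model and
    # a two-phase chilling + forcing model predicting a budburst day.
    import numpy as np

    def one_phase_budburst(tmean, t_base=5.0, f_crit=150.0, t_start=0):
        """Growing-degree-day model: budburst when forcing units reach f_crit."""
        forcing = 0.0
        for day in range(t_start, len(tmean)):
            forcing += max(tmean[day] - t_base, 0.0)
            if forcing >= f_crit:
                return day
        return None

    def two_phase_budburst(tmean, t_chill=5.0, c_crit=60.0,
                           t_base=5.0, f_crit=150.0):
        """Chill days accumulate below t_chill until c_crit is reached
        (endodormancy break); forcing accumulation starts only afterwards."""
        chill, forcing, dormancy_broken = 0.0, 0.0, False
        for day in range(len(tmean)):
            if not dormancy_broken:
                chill += 1.0 if tmean[day] < t_chill else 0.0
                dormancy_broken = chill >= c_crit
            else:
                forcing += max(tmean[day] - t_base, 0.0)
                if forcing >= f_crit:
                    return day
        return None

    # Synthetic daily mean temperatures from 1 November (day 0) onwards,
    # with the coldest period around day 75 (mid-January).
    days = np.arange(300)
    tmean = 10.0 - 12.0 * np.cos(2 * np.pi * (days - 75) / 365)
    print(one_phase_budburst(tmean), two_phase_budburst(tmean))
    ```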

  10. Straightforward and accurate technique for post-coupler stabilization in drift tube linac structures

    NASA Astrophysics Data System (ADS)

    Khalvati, Mohammad Reza; Ramberger, Suitbert

    2016-04-01

    The axial electric field of Alvarez drift tube linacs (DTLs) is known to be susceptible to variations due to static and dynamic effects like manufacturing tolerances and beam loading. Post-couplers are used to stabilize the accelerating fields of DTLs against tuning errors. Tilt sensitivity and its slope have been introduced as measures for the stability right from the invention of post-couplers but since then the actual stabilization has mostly been done by tedious iteration. In the present article, the local tilt-sensitivity slope TSn' is established as the principal measure for stabilization instead of tilt sensitivity or some visual slope, and its significance is developed on the basis of an equivalent-circuit diagram of the DTL. Experimental and 3D simulation results are used to analyze its behavior and to define a technique for stabilization that allows finding the best post-coupler settings with just four tilt-sensitivity measurements. CERN's Linac4 DTL Tank 2 and Tank 3 have been stabilized successfully using this technique. The final tilt-sensitivity error has been reduced from ±100 %/MHz down to ±3 %/MHz for Tank 2 and down to ±1 %/MHz for Tank 3. Finally, an accurate procedure for tuning the structure using slug tuners is discussed.

  11. Coding exon-structure aware realigner (CESAR) utilizes genome alignments for accurate comparative gene annotation.

    PubMed

    Sharma, Virag; Elghafari, Anas; Hiller, Michael

    2016-06-20

    Identifying coding genes is an essential step in genome annotation. Here, we utilize existing whole genome alignments to detect conserved coding exons and then map gene annotations from one genome to many aligned genomes. We show that genome alignments contain thousands of spurious frameshifts and splice site mutations in exons that are truly conserved. To overcome these limitations, we have developed CESAR (Coding Exon-Structure Aware Realigner), which realigns coding exons while considering the reading frame and splice sites of each exon. CESAR effectively avoids spurious frameshifts in conserved genes and detects 91% of shifted splice sites. This results in the identification of thousands of additional conserved exons, and 99% of the exons that lack inactivating mutations match real exons. Finally, to demonstrate the potential of using CESAR for comparative gene annotation, we applied it to 188 788 exons of 19 865 human genes to annotate human genes in 99 other vertebrates. These comparative gene annotations are available as a resource (http://bds.mpi-cbg.de/hillerlab/CESAR/). CESAR (https://github.com/hillerlab/CESAR/) can readily be applied to other alignments to accurately annotate coding genes in many other vertebrate and invertebrate genomes. PMID:27016733

  12. Final Report for "Accurate Numerical Models of the Secondary Electron Yield from Grazing-incidence Collisions".

    SciTech Connect

    Seth A Veitzer

    2008-10-21

    Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in a HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.

  13. Accurate and efficient halo-based galaxy clustering modelling with simulations

    NASA Astrophysics Data System (ADS)

    Zheng, Zheng; Guo, Hong

    2016-06-01

    Small- and intermediate-scale galaxy clustering can be used to establish the galaxy-halo connection to study galaxy formation and evolution and to tighten constraints on cosmological parameters. With the increasing precision of galaxy clustering measurements from ongoing and forthcoming large galaxy surveys, accurate models are required to interpret the data and extract relevant information. We introduce a method based on high-resolution N-body simulations to accurately and efficiently model the galaxy two-point correlation functions (2PCFs) in projected and redshift spaces. The basic idea is to tabulate all information of haloes in the simulations necessary for computing the galaxy 2PCFs within the framework of halo occupation distribution or conditional luminosity function. It is equivalent to populating galaxies to dark matter haloes and using the mock 2PCF measurements as the model predictions. Besides the accurate 2PCF calculations, the method is also fast and therefore enables an efficient exploration of the parameter space. As an example of the method, we decompose the redshift-space galaxy 2PCF into different components based on the type of galaxy pairs and show the redshift-space distortion effect in each component. The generalizations and limitations of the method are discussed.
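
    For context, the sketch below shows the halo occupation distribution ingredient that such tabulation methods build on: a standard mean central/satellite occupation as a function of halo mass, used to populate tabulated haloes with galaxies. The parametrization and parameter values are commonly used assumptions for illustration, not the authors' specific choices.

    ```python
    # Minimal sketch (standard HOD parametrization, assumed for illustration;
    # not the authors' code) of the mean occupation functions used when
    # populating tabulated haloes with galaxies.
    import numpy as np
    from scipy.special import erf

    def mean_ncen(log_m, log_mmin=12.0, sigma_logm=0.3):
        """Mean number of central galaxies in a halo of mass 10**log_m."""
        return 0.5 * (1.0 + erf((log_m - log_mmin) / sigma_logm))

    def mean_nsat(log_m, log_m0=12.0, log_m1=13.3, alpha=1.0):
        """Mean number of satellites (only in haloes that can host a central)."""
        m = 10.0 ** log_m
        nsat = np.where(m > 10.0 ** log_m0,
                        ((m - 10.0 ** log_m0) / 10.0 ** log_m1) ** alpha, 0.0)
        return mean_ncen(log_m) * nsat

    # Populating a catalogue of tabulated halo masses (Poisson satellites):
    rng = np.random.default_rng(1)
    log_mass = rng.uniform(11.5, 15.0, size=10000)            # placeholder haloes
    n_cen = rng.random(log_mass.size) < mean_ncen(log_mass)   # Bernoulli centrals
    n_sat = rng.poisson(mean_nsat(log_mass))                  # Poisson satellites
    print("mean galaxies per halo:", (n_cen + n_sat).mean())
    ```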

  14. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    PubMed

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
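
    As a worked baseline for scale, the snippet below evaluates the well-established beam-theory spring constant of a rectangular cantilever; the dimensions are illustrative, and the plate-theory correction derived in the paper (which additionally depends on Poisson's ratio and load position) is not reproduced here.

    ```python
    # Worked baseline (beam theory, not the plate-theory model of the paper):
    # normal spring constant of a rectangular cantilever, k = E*w*t**3/(4*L**3).
    # Illustrative dimensions; the paper's point is that the plate-theory value
    # additionally depends on Poisson's ratio and the load position.
    E = 169e9        # Young's modulus of silicon along <110>, Pa (approximate)
    w = 30e-6        # width, m
    t = 2e-6         # thickness, m
    L = 450e-6       # length, m

    k_beam = E * w * t**3 / (4 * L**3)
    print(f"beam-theory spring constant: {k_beam:.3f} N/m")  # ~0.111 N/m
    ```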

  15. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers

    PubMed Central

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-01-01

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson’s ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers. PMID:26510769

  16. Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters.

    PubMed

    Zagni, F; Cicoria, G; Lucconi, G; Infantino, A; Lodi, F; Marengo, M

    2014-12-01

    Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed the Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit. More precisely the "PENELOPE" EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken in order to accurately reproduce the geometrical details of the gas chamber and the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization current obtained with simulations were compared against experimental measurements; further tests were carried out, such as the comparison of the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with the discrepancies lower than 4% for all the tested parameters. This shows that an accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The obtained Monte Carlo model establishes a powerful tool for first instance determination of new calibration factors for non-standard radionuclides, for custom containers, when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regards to materials and geometrical details of the measuring setup, such as the ionization chamber itself or the containers configuration.

  17. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    PubMed

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even impossibility to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for model parameterization results in a much more accurate prediction of the latter, albeit with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore the strongest after 2050 in the southernmost regions. Our results call for urgent, large-scale measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. PMID:27272707

  18. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    PubMed

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even impossibility to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for model parameterization results in a much more accurate prediction of the latter, albeit with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore the strongest after 2050 in the southernmost regions. Our results call for urgent, large-scale measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future.

  19. SCAN: An Efficient Density Functional Yielding Accurate Structures and Energies of Diversely-Bonded Materials

    NASA Astrophysics Data System (ADS)

    Sun, Jianwei

    The accuracy and computational efficiency of the widely used Kohn-Sham density functional theory (DFT) are limited by the approximation to its exchange-correlation energy E_xc. The earliest local density approximation (LDA) overestimates the strengths of all bonds near equilibrium (even the vdW bonds). By adding the electron density gradient to model E_xc, generalized gradient approximations (GGAs) generally soften the bonds to give robust and overall more accurate descriptions, except for the vdW interaction which is largely lost. Further improvement for covalent, ionic, and hydrogen bonds can be obtained by the computationally more expensive hybrid GGAs, which mix GGAs with the nonlocal exact exchange. Meta-GGAs are still semilocal in computation and thus efficient. Compared to GGAs, they add the kinetic energy density that enables them to recognize and accordingly treat different bonds, which no LDA or GGA can. We show here that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-GGA improves significantly over LDA and the standard Perdew-Burke-Ernzerhof GGA for geometries and energies of diversely-bonded materials (including covalent, metallic, ionic, hydrogen, and vdW bonds) at comparable efficiency. Often SCAN matches or improves upon the accuracy of a hybrid functional, at almost-GGA cost. This work has been supported by NSF under DMR-1305135 and CNS-09-58854, and by DOE BES EFRC CCDM under DE-SC0012575.

  20. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed.
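
    The sketch below fits a Weibull-type saccharification curve of the form y(t) = y_max(1 - exp(-(t/λ)^n)) and reads off λ, the characteristic time used above as an overall performance measure. The data points and parameter values are synthetic placeholders, not the study's measurements.

    ```python
    # Minimal sketch (synthetic data, assumed functional form) of fitting a
    # Weibull-type saccharification curve and extracting the characteristic
    # time lambda used above as an overall performance measure.
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_yield(t, y_max, lam, n):
        """Cumulative hydrolysis yield: y_max * (1 - exp(-(t/lam)**n))."""
        return y_max * (1.0 - np.exp(-(t / lam) ** n))

    # Synthetic time course (h) with a little noise, standing in for measurements.
    rng = np.random.default_rng(42)
    t = np.array([2, 4, 8, 12, 24, 36, 48, 72], dtype=float)
    y_true = weibull_yield(t, y_max=0.85, lam=18.0, n=0.9)
    y_obs = y_true + rng.normal(0.0, 0.01, size=t.size)

    popt, _ = curve_fit(weibull_yield, t, y_obs, p0=[0.8, 20.0, 1.0],
                        bounds=([0.0, 1e-3, 1e-2], [1.0, 500.0, 5.0]))
    y_max_fit, lam_fit, n_fit = popt
    print(f"lambda = {lam_fit:.1f} h (smaller lambda => faster saccharification)")
    ```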

  1. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. PMID:26121186

  2. Time resolved diffuse optical spectroscopy with geometrically accurate models for bulk parameter recovery

    PubMed Central

    Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid

    2016-01-01

    A novel straightforward, accessible and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain being investigated is developed as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation. PMID:27699137

  3. Accurate Analytic Results for the Steady State Distribution of the Eigen Model

    NASA Astrophysics Data System (ADS)

    Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun

    2016-04-01

    The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have calculated analytic equations for the steady-state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the genome length. Our results can be applied to the case of small genome length N, as well as to cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
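
    The direct-numerics baseline that such analytic results are compared against can be sketched as follows: for small N, the steady state of the Eigen model is the leading eigenvector of the mutation-selection matrix. The single-peak fitness landscape and parameter values below are assumptions for illustration.

    ```python
    # Minimal sketch (assumed single-peak fitness landscape, small N): the
    # steady-state distribution of the Eigen model computed by direct numerics,
    # i.e. the leading eigenvector of the mutation-selection matrix. This is
    # the kind of baseline analytic O(1/N) results are compared against.
    import numpy as np
    from itertools import product

    N, mu, A = 8, 0.02, 4.0            # genome length, per-site error, peak fitness
    seqs = np.array(list(product([0, 1], repeat=N)))
    fitness = np.where((seqs == 0).all(axis=1), A, 1.0)   # master sequence 000...0

    # Mutation matrix: Q[i, j] = probability of copying sequence j into sequence i.
    hamming = (seqs[:, None, :] != seqs[None, :, :]).sum(axis=2)
    Q = (mu ** hamming) * ((1.0 - mu) ** (N - hamming))

    W = Q * fitness[None, :]           # column j weighted by fitness of parent j
    vals, vecs = np.linalg.eig(W)
    lead = vecs[:, np.argmax(vals.real)].real
    p_stationary = np.abs(lead) / np.abs(lead).sum()
    print("master-sequence frequency:", p_stationary[0])
    ```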

  4. Time resolved diffuse optical spectroscopy with geometrically accurate models for bulk parameter recovery

    PubMed Central

    Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid

    2016-01-01

    A novel straightforward, accessible and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain being investigated is developed as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation.

  5. Structural modeling of aircraft tires

    NASA Technical Reports Server (NTRS)

    Clark, S. K.; Dodge, R. N.; Lackey, J. I.; Nybakken, G. H.

    1973-01-01

    A theoretical and experimental investigation of the feasibility of determining the mechanical properties of aircraft tires from small-scale model tires was carried out. The theoretical results indicate that the macroscopic static and dynamic mechanical properties of aircraft tires can be accurately determined from the scale model tires, although the microscopic and thermal properties of aircraft tires cannot. The experimental investigation was conducted on a scale model of a 40 x 12, 14 ply rated, type 7 aircraft tire with a scaling factor of 8.65. The experimental results indicate that the scale model tire exhibited the same static mechanical properties as the prototype tire when compared on a dimensionless basis. The structural modeling concept discussed in this report is believed to be exact for mechanical properties of aircraft tires under static, rolling, and transient conditions.

  6. Combining Evolutionary Information and an Iterative Sampling Strategy for Accurate Protein Structure Prediction.

    PubMed

    Braun, Tatjana; Koehler Leman, Julia; Lange, Oliver F

    2015-12-01

    Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in form of intra-protein residue-residue contacts. Following this seminal result, much effort is put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained by contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 out of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs.

  7. Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.

    2016-06-01

    We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k < 10 h Mpc-1, and we present theoretically motivated extensions to cover non-minimally coupled scalar fields, massive neutrinos and Vainshtein screened modified gravity models that result in few per cent accurate power spectra for k < 10 h Mpc-1. For chameleon screened models, we achieve only 10 per cent accuracy for the same range of scales. Finally, we use our halo model to investigate degeneracies between different extensions to the standard cosmological model, finding that the impact of baryonic feedback on the non-linear matter power spectrum can be considered independently of modified gravity or massive neutrino extensions. In contrast, considering the impact of modified gravity and massive neutrinos independently results in biased estimates of power at the level of 5 per cent at scales k > 0.5 h Mpc-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.

  8. An accurate simulation model for single-photon avalanche diodes including important statistical effects

    NASA Astrophysics Data System (ADS)

    Qiuyang, He; Yue, Xu; Feifei, Zhao

    2013-10-01

    An accurate and complete circuit simulation model for single-photon avalanche diodes (SPADs) is presented. The derived model is not only able to simulate the static DC and dynamic AC behaviors of an SPAD operating in Geiger-mode, but also can emulate the second breakdown and the forward bias behaviors. In particular, it considers important statistical effects, such as dark-counting and after-pulsing phenomena. The developed model is implemented using the Verilog-A description language and can be directly performed in commercial simulators such as Cadence Spectre. The Spectre simulation results give a very good agreement with the experimental results reported in the open literature. This model shows a high simulation accuracy and very fast simulation rate.

  9. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    SciTech Connect

    Song, Shoujun Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-15

    Owing to the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, a nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulations under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
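
    The hybrid training idea described above can be sketched as follows: a GA-style global search supplies initial weights, which gradient descent then refines locally. The toy least-squares problem, the simplified selection/mutation scheme and the numerical gradient are assumptions for illustration; this is not the authors' WNN or SRM data.

    ```python
    # Schematic sketch (toy least-squares problem, not the authors' WNN or SRM
    # data): a GA-style global search provides initial weights, which plain
    # gradient descent then refines locally.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 3))
    w_true = np.array([2.0, -1.0, 0.5])
    y = np.tanh(X @ w_true)                      # toy nonlinear target

    def loss(w):
        return np.mean((np.tanh(X @ w) - y) ** 2)

    # --- Stage 1: crude GA-like global search (selection + mutation only) ---
    pop = rng.uniform(-4, 4, size=(40, 3))
    for _ in range(30):
        fitness = np.array([loss(p) for p in pop])
        parents = pop[np.argsort(fitness)[:10]]               # keep the best 10
        idx = rng.integers(0, 10, size=30)
        children = parents[idx] + 0.3 * rng.standard_normal((30, 3))  # mutate
        pop = np.vstack([parents, children])
    losses = [loss(p) for p in pop]
    w = pop[int(np.argmin(losses))].copy()                    # initial weights

    # --- Stage 2: gradient descent refinement from the GA initial point ---
    def num_grad(w, h=1e-6):
        g = np.zeros_like(w)
        for i in range(w.size):
            e = np.zeros_like(w); e[i] = h
            g[i] = (loss(w + e) - loss(w - e)) / (2 * h)
        return g

    for _ in range(500):
        w -= 0.5 * num_grad(w)
    print("refined weights:", w, " loss:", loss(w))
    ```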

  10. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    NASA Astrophysics Data System (ADS)

    Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-01

    Owing to the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, a nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulations under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.

  11. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hot-plates which takes advantage of modified Bessel functions, computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated to the undesired heating in the electrical contacts, are small (e.g., few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214

  12. A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.

    PubMed

    Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em

    2010-05-19

    Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary.

  13. A Multiscale Red Blood Cell Model with Accurate Mechanics, Rheology, and Dynamics

    PubMed Central

    Fedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em

    2010-01-01

    Abstract Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. PMID:20483330

  14. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    PubMed

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.

  15. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    PubMed

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756

  16. Accurate verification of the conserved-vector-current and standard-model predictions

    SciTech Connect

    Sirlin, A.; Zucchini, R.

    1986-10-20

    An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.

  17. Particle Image Velocimetry Measurements in Anatomically-Accurate Models of the Mammalian Nasal Cavity

    NASA Astrophysics Data System (ADS)

    Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.

    2012-11-01

    A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.

  18. Double cluster heads model for secure and accurate data fusion in wireless sensor networks.

    PubMed

    Fu, Jun-Song; Liu, Yun

    2015-01-19

    Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Different from traditional clustering models in WSNs, two cluster heads are selected after clustering for each cluster based on the reputation and trust system, and they perform data fusion independently of each other. The results are then sent to the base station, where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads are added to a blacklist and must be re-elected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps to identify and remove compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performs very well in terms of data fusion security and accuracy.
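
    The base-station check described above can be sketched as follows: two cluster heads fuse the same cluster's readings independently, and the base station blacklists the pair when a dissimilarity coefficient exceeds a preset threshold. The fusion rule (a plain average) and the dissimilarity measure (a relative difference) are assumptions for illustration, not necessarily the paper's exact definitions.

    ```python
    # Minimal sketch (the dissimilarity measure here is an assumed relative
    # difference, not necessarily the paper's definition): two cluster heads
    # fuse the same cluster's readings independently and the base station
    # blacklists them when their results disagree too much.
    import numpy as np

    def fuse(readings):
        """Simple data fusion: average of the cluster's sensor readings."""
        return float(np.mean(readings))

    def dissimilarity(f1, f2, eps=1e-9):
        """Relative difference between the two fusion results."""
        return abs(f1 - f2) / (max(abs(f1), abs(f2)) + eps)

    def base_station_check(f1, f2, threshold=0.05):
        """Return True if the cluster-head pair must be blacklisted/re-elected."""
        return dissimilarity(f1, f2) > threshold

    rng = np.random.default_rng(3)
    readings = 20.0 + rng.normal(0, 0.2, size=30)      # honest cluster readings
    honest = fuse(readings)
    tampered = fuse(readings) + 5.0                    # a compromised cluster head
    print(base_station_check(honest, fuse(readings)))  # False: results agree
    print(base_station_check(honest, tampered))        # True: re-elect heads
    ```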

  19. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate and yields better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with the pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computing the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART. The fast reconstruction speed of our method was achieved without compromising image quality.
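
    For readers unfamiliar with ART, the sketch below shows one Kaczmarz-style sweep consuming precomputed system-matrix rows; the speed-up discussed above concerns how the rows (ray-pixel intersection weights) are computed, not the update itself. The toy system and relaxation factor are assumptions for illustration.

    ```python
    # Minimal sketch (toy problem, not the paper's area-integral code): one ART
    # (Kaczmarz) sweep consuming precomputed system-matrix rows.
    import numpy as np

    def art_sweep(A, b, x, relax=1.0):
        """One pass over all rays: x <- x + relax*(b_i - a_i.x)/||a_i||^2 * a_i."""
        for a_i, b_i in zip(A, b):
            norm2 = a_i @ a_i
            if norm2 > 0:
                x = x + relax * (b_i - a_i @ x) / norm2 * a_i
        return x

    # Toy system: A holds ray-pixel intersection weights (areas or lengths),
    # b the measured projections, x the image being reconstructed.
    rng = np.random.default_rng(0)
    A = rng.random((60, 25))                 # 60 rays, 5x5 image
    x_true = rng.random(25)
    b = A @ x_true
    x = np.zeros(25)
    for _ in range(50):
        x = art_sweep(A, b, x, relax=0.5)
    print("max reconstruction error:", np.abs(x - x_true).max())
    ```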

  20. Applying an accurate spherical model to gamma-ray burst afterglow observations

    NASA Astrophysics Data System (ADS)

    Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.

    2013-05-01

    We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r⁻². We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and that carried by the magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.

  1. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R⁻⁵ term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
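
    The bookkeeping behind the idea above can be illustrated numerically: atomic point charges alone generally misrepresent the molecular dipole, and the cumulative atomic dipoles are defined so that their sum absorbs exactly the difference, μ_mol = Σ q_a r_a + Σ μ_a. The charge values and reference dipole below are made-up numbers for a CO-like diatomic, not actual CAMM results.

    ```python
    # Numeric illustration (made-up charge values for a CO-like diatomic, not
    # actual CAMM results): atomic point charges alone give a molecular dipole
    # that differs from the reference value; the cumulative atomic dipoles are
    # defined to absorb exactly that difference, preserving the molecular moment.
    import numpy as np

    # Geometry (a.u.) and two hypothetical charge models for C-O.
    r = np.array([[0.0, 0.0, 0.0],        # C
                  [0.0, 0.0, 2.132]])     # O, ~1.128 Angstrom
    mu_ref = np.array([0.0, 0.0, 0.048])  # assumed reference molecular dipole (a.u.)

    for name, q in [("model A", np.array([+0.25, -0.25])),
                    ("model B", np.array([+0.45, -0.45]))]:
        mu_charges = (q[:, None] * r).sum(axis=0)   # sum_a q_a r_a
        mu_atomic = mu_ref - mu_charges             # total atomic-dipole correction
        print(f"{name}: charge-only dipole_z = {mu_charges[2]:+.3f}, "
              f"atomic-dipole correction_z = {mu_atomic[2]:+.3f}")
    ```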

  2. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    PubMed Central

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  3. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull.

    PubMed

    Gröning, Flora; Jones, Marc E H; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E; Fagan, Michael J

    2013-07-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  4. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    The main existing methods for modeling multi-spiral surface geometry include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. However, these methods have shortcomings, such as a large amount of calculation, complex processes and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm for the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and expressed digitally, and coupling coalescence of the surfaces with multiple coupling point clusters is carried out in the Pro/E environment. Digitally accurate modeling of the spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end-section area of a three-blade end-milling cutter using the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in solving essentially the problems of considerable modeling errors in computer graphics and

  5. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    PubMed

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communication across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. The method is specific to multidomain proteins whose domains occur in more than one architectural context. Using the predicted residues to constrain the domain-domain interaction, rigid-body docking provided accurate full-length protein structures with the correct relative orientation of the domains. We believe that these results can be of considerable interest for rational protein and interaction design, apart from providing valuable information on the nature of the interactions.
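    As a generic illustration of the classifier component, the sketch below trains a Gaussian naïve Bayes model on hypothetical per-residue features (e.g., conservation-like scores) labelled as interface or non-interface; the features, labels and train/test split are synthetic stand-ins for the evolutionary and structural features used in the study.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Synthetic per-residue feature matrix: 200 residues, 3 made-up features.
        rng = np.random.default_rng(1)
        X = rng.random((200, 3))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)   # 1 = "interface"

        # Train on the first 150 residues, evaluate on the remaining 50.
        clf = GaussianNB().fit(X[:150], y[:150])
        pred = clf.predict(X[150:])
        print("held-out accuracy:", (pred == y[150:]).mean())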

  6. Towards an accurate understanding of UHMWPE visco-dynamic behaviour for numerical modelling of implants.

    PubMed

    Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges

    2014-04-01

    Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. Under in vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE led to compressive deformations of the insert that are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This again shows the importance of including creep behaviour in a constitutive model in order to predict the right level of surface deformation.
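    To make the wear-law discussion concrete, the sketch below accumulates worn volume with an incremental Archard-type law in which contact pressure and area are allowed to vary over time; the wear coefficient and the per-step inputs are assumptions for illustration, not parameters from this study.

        def archard_wear_volume(pressures, areas, slide_increments, k_w=1.0e-7):
            # Incremental Archard law with time-varying contact:
            #   dV = k_w * F * ds, with F = p * A evaluated at each step.
            # pressures [MPa], areas [mm^2], slide_increments [mm],
            # k_w [mm^3 / (N*mm)] -- all values here are illustrative only.
            volume = 0.0
            for p, a, ds in zip(pressures, areas, slide_increments):
                volume += k_w * p * a * ds
            return volume

        # Toy usage: pressure relaxes while contact area grows as the insert creeps.
        print(archard_wear_volume(pressures=[12.0, 10.0, 9.0],
                                  areas=[80.0, 95.0, 105.0],
                                  slide_increments=[5.0, 5.0, 5.0]))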

  7. Towards an accurate understanding of UHMWPE visco-dynamic behaviour for numerical modelling of implants.

    PubMed

    Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges

    2014-04-01

    Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. Under in vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE led to compressive deformations of the insert that are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This again shows the importance of including creep behaviour in a constitutive model in order to predict the right level of surface deformation.

  8. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    SciTech Connect

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-11-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor that controls the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we then went a step further with a blind test on a wild-type peptide and two highly immunogenic mutants; the protocol predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were predicted to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide was predicted to point laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observations strongly support a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.

  9. Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum

    NASA Astrophysics Data System (ADS)

    Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.

    2013-02-01

    Besides displaying their findings, modern museums organize educational programs that aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that end, 2D and 3D digital representations are gradually replacing the traditional recording of findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi, scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made using a structured-light scanner consisting of two machine-vision cameras for determining the geometry of the object, a high-resolution camera for recording the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and laborious procedure that includes the collection of geometric data, the creation of the surface, noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the application of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial and in-house software, made for automating various steps of the procedure, was used. The results of this procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time-consuming, and the use of various software packages requires the services of a specialist.

  10. Accurate localization of supernumerary mediastinal parathyroid adenomas by a combination of structural and functional imaging.

    PubMed

    Mackie, G C; Schlicht, S M

    2004-09-01

    Reoperation for refractory or recurrent hyperparathyroidism following parathyroidectomy carries the potential for increased morbidity and the possibility of failure to localize and remove the lesion intraoperatively. Reported herein are three cases demonstrating the combined use of sestamibi scintigraphy, CT and MR for accurate localization of mediastinal parathyroid adenomas.

  11. Accurate and computationally efficient mixing models for the simulation of turbulent mixing with PDF methods

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.; Jenny, Patrick

    2013-08-01

    Different simulation methods are applicable to study turbulent mixing. When applying probability density function (PDF) methods, turbulent transport and chemical reactions appear in closed form, which is not the case in second moment closure methods (RANS). Moreover, PDF methods provide the entire joint velocity-scalar PDF instead of a limited set of moments. In PDF methods, however, a mixing model is required to account for molecular diffusion. In joint velocity-scalar PDF methods, mixing models should also account for the joint velocity-scalar statistics, which is often underappreciated in applications. The interaction by exchange with the conditional mean (IECM) model accounts for these joint statistics, but requires velocity-conditional scalar means that are expensive to compute in spatially three-dimensional settings. In this work, two alternative mixing models are presented that provide more accurate PDF predictions at reduced computational cost compared to the IECM model, since no conditional moments have to be computed. All models are tested on different mixing benchmark cases and their computational efficiencies are inspected thoroughly. The benchmark cases involve statistically homogeneous and inhomogeneous settings dealing with three streams that are characterized by two passive scalars. The inhomogeneous case clearly illustrates the importance of accounting for joint velocity-scalar statistics in the mixing model. Failure to do so leads to significant errors in the resulting scalar means, variances and other statistics.
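    For orientation, the sketch below advances a particle ensemble with the classical interaction-by-exchange-with-the-mean (IEM) model, the unconditional relative of the IECM model mentioned above; the IECM variant would replace the ensemble mean with a velocity-conditional mean. Particle counts, time step and mixing time are arbitrary assumptions.

        import numpy as np

        def iem_mixing_step(phi, dt, t_mix):
            # IEM: relax every particle's scalars toward the ensemble mean at a
            # rate set by the mixing timescale t_mix.
            mean = phi.mean(axis=0)
            return phi - 0.5 * (dt / t_mix) * (phi - mean)

        rng = np.random.default_rng(2)
        phi = rng.random((1000, 2))   # 1000 notional particles, two passive scalars
        for _ in range(100):
            phi = iem_mixing_step(phi, dt=1e-3, t_mix=0.05)
        print(phi.var(axis=0))        # scalar variances decay under mixing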

  12. Algal productivity modeling: a step toward accurate assessments of full-scale algal cultivation.

    PubMed

    Béchet, Quentin; Chambonnière, Paul; Shilton, Andy; Guizard, Guillaume; Guieysse, Benoit

    2015-05-01

    A new biomass productivity model was parameterized for Chlorella vulgaris using short-term (<30 min) oxygen productivities from algal microcosms exposed to 6 light intensities (20-420 W/m²) and 6 temperatures (5-42 °C). The model was then validated against experimental biomass productivities recorded in bench-scale photobioreactors operated under 4 light intensities (30.6-74.3 W/m²) and 4 temperatures (10-30 °C), yielding an accuracy of ±15% over 163 days of cultivation. This modeling approach addresses major challenges associated with the accurate prediction of algal productivity at full-scale. Firstly, while most prior modeling approaches have only considered the impact of light intensity on algal productivity, the model herein validated also accounts for the critical impact of temperature. Secondly, this study validates a theoretical approach to convert short-term oxygen productivities into long-term biomass productivities. Thirdly, the experimental methodology used has the practical advantage of only requiring one day of experimental work for complete model parameterization. The validation of this new modeling approach is therefore an important step for refining feasibility assessments of algae biotechnologies.
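    As a shape-only illustration of a light- and temperature-dependent productivity surface, the sketch below multiplies a light-saturation curve by a Gaussian temperature response; this generic functional form and its parameter values are assumptions, not the parameterization validated in the study.

        import numpy as np

        def productivity(I, T, P_max=1.0, K_I=100.0, T_opt=27.0, sigma_T=8.0):
            # Generic productivity surface P(I, T): light saturation times a
            # Gaussian temperature response. I in W/m^2, T in degC; all
            # parameters are illustrative placeholders.
            light_term = I / (I + K_I)
            temp_term = np.exp(-((T - T_opt) / sigma_T) ** 2)
            return P_max * light_term * temp_term

        print(productivity(np.array([30.6, 74.3, 420.0]),
                           np.array([10.0, 20.0, 30.0])))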

  13. Algal productivity modeling: a step toward accurate assessments of full-scale algal cultivation.

    PubMed

    Béchet, Quentin; Chambonnière, Paul; Shilton, Andy; Guizard, Guillaume; Guieysse, Benoit

    2015-05-01

    A new biomass productivity model was parameterized for Chlorella vulgaris using short-term (<30 min) oxygen productivities from algal microcosms exposed to 6 light intensities (20-420 W/m²) and 6 temperatures (5-42 °C). The model was then validated against experimental biomass productivities recorded in bench-scale photobioreactors operated under 4 light intensities (30.6-74.3 W/m²) and 4 temperatures (10-30 °C), yielding an accuracy of ±15% over 163 days of cultivation. This modeling approach addresses major challenges associated with the accurate prediction of algal productivity at full-scale. Firstly, while most prior modeling approaches have only considered the impact of light intensity on algal productivity, the model herein validated also accounts for the critical impact of temperature. Secondly, this study validates a theoretical approach to convert short-term oxygen productivities into long-term biomass productivities. Thirdly, the experimental methodology used has the practical advantage of only requiring one day of experimental work for complete model parameterization. The validation of this new modeling approach is therefore an important step for refining feasibility assessments of algae biotechnologies. PMID:25502920

  14. Simple and accurate modelling of the gravitational potential produced by thick and thin exponential discs

    NASA Astrophysics Data System (ADS)

    Smith, R.; Flynn, C.; Candlish, G. N.; Fellhauer, M.; Gibson, B. K.

    2015-04-01

    We present accurate models of the gravitational potential produced by a radially exponential disc mass distribution. The models are produced by combining three separate Miyamoto-Nagai discs. Such models have been used previously to model the disc of the Milky Way, but here we extend this framework to allow its application to discs of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds than a double exponential disc treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disc by <0.4 per cent out to 4 disc scalelengths, and <1.9 per cent out to 10 disc scalelengths. We tabulate fitting parameters which facilitate construction of exponential discs for any scalelength, and a wide range of disc thickness (a user-friendly, web-based interface is also available). Our recipe is well suited for numerical modelling of the tidal effects of a giant disc galaxy on star clusters or dwarf galaxies. We consider three worked examples: the Milky Way thin and thick disc, and a discy dwarf galaxy.
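    For reference, the sketch below evaluates the sum of three Miyamoto-Nagai potentials, the building block of these models; the component masses and scale parameters are placeholders chosen only for illustration, not the fitted values tabulated in the paper.

        import numpy as np

        G = 4.300917270e-6  # gravitational constant in kpc * (km/s)^2 / Msun

        def miyamoto_nagai(R, z, M, a, b):
            # Potential of a single Miyamoto-Nagai disc (kpc, Msun, km/s units).
            return -G * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

        def triple_mn(R, z, components):
            # Sum of three Miyamoto-Nagai discs approximating an exponential disc;
            # `components` is a list of (M, a, b) tuples (placeholder values below).
            return sum(miyamoto_nagai(R, z, M, a, b) for M, a, b in components)

        example = [(4.0e10, 2.5, 0.3), (2.0e10, 5.0, 0.3), (1.0e10, 1.0, 0.3)]
        print(triple_mn(8.0, 0.0, example))   # potential in the midplane at R = 8 kpc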

  15. Fractional Order Modeling of Atmospheric Turbulence - A More Accurate Modeling Methodology for Aero Vehicles

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2014-01-01

    The presentation covers a recently developed methodology to model atmospheric turbulence as disturbances for aero vehicle gust loads and for controls development, such as for flutter and inlet shock position. The approach models atmospheric turbulence in its natural fractional order form, which provides more accuracy compared to traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the atmospheric turbulence fractional order modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.

  16. Oxygen-Enhanced MRI Accurately Identifies, Quantifies, and Maps Tumor Hypoxia in Preclinical Cancer Models.

    PubMed

    O'Connor, James P B; Boult, Jessica K R; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff J M; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P

    2016-02-15

    There is a clinical need for noninvasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning, and therapy monitoring. Oxygen-enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed "Oxy-R fraction") would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here, we demonstrate that OE-MRI signals are accurate, precise, and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia noninvasively and is immediately translatable to the clinic.
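    A minimal sketch of the headline biomarker follows: the Oxy-R fraction computed as the proportion of tumor voxels whose ΔR1 is not positive. The simple threshold and the synthetic map stand in for the statistical test and imaging data actually used.

        import numpy as np

        def oxy_r_fraction(delta_r1, tumor_mask, threshold=0.0):
            # Fraction of tumor voxels refractory to oxygen challenge: voxels
            # whose Delta R1 does not exceed the (illustrative) threshold.
            vals = delta_r1[tumor_mask]
            return np.mean(vals <= threshold)

        # Synthetic Delta R1 map and a square "tumor" region of interest.
        rng = np.random.default_rng(3)
        delta_r1 = rng.normal(0.02, 0.03, size=(64, 64))
        mask = np.zeros((64, 64), dtype=bool)
        mask[20:44, 20:44] = True
        print(oxy_r_fraction(delta_r1, mask))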

  17. An accurate parameterization of the infrared radiative properties of cirrus clouds for climate models

    SciTech Connect

    Fu, Q.; Sun, W.B.; Yang, P.

    1998-09-01

    An accurate parameterization is presented for the infrared radiative properties of cirrus clouds. For the single-scattering calculations, a composite scheme is developed for randomly oriented hexagonal ice crystals by comparing results from Mie theory, anomalous diffraction theory (ADT), the geometric optics method (GOM), and the finite-difference time domain technique. This scheme employs a linear combination of single-scattering properties from the Mie theory, ADT, and GOM, which is accurate for a wide range of size parameters. Following the approach of Q. Fu, the extinction coefficient, absorption coefficient, and asymmetry factor are parameterized as functions of the cloud ice water content and generalized effective size (D_ge). The present parameterization of the single-scattering properties of cirrus clouds is validated by examining the bulk radiative properties for a wide range of atmospheric conditions. Compared with reference results, the typical relative error in emissivity due to the parameterization is approximately 2.2%. The accuracy of this parameterization guarantees its reliability in applications to climate models. The present parameterization complements the scheme for the solar radiative properties of cirrus clouds developed by Q. Fu for use in numerical models.
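    For orientation, the sketch below evaluates an extinction coefficient in a generic Fu-type functional form, beta = IWC * (a0 + a1 / D_ge); the coefficient values are placeholders for illustration, not the band-dependent coefficients of this parameterization.

        def cirrus_extinction(iwc, d_ge, a0=-2.9e-4, a1=2.52):
            # Extinction coefficient [1/m] from ice water content (iwc, g/m^3)
            # and generalized effective size (d_ge, micrometres); a0 and a1 are
            # illustrative placeholder coefficients only.
            return iwc * (a0 + a1 / d_ge)

        print(cirrus_extinction(iwc=0.02, d_ge=60.0))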

  18. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    PubMed Central

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-01-01

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553

  19. Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
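    A minimal sketch of one way to launch Monte Carlo photons so that they converge to a Gaussian waist rather than a geometric point is given below; the sampling construction and beam parameters are assumptions for illustration, not necessarily the scheme of this paper.

        import numpy as np

        def sample_gaussian_focus(n_photons, w_lens, w0, focal_depth, rng):
            # Start each photon at the lens plane (z = 0) with a Gaussian radial
            # position of 1/e^2 radius w_lens, and aim it at a target point in the
            # focal plane (z = focal_depth) drawn from a Gaussian of 1/e^2 radius
            # w0, so the ensemble converges to a finite waist instead of a point.
            start = rng.normal(0.0, w_lens / 2.0, size=(n_photons, 2))
            target = rng.normal(0.0, w0 / 2.0, size=(n_photons, 2))
            direction = np.column_stack([target - start,
                                         np.full(n_photons, focal_depth)])
            direction /= np.linalg.norm(direction, axis=1, keepdims=True)
            return start, direction

        rng = np.random.default_rng(4)
        pos, dirs = sample_gaussian_focus(10000, w_lens=1.0, w0=0.01,
                                          focal_depth=5.0, rng=rng)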

  20. An Accurate Parameterization of the Infrared Radiative Properties of Cirrus Clouds for Climate Models.

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Yang, Ping; Sun, W. B.

    1998-09-01

    An accurate parameterization is presented for the infrared radiative properties of cirrus clouds. For the single-scattering calculations, a composite scheme is developed for randomly oriented hexagonal ice crystals by comparing results from Mie theory, anomalous diffraction theory (ADT), the geometric optics method (GOM), and the finite-difference time domain technique. This scheme employs a linear combination of single-scattering properties from the Mie theory, ADT, and GOM, which is accurate for a wide range of size parameters. Following the approach of Q. Fu, the extinction coefficient, absorption coefficient, and asymmetry factor are parameterized as functions of the cloud ice water content and generalized effective size (Dge). The present parameterization of the single-scattering properties of cirrus clouds is validated by examining the bulk radiative properties for a wide range of atmospheric conditions. Compared with reference results, the typical relative error in emissivity due to the parameterization is 2.2%. The accuracy of this parameterization guarantees its reliability in applications to climate models. The present parameterization complements the scheme for the solar radiative properties of cirrus clouds developed by Q. Fu for use in numerical models.

  1. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    PubMed

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-01-01

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553

  2. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements

    PubMed Central

    Seth, Ajay; Matias, Ricardo; Veloso, António P.; Delp, Scott L.

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual’s anthropometry. We compared the model to “gold standard” bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761
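    As a rough geometric illustration of the first two degrees of freedom, the sketch below maps abduction and elevation angles to a point on an ellipsoidal thorax surface; the semi-axis radii and the angle convention are hypothetical, not those of the OpenSim implementation.

        import numpy as np

        def scapula_contact_point(abduction, elevation, radii=(0.09, 0.14, 0.11)):
            # Point on an ellipsoidal thorax surface for given abduction and
            # elevation angles (radians). Radii are hypothetical semi-axes in
            # metres; the actual model customizes them to the subject.
            a, b, c = radii
            x = a * np.cos(elevation) * np.cos(abduction)
            y = b * np.cos(elevation) * np.sin(abduction)
            z = c * np.sin(elevation)
            return np.array([x, y, z])

        print(scapula_contact_point(np.deg2rad(30), np.deg2rad(10)))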

  3. Allowable forward model misspecification for accurate basis decomposition in a silicon detector based spectral CT.

    PubMed

    Bornefalk, Hans; Persson, Mats; Danielsson, Mats

    2015-03-01

    Material basis decomposition in the sinogram domain requires accurate knowledge of the forward model in spectral computed tomography (CT). Misspecifications beyond a certain limit will result in biased estimates and make quantum-limited (i.e., dominated by statistical noise) quantitative CT difficult. We present a method whereby users can determine the degree of allowed misspecification error in a spectral CT forward model while keeping quantification errors limited by the inherent statistical uncertainty. For a particular silicon detector based spectral CT system, we conclude that threshold determination is the most critical factor and that the bin edges need to be known to within 0.15 keV in order to be able to perform quantum-limited material basis decomposition.

  4. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spin-weight -2 spherical-harmonic Y_ℓm waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  5. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spin-weight -2 spherical-harmonic Y_ℓm waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979

  6. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    PubMed Central

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870

  7. Dielectric-dependent Density Functionals for Accurate Electronic Structure Calculations of Molecules and Solids

    NASA Astrophysics Data System (ADS)

    Skone, Jonathan; Govoni, Marco; Galli, Giulia

    Dielectric-dependent hybrid (DDH) functionals have recently been shown to yield highly accurate energy gaps and dielectric constants for a wide variety of solids, at a computational cost considerably lower than that of standard GW calculations. The fraction of exact exchange included in the definition of DDH functionals depends (self-consistently) on the dielectric constant of the material. In the present talk we introduce a range-separated (RS) version of DDH functionals where short- and long-range components are matched using material-dependent, non-empirical parameters. Comparing with state-of-the-art GW calculations and experiment, we show that such RS hybrids yield accurate electronic properties of both molecules and solids, including energy gaps, photoelectron spectra and absolute ionization potentials. This work was supported by NSF-CCI Grant Number NSF-CHE-0802907 and DOE-BES.

  8. Are Quasi-Steady-State Approximated Models Suitable for Quantifying Intrinsic Noise Accurately?

    PubMed Central

    Sengupta, Dola; Kar, Sandip

    2015-01-01

    Large gene regulatory networks (GRN) are often modeled with the quasi-steady-state approximation (QSSA) to reduce the huge computational time required for intrinsic noise quantification using the Gillespie stochastic simulation algorithm (SSA). However, the question remains whether a stochastic QSSA model measures the intrinsic noise as accurately as the SSA performed on a detailed mechanistic model. To address this issue, we have constructed mechanistic and QSSA models for a few frequently observed GRNs exhibiting switching behavior and performed stochastic simulations with them. Our results strongly suggest that the performance of a stochastic QSSA model in comparison to SSA performed on a mechanistic model critically relies on the absolute values of the mRNA and protein half-lives involved in the corresponding GRN. The extent of accuracy achieved by the stochastic QSSA model depends on the level of bursting frequency generated by the absolute value of the half-life of the mRNA, the protein, or both species. For the GRNs considered, the stochastic QSSA quantifies the intrinsic noise at the protein level with greater accuracy and for larger combinations of half-life values of mRNA and protein, whereas for mRNA a satisfactory accuracy level can only be reached for limited combinations of absolute half-life values. Further, we clearly demonstrate that the abundance levels of mRNA and protein hardly matter for such a comparison between QSSA and mechanistic models. Based on our findings, we conclude that a QSSA model can be a good choice for evaluating intrinsic noise for other GRNs as well, provided we make a rational choice based on experimental half-life values available in the literature. PMID:26327626
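    As a concrete reference point for what "SSA performed on a mechanistic model" involves, the sketch below runs Gillespie's algorithm on a minimal birth-death process (mRNA production and degradation); the rate constants are arbitrary illustrative values, not parameters of the GRNs studied.

        import numpy as np

        def gillespie_birth_death(k_tx=2.0, k_deg=0.1, t_end=200.0, seed=5):
            # Exact SSA for mRNA production (rate k_tx) and degradation (rate
            # k_deg * n); returns event times and copy numbers.
            rng = np.random.default_rng(seed)
            t, n = 0.0, 0
            times, counts = [0.0], [0]
            while t < t_end:
                rates = np.array([k_tx, k_deg * n])
                total = rates.sum()
                t += rng.exponential(1.0 / total)
                if rng.random() < rates[0] / total:
                    n += 1          # transcription event
                else:
                    n -= 1          # degradation event
                times.append(t)
                counts.append(n)
            return np.array(times), np.array(counts)

        times, counts = gillespie_birth_death()
        print("mean copy number:", counts.mean())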

  9. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    PubMed

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
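    A minimal sketch of the linear-nonlinear idea follows: project each multi-electrode stimulation pattern onto an electrical receptive field and map the result through a sigmoid to a spiking probability; the ERF, gain and bias here are random placeholders rather than quantities estimated from recordings, and in the paper the subspace comes from principal components analysis of responses.

        import numpy as np

        def ln_response(stimuli, erf, bias=0.0, gain=5.0):
            # Linear stage: project stimulation patterns onto the ERF.
            # Nonlinear stage: sigmoid mapping to a spiking probability.
            drive = stimuli @ erf
            return 1.0 / (1.0 + np.exp(-gain * (drive - bias)))

        rng = np.random.default_rng(6)
        erf = rng.normal(size=20)            # sensitivity to each of 20 electrodes
        erf /= np.linalg.norm(erf)
        stimuli = rng.normal(size=(8, 20))   # 8 arbitrary multi-electrode patterns
        print(ln_response(stimuli, erf))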

  10. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina

    PubMed Central

    Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish

    2016-01-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143

  11. Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model

    NASA Astrophysics Data System (ADS)

    Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.

    2007-05-01

    Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thin, higher-quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and the maximum rolling speed, but should simultaneously optimize the pass-schedule so that the desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern-recognition-style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem

  12. Development and application of accurate analytical models for single active electron potentials

    NASA Astrophysics Data System (ADS)

    Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas

    2015-05-01

    The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct an SAE potential, requiring a further approximation for the exchange-correlation functional. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curves to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).

  13. Structural Equation Model Trees

    ERIC Educational Resources Information Center

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2013-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

  14. Revisiting the blind tests in crystal structure prediction: accurate energy ranking of molecular crystals.

    PubMed

    Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J

    2009-12-24

    In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure.
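    To illustrate the form of such a lattice-energy ranking, the sketch below adds a generic damped pairwise -C6/r^6 dispersion term to per-structure DFT energies and sorts the candidates; the damping function and coefficients are illustrative placeholders, not the dispersion correction used by the hybrid DFT/MM method.

        import numpy as np

        def dispersion_energy(coords, c6, damping=20.0, r0=3.0):
            # Generic damped pairwise -C6/r^6 dispersion term; coords is an
            # (n, 3) array and c6 holds per-atom coefficients (placeholders).
            e = 0.0
            n = len(coords)
            for i in range(n):
                for j in range(i + 1, n):
                    r = np.linalg.norm(coords[i] - coords[j])
                    damp = 1.0 / (1.0 + np.exp(-damping * (r / r0 - 1.0)))
                    e -= damp * np.sqrt(c6[i] * c6[j]) / r**6
            return e

        def rank_structures(dft_energies, disp_energies):
            # Rank candidate crystal structures by total lattice energy
            # (DFT total energy plus dispersion correction), lowest first.
            total = np.asarray(dft_energies) + np.asarray(disp_energies)
            return np.argsort(total)

        # Toy usage: three candidate structures ranked by DFT + dispersion energy.
        print(rank_structures(dft_energies=[10.0, 9.7, 9.9],
                              disp_energies=[-0.40, -0.10, -0.50]))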

  15. Revisiting the blind tests in crystal structure prediction: accurate energy ranking of molecular crystals.

    PubMed

    Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J

    2009-12-24

    In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure. PMID:19950907

  16. Accurate and efficient modeling of global seismic wave propagation for an attenuative Earth model including the center

    NASA Astrophysics Data System (ADS)

    Toyokuni, Genti; Takenaka, Hiroshi

    2012-06-01

    We propose a method for modeling global seismic wave propagation through an attenuative Earth model including the center. This method enables accurate and efficient computations since it is based on the 2.5-D approach, which solves wave equations only on a 2-D cross section of the whole Earth and can correctly model 3-D geometrical spreading. We extend a numerical scheme for elastic waves in spherical coordinates using the finite-difference method (FDM) to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Since Earth material behaves as both an elastic solid and a viscous fluid, we should solve stress-strain relations of viscoelastic material, including attenuative structures. These relations represent the stress as a convolution integral in time, which has made viscoelasticity difficult to treat in time-domain computations such as the FDM. However, we now have a method using so-called memory variables, invented in the 1980s and subsequently improved in Cartesian coordinates. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce the multi-domain approach, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to violate the FD stability criterion near the Earth center. In addition, we propose a technique to avoid the singularity problem of the wave equation in spherical coordinates at the Earth center. We develop a scheme to calculate wavefield variables at this point, based on linear interpolation for the velocity-stress, staggered-grid FDM. This scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic
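    As a schematic of the memory-variable idea, the sketch below advances a single Zener-body memory variable with a first-order explicit step, replacing the time convolution by a local ordinary differential equation; the equation form, parameter names and Euler update are a generic textbook-style simplification, not the scheme of this paper.

        def update_memory_variable(r, strain, dt, tau_sigma, tau_eps, m_unrelaxed):
            # Generic single-Zener-body memory variable update:
            #   dr/dt = -(1/tau_sigma) * [ r + M_u * (tau_eps/tau_sigma - 1) * strain ]
            # with the stress then assembled as sigma = M_u * strain + r.
            # All names and the explicit Euler step are illustrative only.
            drdt = -(r + m_unrelaxed * (tau_eps / tau_sigma - 1.0) * strain) / tau_sigma
            return r + dt * drdt

        # Toy usage: relax a memory variable over a few explicit steps.
        r = 0.0
        for _ in range(5):
            r = update_memory_variable(r, strain=1.0e-6, dt=0.05,
                                       tau_sigma=0.5, tau_eps=0.6,
                                       m_unrelaxed=7.0e10)
        print(r)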

  17. Accurate metal-site structures in proteins obtained by combining experimental data and quantum chemistry.

    PubMed

    Ryde, Ulf

    2007-02-14

    The use of molecular mechanics calculations to supplement experimental data in standard X-ray crystallography and NMR refinements is discussed and it is shown that structures can be locally improved by the use of quantum chemical calculations. Such calculations can also be used to interpret the structures, e.g. to decide the protonation state of metal-bound ligands. They have shown that metal sites in crystal structures are frequently photoreduced or disordered, which makes the interpretation of the structures hard. Similar methods can be used for EXAFS refinements to obtain a full atomic structure, rather than a set of metal-ligand distances.

  18. Modelling ionospheric density structures

    NASA Technical Reports Server (NTRS)

    Schunk, R. W.; Sojka, J. J.

    1989-01-01

    Large-scale density structures are a common feature of the high-latitude ionosphere. The structures were observed in the dayside cusp, polar cap, and nocturnal auroral region over a range of altitudes, including the E-region, F-region and topside ionosphere. The origins, lifetimes and transport characteristics of large-scale density structures were studied with the aid of a three-dimensional, time-dependent ionospheric model. Blob creation due to particle precipitation, the effect that structured electric fields have on the ionosphere, and the lifetimes and transport characteristics of density structures for different seasonal, solar cycle, and interplanetary magnetic field (IMF) conditions were studied. The main conclusions drawn are: (1) the observed precipitation energy fluxes are sufficient for blob creation if the plasma is exposed to the precipitation for 5 to 10 minutes; (2) structured electric fields produce structured electron densities, ion temperatures, and ion composition; (3) the lifetime of an F-region density structure depends on several factors, including the initial location where it was formed, the magnitude of the perturbation, season, solar cycle and IMF; and (4) depending on the IMF, horizontal plasma convection can cause an initial structure to break up into multiple structures of various sizes, remain as a single distorted structure, or become stretched into elongated segments.

  19. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases makes it possible to act before the symptoms occur, for example by taking drugs to avoid the symptoms or by activating medical alarms. The prediction horizon is therefore an important parameter, since it must accommodate the pharmacokinetics of the medication or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate models with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data were acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782

  20. Do Ecological Niche Models Accurately Identify Climatic Determinants of Species Ranges?

    PubMed

    Searcy, Christopher A; Shaffer, H Bradley

    2016-04-01

    Defining species' niches is central to understanding their distributions and is thus fundamental to basic ecology and climate change projections. Ecological niche models (ENMs) are a key component of making accurate projections and include descriptions of the niche in terms of both response curves and rankings of variable importance. In this study, we evaluate Maxent's ranking of environmental variables based on their importance in delimiting species' range boundaries by asking whether these same variables also govern annual recruitment based on long-term demographic studies. We found that Maxent-based assessments of variable importance in setting range boundaries in the California tiger salamander (Ambystoma californiense; CTS) correlate very well with how important those variables are in governing ongoing recruitment of CTS at the population level. This strong correlation suggests that Maxent's ranking of variable importance captures biologically realistic assessments of factors governing population persistence. However, this result holds only when Maxent models are built using best-practice procedures and variables are ranked based on permutation importance. Our study highlights the need for building high-quality niche models and provides encouraging evidence that when such models are built, they can reflect important aspects of a species' ecology. PMID:27028071

  1. Universal model for accurate calculation of tracer diffusion coefficients in gas, liquid and supercritical systems.

    PubMed

    Lito, Patrícia F; Magalhães, Ana L; Gomes, José R B; Silva, Carlos M

    2013-05-17

    In this work, a new model is presented for the accurate calculation of binary diffusivities (D12) of solutes infinitely diluted in gas, liquid and supercritical solvents. It is based on a Lennard-Jones (LJ) model and contains two parameters: the molecular diameter of the solvent and a diffusion activation energy. The model is universal, since it is applicable to polar, weakly polar, and non-polar solutes and/or solvents over wide ranges of temperature and density. Its validation was accomplished with the largest database ever compiled, namely 487 systems with 8293 data points in total, covering polar (180 systems/2335 points) and non-polar or weakly polar (307 systems/5958 points) mixtures, for which the average errors were 2.65% and 2.97%, respectively. With regard to the physical states of the systems, the average deviations achieved were 1.56% for gaseous (73 systems/1036 points), 2.90% for supercritical (173 systems/4398 points), and 2.92% for liquid (241 systems/2859 points) mixtures. Furthermore, the model exhibited excellent predictive ability. Ten expressions from the literature were adopted for comparison, but provided worse results or were not applicable to polar systems. A spreadsheet for D12 calculation is provided online for users as Supplementary Data.

  2. An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion

    NASA Astrophysics Data System (ADS)

    Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.

    2014-11-01

    Many natural and industrial processes involve the dispersion of particles in turbulent flows. Despite recent theoretical progress in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on Eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations that is essential to correctly reproduce the dynamics of multi-particle dispersion, and it is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular, we show that pair and tetrad dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3D turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under-resolved turbulent velocity fields such as those employed in Eulerian LES. This work is part of research programme FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.
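
    As a schematic illustration of the general idea of supplying tracers with a stochastic sub-grid velocity, the sketch below integrates a single-time-scale Ornstein-Uhlenbeck process added to a resolved (LES-like) velocity field. The actual model in the abstract encodes multi-scale spatial and temporal correlations between particles, which this one-particle, single-scale toy does not attempt to reproduce; all parameters and the resolved-velocity placeholder are assumed.

```python
import numpy as np

# Toy Lagrangian tracer with a stochastic sub-grid velocity (Ornstein-Uhlenbeck process).
# Assumed parameters; a single-scale stand-in for the multi-scale model in the abstract.
dt, nt = 1e-3, 20000        # time step [s], number of steps
T_L = 0.05                  # sub-grid Lagrangian correlation time [s] (assumed)
sigma = 0.2                 # sub-grid rms velocity [m/s] (assumed)

def resolved_velocity(x, t):
    # Placeholder for the filtered LES velocity interpolated at the particle position.
    return np.array([np.sin(2.0 * np.pi * x[1]), np.cos(2.0 * np.pi * x[0]), 0.0])

rng = np.random.default_rng(1)
x = np.zeros(3)             # particle position
u_sgs = np.zeros(3)         # sub-grid velocity fluctuation carried by the particle

for it in range(nt):
    t = it * dt
    # OU update: relaxation toward zero plus a Wiener increment
    u_sgs += -u_sgs / T_L * dt + sigma * np.sqrt(2.0 * dt / T_L) * rng.standard_normal(3)
    x += (resolved_velocity(x, t) + u_sgs) * dt

print("final position:", x)
```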

  3. Absorption spectroscopy of the rubidium dimer in an overheated vapor: An accurate check of molecular structure and dynamics

    SciTech Connect

    Beuc, R.; Movre, M.; Horvatic, V.; Vadla, C.; Dulieu, O.; Aymar, M.

    2007-03-15

    Experimental studies of the absorption spectrum of the Rb{sub 2} dimer are performed in the 600-1100 nm wavelength range for temperatures between 615 and 745 K. The reduced absorption coefficient is measured by spatially resolved white light absorption in overheated rubidium vapor with a radial temperature gradient, which enables simultaneous measurements at different temperatures. Semiclassical and quantum spectral simulations are obtained by taking into account all possible transitions involving the potential curves stemming from the 5 {sup 2}S+5 {sup 2}S and 5 {sup 2}S+5 {sup 2}P asymptotes. The most accurate experimental potential curves are used where available, and newly calculated potential curves and transition dipole moments otherwise. The overall consistency of the theoretical model with the experimental interpretation is achieved only if the radial dependence of both the calculated transition dipole moments and the spin-orbit coupling is taken into account. This establishes low-resolution absorption spectroscopy as a valuable tool for checking the accuracy of molecular electronic structure calculations.

  4. Models in biology: ‘accurate descriptions of our pathetic thinking’

    PubMed Central

    2014-01-01

    In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484

  5. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    NASA Astrophysics Data System (ADS)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

    GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. The TPS is the closed-form solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between the data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolution - 0.2x0.2 degrees and 2.5 minutes, respectively. For TEC estimation, data from EPN and EUPOS reference stations are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm of March 17th, 2015, when the TEC level over Europe doubled compared with the preceding quiet days. The performance of the UWM-rt1 model was validated by (a) comparison with reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model: its post-fit residuals are lower by one order of magnitude than those of the IGS maps, and the accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
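
    A minimal sketch of the thin-plate-spline smoothing step is shown below, using SciPy's RBFInterpolator with synthetic TEC samples. The station locations, TEC values, and smoothing weight are assumptions for illustration, not the UWM-rt1 processing chain; the `smoothing` parameter plays the role of the trade-off between data fidelity and curvature described in the abstract.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical TEC samples at scattered ionospheric pierce-point locations (assumed data)
rng = np.random.default_rng(0)
lonlat = rng.uniform([10.0, 45.0], [30.0, 60.0], size=(200, 2))   # lon, lat [deg]
tec = 15.0 + 5.0 * np.sin(np.radians(lonlat[:, 0])) + rng.normal(0.0, 0.3, 200)  # [TECU]

# Thin plate spline fit; smoothing balances fit to the data against surface curvature
tps = RBFInterpolator(lonlat, tec, kernel="thin_plate_spline", smoothing=1.0)

# Evaluate on a 0.2 x 0.2 degree grid, matching the map resolution quoted above
lon = np.arange(10.0, 30.0 + 1e-9, 0.2)
lat = np.arange(45.0, 60.0 + 1e-9, 0.2)
grid = np.array(np.meshgrid(lon, lat)).reshape(2, -1).T
tec_map = tps(grid).reshape(len(lat), len(lon))
print(tec_map.shape, tec_map.mean())
```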

  6. Seeking: Accurate Measurement Techniques for Deep-Bone Density and Structure

    NASA Technical Reports Server (NTRS)

    Sibonga, Jean

    2009-01-01

    We are seeking a clinically useful technology with enough sensitivity to assess the microstructure of the "spongy" bone found in the marrow cavities of whole bones. However, this technology must work for skeletal sites surrounded by layers of soft tissue, such as the spine and the hip. Soft tissue interferes with conventional imaging, and using a more accessible area -- for example, the wrist or the ankle -- as a proxy for the less accessible skeletal regions will not be accurate. A non-radioactive technology is strongly preferred.

  7. Accurate coronary modeling procedure using 2D calibrated projections based on 2D centerline points on a single projection

    NASA Astrophysics Data System (ADS)

    Movassaghi, Babak; Rasche, Volker; Viergever, Max A.; Niessen, Wiro J.

    2004-05-01

    For the diagnosis of ischemic heart disease, accurate quantitative analysis of the coronary arteries is important. In coronary angiography, a number of projections is acquired from which 3D models of the coronaries can be reconstructed. A significant limitation of current 3D modeling procedures is the user interaction required to define the centerlines of the vessel structures in the 2D projections. Currently, the 3D centerlines of the coronary tree are calculated from interactively determined centerlines in two projections: for every interactively selected centerline point in a first projection, the corresponding point in a second projection has to be determined interactively by the user, with the correspondence obtained from the epipolar geometry. In this paper a method is proposed to retrieve all the information required for the modeling procedure from the interactive determination of the 2D centerline points in only one projection. For every selected 2D centerline point, the corresponding 3D centerline point is calculated by analyzing the 1D gray-value functions along the corresponding epipolar lines in space for all available 2D projections. This information is then used to build a 3D representation of the coronary arteries using coronary modeling techniques. The approach is illustrated on the analysis of calibrated phantom and calibrated coronary projection data.
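
    The epipolar constraint used to transfer a centerline point between calibrated projections can be sketched as follows: given a fundamental matrix F relating two views, a point x in the first projection maps to the epipolar line l' = F x in the second, along which the 1-D gray-value profile would then be sampled. The matrix and point below are placeholder values, not calibration results from the paper.

```python
import numpy as np

# Epipolar line for a 2-D centerline point, given the fundamental matrix between
# two calibrated projections. F and the point are illustrative placeholder values.
F = np.array([[0.0,  -1e-4,  2e-2],
              [1e-4,   0.0, -3e-2],
              [-2e-2, 3e-2,   1.0]])

x1 = np.array([256.0, 180.0, 1.0])    # centerline point in projection 1 (homogeneous)
l2 = F @ x1                           # epipolar line a*x + b*y + c = 0 in projection 2
a, b, c = l2

# Sample the line across the second image; the 1-D gray-value profile along these
# pixels would be analysed to locate the corresponding 3-D centerline point.
xs = np.arange(0, 512, dtype=float)
ys = -(a * xs + c) / b
pts = np.stack([xs, ys], axis=1)
print(pts[:3])
```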

  8. SMARTIES: Spheroids Modelled Accurately with a Robust T-matrix Implementation for Electromagnetic Scattering

    NASA Astrophysics Data System (ADS)

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2016-03-01

    SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements which may be calculated more accurately than with competing codes.

  9. Accurate calculation of conductive conductances in complex geometries for spacecrafts thermal models

    NASA Astrophysics Data System (ADS)

    Garmendia, Iñaki; Anglada, Eva; Vallejo, Haritz; Seco, Miguel

    2016-02-01

    The thermal subsystem of spacecraft and payloads is always designed with the help of Thermal Mathematical Models. In the case of the Thermal Lumped Parameter (TLP) method, the resulting non-linear system of equations is solved to calculate the temperature distribution and the heat flow between nodes. The accuracy of the results depends largely on the appropriate calculation of the conductive and radiative conductances. Several established methods exist for the determination of conductive conductances, but they present some limitations for complex geometries. Two new methods are proposed in this paper to calculate these conductive conductances accurately: the Extended Far Field method and the Mid-Section method. Both are based on a finite element calculation, but while the Extended Far Field method uses the calculation of node mean temperatures, the Mid-Section method is based on assuming specific temperature values. They are compared with traditionally used methods, showing the advantages of the two new methods.

  10. A fast and accurate PCA based radiative transfer model: Extension to the broadband shortwave region

    NASA Astrophysics Data System (ADS)

    Kopparla, Pushkar; Natraj, Vijay; Spurr, Robert; Shia, Run-Lie; Crisp, David; Yung, Yuk L.

    2016-04-01

    Accurate radiative transfer (RT) calculations are necessary for many Earth-atmosphere applications, from remote sensing retrieval to climate modeling. A Principal Component Analysis (PCA)-based spectral binning method has been shown to provide an order-of-magnitude increase in computational speed while maintaining an overall accuracy of 0.01% (compared to line-by-line calculations) over narrow spectral bands. In this paper, we extend the PCA method to RT calculations over the entire shortwave region of the spectrum from 0.3 to 3 microns. The region is divided into 33 spectral fields covering all major gas absorption regimes. We find that the RT runtimes are shorter by factors between 10 and 100, while root-mean-square errors are of order 0.01%.
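
    The essence of the PCA speed-up can be sketched as follows: within a spectral bin, the expensive multiple-scattering RT is evaluated only at the mean optical state and at small perturbations along the leading principal components, and the resulting correction to a cheap solver is mapped back to every spectral point through the PC scores. The two RT functions below are placeholders, and the first-order correction is a deliberate simplification of the published scheme, shown only to convey the structure of the method.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder radiative transfer solvers (assumptions, for illustration only).
def rt_expensive(profile):   # stand-in for a multi-stream, multiple-scattering solver
    return 1.0 + 0.5 * profile.sum()

def rt_cheap(profile):       # stand-in for a fast two-stream solver
    return 0.9 + 0.45 * profile.sum()

rng = np.random.default_rng(0)
props = rng.lognormal(size=(500, 6))          # optical properties at 500 spectral points

n_pc = 3
pca = PCA(n_components=n_pc).fit(props)
scores = pca.transform(props)                 # PC scores of every spectral point
mean_profile = pca.mean_

# Expensive RT only at the mean state and +/- perturbations along each PC
c0 = np.log(rt_expensive(mean_profile) / rt_cheap(mean_profile))
dc = np.zeros(n_pc)
for k in range(n_pc):
    dp = pca.components_[k]
    up = np.log(rt_expensive(mean_profile + dp) / rt_cheap(mean_profile + dp))
    dn = np.log(rt_expensive(mean_profile - dp) / rt_cheap(mean_profile - dp))
    dc[k] = 0.5 * (up - dn)                   # first-order sensitivity of the correction

# Cheap RT everywhere, corrected via the PC scores
I_fast = np.array([rt_cheap(p) for p in props])
I_est = I_fast * np.exp(c0 + scores @ dc)
I_ref = np.array([rt_expensive(p) for p in props])
print("max relative error: %.2e" % np.max(np.abs(I_est / I_ref - 1.0)))
```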

  11. An Accurately Stable Thermo-Hydro-Mechanical Model for Geo-Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Gambolati, G.; Castelletto, N.; Ferronato, M.

    2011-12-01

    In real-world applications involving complex 3D heterogeneous domains, the use of advanced numerical algorithms is of paramount importance for stably, accurately and efficiently solving the coupled system of partial differential equations governing the mass and energy balance in deformable porous media. The present communication discusses a novel coupled 3-D numerical model based on a suitable combination of Finite Elements (FEs), Mixed FEs (MFEs), and Finite Volumes (FVs), developed with the aim of stabilizing the numerical solution. Elemental pressures and temperatures, nodal displacements, and face-normal Darcy and Fourier fluxes are the selected primary variables. Such an approach provides an element-wise conservative velocity field, with both pore pressure and stress having the same order of approximation, and allows for the accurate prediction of sharp temperature convective fronts. In particular, the flow-deformation problem is addressed jointly by FEs and MFEs and is coupled to the heat transfer equation using an ad hoc time-splitting technique that separates the temperature evolution into two partial differential equations accounting for the convective and the diffusive contributions, respectively. The convective part is addressed by a FV scheme, which proves effective in treating sharp convective fronts, while the diffusive part is solved by a MFE formulation. A staggered technique is then implemented for the global solution of the coupled thermo-hydro-mechanical problem, solving the flow-deformation and the heat transport iteratively at each time step. Finally, the model is successfully applied to realistic problems involving geothermal energy extraction and injection.

  12. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    NASA Astrophysics Data System (ADS)

    Nielsen, Jens; d'Avezac, Mayeul; Hetherington, James; Stamatakis, Michail

    2013-12-01

    Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
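
    The core rejection-free KMC step (select an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time) can be sketched as below. This generic Gillespie-style loop deliberately omits the lattice bookkeeping and cluster-expansion energetics that Zacros handles; all rates are assumed example values.

```python
import numpy as np

# Generic rejection-free (Gillespie-style) KMC loop with assumed elementary-event rates.
# The lattice state, adsorbate lateral interactions and rate updates are omitted.
rng = np.random.default_rng(0)
rates = {"adsorption": 5.0, "desorption": 1.0, "diffusion": 20.0, "reaction": 0.5}  # [1/s]

t, t_end = 0.0, 1.0
counts = {k: 0 for k in rates}
while t < t_end:
    names = list(rates)
    r = np.array([rates[k] for k in names])
    r_tot = r.sum()
    # pick an event with probability proportional to its rate
    event = names[np.searchsorted(np.cumsum(r), rng.uniform(0.0, r_tot))]
    counts[event] += 1
    # advance time by an exponential waiting time with mean 1/r_tot
    t += rng.exponential(1.0 / r_tot)
    # (in a full simulation, the lattice state and the affected rates are updated here)

print(counts)
```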

  13. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional

    NASA Astrophysics Data System (ADS)

    Sun, Jianwei; Remsing, Richard C.; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L.; Perdew, John P.

    2016-09-01

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  14. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional.

    PubMed

    Sun, Jianwei; Remsing, Richard C; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L; Perdew, John P

    2016-09-01

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science. PMID:27554409

  15. Random generalized linear model: a highly accurate and interpretable ensemble predictor

    PubMed Central

    2013-01-01

    Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have found little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state of the art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
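
    The combination of bootstrap aggregation and random feature subspaces with an interpretable GLM base learner can be sketched with scikit-learn as below. This omits the optional interaction terms and forward variable selection that the RGLM method adds, and uses assumed synthetic data rather than the genomic benchmarks of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Bagged GLM ensemble with random feature subspaces (a simplified RGLM-like predictor).
X, y = make_classification(n_samples=400, n_features=50, n_informative=8, random_state=0)

ensemble = BaggingClassifier(
    LogisticRegression(max_iter=1000),  # interpretable GLM base learner
    n_estimators=100,                   # number of bootstrap replicates
    max_features=0.5,                   # random subspace of features per replicate
    bootstrap=True,
    random_state=0,
)
print("CV accuracy: %.3f" % cross_val_score(ensemble, X, y, cv=5).mean())
```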

  16. An accurate in vitro model of the E. coli envelope.

    PubMed

    Clifton, Luke A; Holt, Stephen A; Hughes, Arwel V; Daulton, Emma L; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R; Webster, John R P; Kinane, Christian J; Lakey, Jeremy H

    2015-10-01

    Gram-negative bacteria are an increasingly serious source of antibiotic-resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir-Blodgett and Langmuir-Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:26331292

  17. Accurate characterization of delay discounting: a multiple model approach using approximate Bayesian model selection and a unified discounting measure.

    PubMed

    Franck, Christopher T; Koffarnus, Mikhail N; House, Leanna L; Bickel, Warren K

    2015-01-01

    The study of delay discounting, or valuation of future rewards as a function of delay, has contributed to understanding the behavioral economics of addiction. Accurate characterization of discounting can be furthered by statistical model selection given that many functions have been proposed to measure future valuation of rewards. The present study provides a convenient Bayesian model selection algorithm that selects the most probable discounting model among a set of candidate models chosen by the researcher. The approach assigns the most probable model for each individual subject. Importantly, effective delay 50 (ED50) functions as a suitable unifying measure that is computable for and comparable between a number of popular functions, including both one- and two-parameter models. The combined model selection/ED50 approach is illustrated using empirical discounting data collected from a sample of 111 undergraduate students with models proposed by Laibson (1997); Mazur (1987); Myerson & Green (1995); Rachlin (2006); and Samuelson (1937). Computer simulation suggests that the proposed Bayesian model selection approach outperforms the single model approach when data truly arise from multiple models. When a single model underlies all participant data, the simulation suggests that the proposed approach fares no worse than the single model approach.
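
    For the one-parameter models in this set, ED50 (the delay at which a reward loses half its value) has simple closed forms, e.g. ED50 = 1/k for Mazur's hyperbolic model V = A/(1 + kD) and ED50 = ln(2)/k for Samuelson's exponential model V = A*exp(-kD). A minimal sketch comparing two fitted models through this common measure is shown below; the indifference-point data and the least-squares fitting routine are assumptions, not the Bayesian selection procedure of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit two candidate discounting models to synthetic indifference-point data and
# compare them through the unified ED50 measure. Data values are assumed.
delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0])      # delay [days]
values = np.array([0.95, 0.80, 0.55, 0.35, 0.25, 0.15])      # fraction of immediate amount

hyperbolic  = lambda D, k: 1.0 / (1.0 + k * D)                # Mazur (1987)
exponential = lambda D, k: np.exp(-k * D)                     # Samuelson (1937)

for name, model, ed50 in [("hyperbolic",  hyperbolic,  lambda k: 1.0 / k),
                          ("exponential", exponential, lambda k: np.log(2.0) / k)]:
    (k,), _ = curve_fit(model, delays, values, p0=[0.01])
    rss = np.sum((values - model(delays, k)) ** 2)
    print("%-11s k=%.4f  RSS=%.4f  ED50=%.1f days" % (name, k, rss, ed50(k)))
```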

  18. Fast and Accurate Radiative Transfer Calculations Using Principal Component Analysis for (Exo-)Planetary Retrieval Models

    NASA Astrophysics Data System (ADS)

    Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.

    2015-12-01

    Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting the use of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are performed only for the few optical states corresponding to the most important principal components, and correction factors are applied to approximate the radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for the major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes, each calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work

  19. Polyallelic structural variants can provide accurate, highly informative genetic markers focused on diagnosis and therapeutic targets: Accuracy vs. Precision.

    PubMed

    Roses, A D

    2016-02-01

    Structural variants (SVs) include all insertions, deletions, and rearrangements in the genome, with several common types of nucleotide repeats including single sequence repeats, short tandem repeats, and insertion-deletion length variants. Polyallelic SVs provide highly informative markers for association studies with well-phenotyped cohorts. SVs can influence gene regulation by affecting epigenetics, transcription, splicing, and/or translation. Accurate assays of polyallelic SV loci are required to define the range and allele frequency of variable length alleles. PMID:26517180

  20. HAAD: A quick algorithm for accurate prediction of hydrogen atoms in protein structures.

    PubMed

    Li, Yunqi; Roy, Ambrish; Zhang, Yang

    2009-08-20

    Hydrogen constitutes nearly half of all atoms in proteins and their positions are essential for analyzing hydrogen-bonding interactions and refining atomic-level structures. However, most protein structures determined by experiments or computer prediction lack hydrogen coordinates. We present a new algorithm, HAAD, to predict the positions of hydrogen atoms based on the positions of heavy atoms. The algorithm is built on the basic rules of orbital hybridization followed by the optimization of steric repulsion and electrostatic interactions. We tested the algorithm using three independent data sets: ultra-high-resolution X-ray structures, structures determined by neutron diffraction, and NOE proton-proton distances. Compared with the widely used programs CHARMM and REDUCE, HAAD has a significantly higher accuracy, with the average RMSD of the predicted hydrogen atoms to the X-ray and neutron diffraction structures decreased by 26% and 11%, respectively. Furthermore, hydrogen atoms placed by HAAD have more matches with the NOE restraints and fewer clashes with heavy atoms. The average CPU cost by HAAD is 18 and 8 times lower than that of CHARMM and REDUCE, respectively. The significant advantage of HAAD in both the accuracy and the speed of the hydrogen additions should make HAAD a useful tool for the detailed study of protein structure and function. Both an executable and the source code of HAAD are freely available at http://zhang.bioinformatics.ku.edu/HAAD.

  1. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  2. Accurate calculations of the hydration free energies of druglike molecules using the reference interaction site model.

    PubMed

    Palmer, David S; Sergiievskyi, Volodymyr P; Jensen, Frank; Fedorov, Maxim V

    2010-07-28

    We report on the results of testing the reference interaction site model (RISM) for the estimation of the hydration free energy of druglike molecules. The optimum model was selected after testing of different RISM free energy expressions combined with different quantum mechanics and empirical force-field methods of structure optimization and atomic partial charge calculation. The final model gave a systematic error with a standard deviation of 2.6 kcal/mol for a test set of 31 molecules selected from the SAMPL1 blind challenge set [J. P. Guthrie, J. Phys. Chem. B 113, 4501 (2009)]. After parametrization of this model to include terms for the excluded volume and the number of atoms of different types in the molecule, the root mean squared error for a test set of 19 molecules was less than 1.2 kcal/mol.

  3. Fine structure in proton radioactivity: An accurate tool to ascertain the breaking of axial symmetry in {sup 145}Tm

    SciTech Connect

    Arumugam, P.; Ferreira, L. S.; Maglione, E.

    2008-10-15

    With a proper formalism for proton emission from triaxially deformed nuclei, we perform exact calculations of decay widths for the decays to the ground and first excited 2{sup +} states of the daughter nucleus. Our results for the rotational spectrum, decay width and fine structure in the case of the nucleus {sup 145}Tm lead for the first time to an accurate identification of triaxial deformation using proton emission. This work also demonstrates the advantage of proton emission over conventional probes for studying nuclear structure at the proton drip-line.

  4. Accurate numerical forward model for optimal retracking of SIRAL2 SAR echoes over open ocean

    NASA Astrophysics Data System (ADS)

    Phalippou, L.; Demeestere, F.

    2011-12-01

    The SAR mode of SIRAL-2 on board CryoSat-2 has been designed to measure primarily sea ice and continental ice (Wingham et al. 2005). In 2005, K. Raney (KR, 2005) pointed out the improvements brought by SAR altimetry for the open ocean. KR's results were mostly based on 'rule of thumb' considerations of the speckle noise reduction due to the higher PRF and to speckle decorrelation after SAR processing. In 2007, Phalippou and Enjolras (PE, 2007) provided the theoretical background for optimal retracking of SAR echoes over ocean, with a focus on forward modelling of the power waveforms. The accuracies of the geophysical parameters (range, significant wave height, and backscattering coefficient) retrieved from SAR altimeter data were derived accounting for accurate modelling of the SAR echo shape and speckle noise. The step towards optimal retracking using a numerical forward model (NFM) was also pointed out. An NFM of the power waveform avoids analytical approximations, which helps minimise geophysically dependent biases in the retrieval. NFMs have been used for many years, in operational meteorology in particular, for retrieving temperature and humidity profiles from IR and microwave radiometers, as the radiative transfer function is complex (Eyre, 1989). So far this technique has not been used in conventional ocean altimetry, as analytical models (e.g. Brown's model) were found to give sufficient accuracy. However, although an NFM seems desirable even for conventional nadir altimetry, it becomes inevitable if one wishes to process SAR altimeter data, as the transfer function is too complex to be approximated by a simple analytical function. This was clearly demonstrated in PE 2007. The paper describes the background to SAR data retracking over the open ocean. Since PE 2007, improvements have been made to the forward model, and it is shown that the altimeter on-ground and in-flight characterisation (e.g. antenna pattern, range impulse response, azimuth impulse response

  5. Towards more accurate isoscapes encouraging results from wine, water and marijuana data/model and model/model comparisons.

    NASA Astrophysics Data System (ADS)

    West, J. B.; Ehleringer, J. R.; Cerling, T.

    2006-12-01

    Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, in turn, has led to increased urgency in the scientific community to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterning over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and, as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment, driven with spatially continuous global rasters of precipitation and climate normals, largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across

  6. Accurate Modeling of the Terrestrial Gamma-Ray Background for Homeland Security Applications

    SciTech Connect

    Sandness, Gerald A.; Schweppe, John E.; Hensley, Walter K.; Borgardt, James D.; Mitchell, Allison L.

    2009-10-24

    The Pacific Northwest National Laboratory has developed computer models to simulate the use of radiation portal monitors to screen vehicles and cargo for the presence of illicit radioactive material. The gamma radiation emitted by the vehicles or cargo containers must often be measured in the presence of a relatively large gamma-ray background mainly due to the presence of potassium, uranium, and thorium (and progeny isotopes) in the soil and surrounding building materials. This large background is often a significant limit to the detection sensitivity for items of interest and must be modeled accurately for analyzing homeland security situations. Calculations of the expected gamma-ray emission from a disk of soil and asphalt were made using the Monte Carlo transport code MCNP and were compared to measurements made at a seaport with a high-purity germanium detector. Analysis revealed that the energy spectrum of the measured background could not be reproduced unless the model included gamma rays coming from the ground out to distances of at least 300 m. The contribution from beyond about 50 m was primarily due to gamma rays that scattered in the air before entering the detectors rather than passing directly from the ground to the detectors. These skyshine gamma rays contribute tens of percent to the total gamma-ray spectrum, primarily at energies below a few hundred keV. The techniques that were developed to efficiently calculate the contributions from a large soil disk and a large air volume in a Monte Carlo simulation are described and the implications of skyshine in portal monitoring applications are discussed.

  7. Modeling of Non-Gravitational Forces for Precise and Accurate Orbit Determination

    NASA Astrophysics Data System (ADS)

    Hackel, Stefan; Gisinger, Christoph; Steigenberger, Peter; Balss, Ulrich; Montenbruck, Oliver; Eineder, Michael

    2014-05-01

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with very high accuracy. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency Integrated Geodetic and Occultation Receiver (IGOR) onboard the spacecraft. The increasing demand for precise radar products relies on validation methods, which require precise and accurate orbit products. An analysis of the orbit quality by means of internal and external validation methods on long and short timescales shows systematics that reflect deficits in the employed force models. Following the analysis of these deficits, possible solution strategies are highlighted in the presentation. The employed Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for gravitational and non-gravitational forces. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). The satellite TerraSAR-X flies on a dusk-dawn orbit at an altitude of approximately 510 km. Due to this constellation, the Sun illuminates the satellite almost constantly, which causes strong across-track accelerations in the plane perpendicular to the solar rays. The indirect effect of the solar radiation is the Earth Radiation Pressure (ERP). This force depends on the sunlight reflected by the illuminated Earth surface (visible spectrum) and on the emission of the Earth in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed. The scope of
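
    At its simplest, the direct SRP acceleration can be approximated with a "cannonball" model, which is much cruder than the detailed satellite macro model described above; the sketch below uses assumed spacecraft properties and an assumed geometry, and ignores eclipses, ERP, and surface-dependent optical properties.

```python
import numpy as np

# Cannonball solar radiation pressure acceleration (a simplification of the macro model
# discussed in the abstract). Area, mass and C_R are assumed example values.
P_SUN = 4.56e-6        # solar radiation pressure at 1 AU [N/m^2]
AU = 1.495978707e11    # astronomical unit [m]

def srp_acceleration(r_sat, r_sun, area=3.2, mass=1230.0, c_r=1.3):
    """SRP acceleration [m/s^2]; r_sat, r_sun given in an inertial frame [m]."""
    d = r_sat - r_sun                      # Sun-to-satellite vector
    dist = np.linalg.norm(d)
    return c_r * (area / mass) * P_SUN * (AU / dist) ** 2 * d / dist

# Example: satellite at ~510 km altitude, Sun placed along +X at 1 AU (assumed geometry)
r_sun = np.array([AU, 0.0, 0.0])
r_sat = np.array([0.0, 6878e3, 0.0])
print(srp_acceleration(r_sat, r_sun))      # ~1e-8 m/s^2 order of magnitude
```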

  8. Physical resist models and their calibration: their readiness for accurate EUV lithography simulation

    NASA Astrophysics Data System (ADS)

    Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.

    2010-04-01

    In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is performed on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and the selection of calibration patterns. The model is calibrated on 12 process-window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process conditions is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and, where necessary, correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~1.0 nm. We show which elements are important to obtain a well-calibrated model, discuss the impact of 3D mask effects on the Bossung tilt, and demonstrate that a correct representation of the flare level during calibration is important to achieve high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one-dimensional structures only), its accuracy is validated on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end-of-line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model in which EUV tool-specific signatures are taken into account.

  9. Structural model of channelrhodopsin.

    PubMed

    Watanabe, Hiroshi C; Welke, Kai; Schneider, Franziska; Tsunoda, Satoshi; Zhang, Feng; Deisseroth, Karl; Hegemann, Peter; Elstner, Marcus

    2012-03-01

    Channelrhodopsins (ChRs) are light-gated cation channels that mediate ion transport across membranes in microalgae (vectorial catalysis). ChRs are now widely used for the analysis of neural networks in tissues and living animals with light (optogenetics). For elucidation of functional mechanisms at the atomic level, as well as for further engineering and application, a detailed structure is urgently needed. In the absence of an experimental structure, here we develop a structural ChR model based on several molecular computational approaches, capitalizing on characteristic patterns in amino acid sequences of ChR1, ChR2, Volvox ChRs, Mesostigma ChR, and the recently identified ChR of the halophilic alga Dunaliella salina. In the present model, we identify remarkable structural motifs that may explain fundamental electrophysiological properties of ChR2, ChR1, and their mutants, and in a crucial validation of the model, we successfully reproduce the excitation energy predicted by absorption spectra. PMID:22241469

  10. Accurate prediction of the refractive index of polymers using first principles and data modeling

    NASA Astrophysics Data System (ADS)

    Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes

    Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus requires the polarizability and number density of each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and extrapolation schemes towards the polymer limit. For the number density, we devised an exceedingly efficient machine-learning approach to correlate the polymer structure with the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We show that the proposed combination of physical and data modeling is both successful and highly economical for characterizing a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
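
    The Lorentz-Lorenz relation mentioned above connects the refractive index n to the molecular polarizability alpha and the number density N via (n^2 - 1)/(n^2 + 2) = N*alpha/(3*eps0). A minimal sketch of inverting this relation is shown below; the values of alpha and the number density are placeholders standing in for the quantum-chemically computed polarizability and the ML-predicted packing density of the actual workflow.

```python
import numpy as np

# Refractive index from the Lorentz-Lorenz equation: (n^2 - 1)/(n^2 + 2) = N*alpha/(3*eps0).
# alpha and the number density below are assumed placeholder values.
EPS0 = 8.8541878128e-12          # vacuum permittivity [F/m]

def refractive_index(alpha_si, number_density):
    """alpha_si: polarizability per repeat unit [C m^2/V]; number_density: [1/m^3]."""
    L = number_density * alpha_si / (3.0 * EPS0)
    if L >= 1.0:
        raise ValueError("Lorentz-Lorenz factor >= 1: unphysical input")
    return np.sqrt((1.0 + 2.0 * L) / (1.0 - L))

alpha = 1.5e-39                   # assumed polarizability of one repeat unit [C m^2/V]
n_density = 4.0e27                # assumed repeat units per m^3
print("predicted n = %.3f" % refractive_index(alpha, n_density))
```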

  11. CRYSpred: accurate sequence-based protein crystallization propensity prediction using sequence-derived structural characteristics.

    PubMed

    Mizianty, Marcin J; Kurgan, Lukasz A

    2012-01-01

    Relatively low success rates of X-ray crystallography, which is the most popular method for solving proteins structures, motivate development of novel methods that support selection of tractable protein targets. This aspect is particularly important in the context of the current structural genomics efforts that allow for a certain degree of flexibility in the target selection. We propose CRYSpred, a novel in-silico crystallization propensity predictor that uses a set of 15 novel features which utilize a broad range of inputs including charge, hydrophobicity, and amino acid composition derived from the protein chain, and the solvent accessibility and disorder predicted from the protein sequence. Our method outperforms seven modern crystallization propensity predictors on three, independent from training dataset, benchmark test datasets. The strong predictive performance offered by the CRYSpred is attributed to the careful design of the features, utilization of the comprehensive set of inputs, and the usage of the Support Vector Machine classifier. The inputs utilized by CRYSpred are well-aligned with the existing rules-of-thumb that are used in the structural genomics studies. PMID:21919861

  12. CRYSpred: accurate sequence-based protein crystallization propensity prediction using sequence-derived structural characteristics.

    PubMed

    Mizianty, Marcin J; Kurgan, Lukasz A

    2012-01-01

    Relatively low success rates of X-ray crystallography, which is the most popular method for solving proteins structures, motivate development of novel methods that support selection of tractable protein targets. This aspect is particularly important in the context of the current structural genomics efforts that allow for a certain degree of flexibility in the target selection. We propose CRYSpred, a novel in-silico crystallization propensity predictor that uses a set of 15 novel features which utilize a broad range of inputs including charge, hydrophobicity, and amino acid composition derived from the protein chain, and the solvent accessibility and disorder predicted from the protein sequence. Our method outperforms seven modern crystallization propensity predictors on three, independent from training dataset, benchmark test datasets. The strong predictive performance offered by the CRYSpred is attributed to the careful design of the features, utilization of the comprehensive set of inputs, and the usage of the Support Vector Machine classifier. The inputs utilized by CRYSpred are well-aligned with the existing rules-of-thumb that are used in the structural genomics studies.

  13. Condensed Antenna Structural Models for Dynamics Analysis

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1985-01-01

    Condensed degree-of-freedom models are compared with large degree-of-freedom finite-element models of a representative antenna-tipping and alidade structure, for both locked and free-rotor configurations. It is shown that: (1) the effective-mass models accurately reproduce the lower-mode natural frequencies of the finite element model; (2) frequency responses for the two types of models are in agreement up to at least 16 rad/s for specific points; and (3) transient responses computed for the same points are in good agreement. It is concluded that the effective-mass model, which best represents the five lower modes of the finite-element model, is a sufficient representation of the structure for future incorporation with a total servo control structure dynamic simulation.

  14. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-06-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of "family of secular functions" that we herein call "adaptive mode observers", is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee no loss and high precision at the same time of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation by using a smaller number of layers aided by the concept of "turning point", our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.

  15. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of `family of secular functions' that we herein call `adaptive mode observers' is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee no loss and high precision at the same time of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation by using a smaller number of layers aided by the concept of `turning point', our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.

  16. Precise and accurate assessment of uncertainties in model parameters from stellar interferometry. Application to stellar diameters

    NASA Astrophysics Data System (ADS)

    Lachaume, Regis; Rabus, Markus; Jordan, Andres

    2015-08-01

    In stellar interferometry, the assumption that the observables can be treated as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and there is no generic implementation available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random which interferograms and which calibrator stars are used, and which errors are assumed on the calibrator diameters, and by performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
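
    The bootstrap idea described above can be sketched in a few lines: resample the calibrated observables with replacement, refit the model each time, and read the parameter PDF off the collection of fits. The snippet below does this for synthetic squared visibilities and a uniform-disk diameter; the wavelength, baselines and noise level are invented, and the real pipeline resamples interferograms and calibrator stars rather than individual points.

      # Minimal sketch of the bootstrap idea (synthetic data, uniform-disk model);
      # the real pipeline resamples interferograms and calibrators, not just points.
      import numpy as np
      from scipy.special import j1
      from scipy.optimize import curve_fit

      MAS = np.pi / 180.0 / 3600.0 / 1000.0      # one milliarcsecond in radians
      LAM = 1.65e-6                              # assumed observing wavelength (m), H band

      def v2_uniform_disk(baseline_m, theta_mas):
          """Squared visibility of a uniform disk of angular diameter theta (mas)."""
          x = np.pi * baseline_m * theta_mas * MAS / LAM
          return (2.0 * j1(x) / x) ** 2

      # Synthetic calibrated observables: baselines, V^2 and a few percent of noise.
      rng = np.random.default_rng(1)
      baselines = np.linspace(20.0, 130.0, 25)
      theta_true = 1.2                                        # mas (made-up value)
      v2_obs = v2_uniform_disk(baselines, theta_true) * (1 + 0.03 * rng.standard_normal(25))

      def fit_theta(b, v2):
          popt, _ = curve_fit(v2_uniform_disk, b, v2, p0=[1.0])
          return popt[0]

      # Bootstrap: resample the observables with replacement, refit each time.
      samples = []
      for _ in range(2000):
          idx = rng.integers(0, len(baselines), len(baselines))
          samples.append(fit_theta(baselines[idx], v2_obs[idx]))
      samples = np.array(samples)
      print(f"theta = {samples.mean():.3f} mas, 68% interval "
            f"{np.percentile(samples, 16):.3f}-{np.percentile(samples, 84):.3f} mas")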

  17. Towards more accurate wind and solar power prediction by improving NWP model physics

    NASA Astrophysics Data System (ADS)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    nighttime to well mixed conditions during the day presents a big challenge to NWP models. Fast decrease and successive increase in hub-height wind speed after sunrise, and the formation of nocturnal low level jets will be discussed. For PV, the life cycle of low stratus clouds and fog is crucial. Capturing these processes correctly depends on the accurate simulation of diffusion or vertical momentum transport and the interaction with other atmospheric and soil processes within the numerical weather model. Results from Single Column Model simulations and 3d case studies will be presented. Emphasis is placed on wind forecasts; however, some references to highlights concerning the PV-developments will also be given. *) ORKA: Optimierung von Ensembleprognosen regenerativer Einspeisung für den Kürzestfristbereich am Anwendungsbeispiel der Netzsicherheitsrechnungen **) EWeLiNE: Erstellung innovativer Wetter- und Leistungsprognosemodelle für die Netzintegration wetterabhängiger Energieträger, www.projekt-eweline.de

  18. The Dirac equation in electronic structure calculations: Accurate evaluation of DFT predictions for actinides

    SciTech Connect

    Wills, John M; Mattsson, Ann E

    2012-06-06

    Brooks, Johansson, and Skriver, using the LMTO-ASA method and considerable insight, were able to explain many of the ground state properties of the actinides. In the many years since this work was done, electronic structure calculations of increasing sophistication have been applied to actinide elements and compounds, attempting to quantify the applicability of DFT to actinides and actinide compounds and to try to incorporate other methodologies (i.e. DMFT) into DFT calculations. Through these calculations, the limits of both available density functionals and ad hoc methodologies are starting to become clear. However, it has also become clear that approximations used to incorporate relativity are not adequate to provide rigorous tests of the underlying equations of DFT, not to mention ad hoc additions. In this talk, we describe the result of full-potential LMTO calculations for the elemental actinides, comparing results obtained with a full Dirac basis with those obtained from scalar-relativistic bases, with and without variational spin-orbit. This comparison shows that the scalar relativistic treatment of actinides does not have sufficient accuracy to provide a rigorous test of theory and that variational spin-orbit introduces uncontrolled errors in the results of electronic structure calculations on actinide elements.

  19. Bayesian Lasso for Semiparametric Structural Equation Models

    PubMed Central

    Guo, Ruixin; Zhu, Hongtu; Chow, Sy-Miin; Ibrahim, Joseph G.

    2011-01-01

    Summary There has been great interest in developing nonlinear structural equation models and associated statistical inference procedures, including estimation and model selection methods. In this paper a general semiparametric structural equation model (SSEM) is developed in which the structural equation is composed of nonparametric functions of exogenous latent variables and fixed covariates on a set of latent endogenous variables. A basis representation is used to approximate these nonparametric functions in the structural equation and the Bayesian Lasso method coupled with a Markov Chain Monte Carlo (MCMC) algorithm is used for simultaneous estimation and model selection. The proposed method is illustrated using a simulation study and data from the Affective Dynamics and Individual Differences (ADID) study. Results demonstrate that our method can accurately estimate the unknown parameters and correctly identify the true underlying model. PMID:22376150
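
    A rough, non-Bayesian sketch of the same ingredients, basis expansion plus L1 shrinkage for term selection, is given below using an ordinary cross-validated lasso on a polynomial basis. It is only the deterministic analogue of the Bayesian Lasso prior used in the paper; the latent-variable structure and the MCMC machinery of the SSEM are not reproduced, and the simulated data are placeholders.

      # Deterministic analogue of the idea: represent a nonparametric effect with a
      # basis expansion and let an L1 (lasso) penalty select the needed terms.
      # The paper itself uses a Bayesian lasso prior with MCMC in a latent-variable SEM.
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(0)
      n = 300
      xi = rng.normal(size=n)                                   # stand-in for an exogenous variable
      eta = np.sin(xi) + 0.5 * xi + 0.3 * rng.normal(size=n)    # outcome with a smooth effect

      # Polynomial basis (degrees 1..8); the lasso should keep only the useful columns.
      degrees = np.arange(1, 9)
      X = np.column_stack([xi ** d for d in degrees])
      X = (X - X.mean(axis=0)) / X.std(axis=0)                  # standardize so one penalty fits all

      model = LassoCV(cv=5).fit(X, eta)
      for d, coef in zip(degrees, model.coef_):
          print(f"degree {d}: coefficient {coef:+.3f}")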

  20. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    SciTech Connect

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang E-mail: jing.xiong@siat.ac.cn; Hu, Ying; Xiong, Jing E-mail: jing.xiong@siat.ac.cn; Zhang, Jianwei

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
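
    For reference, the volume-overlap metrics quoted above are straightforward to compute from two binary masks; a minimal sketch (with hypothetical toy volumes and an assumed 0.25 mm isotropic voxel) is shown below. The surface-distance metrics (ASSD, RMSSSD, MSSD) additionally require surface extraction or distance transforms and are omitted here.

      # Sketch of the volume-overlap metrics quoted above for two binary tooth masks
      # (hypothetical arrays); surface-distance metrics need surfaces/EDTs and are omitted.
      import numpy as np

      def volume_difference(seg, ref, voxel_volume_mm3=1.0):
          """VD: absolute volume difference in mm^3 between segmentation and reference."""
          return abs(int(seg.sum()) - int(ref.sum())) * voxel_volume_mm3

      def dice_coefficient(seg, ref):
          """DSC: 2|A intersect B| / (|A| + |B|), in percent."""
          inter = np.logical_and(seg, ref).sum()
          return 200.0 * inter / (seg.sum() + ref.sum())

      # Toy volumes standing in for a segmented and a manually labelled tooth.
      ref = np.zeros((40, 40, 40), dtype=bool); ref[10:30, 10:30, 10:30] = True
      seg = np.zeros_like(ref);                 seg[11:30, 10:31, 10:30] = True
      print(f"VD  = {volume_difference(seg, ref, 0.25**3):.2f} mm^3")
      print(f"DSC = {dice_coefficient(seg, ref):.1f} %")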

  1. Three dimensional printing as an effective method of producing anatomically accurate models for studies in thermal ecology.

    PubMed

    Watson, Charles M; Francis, Gamal R

    2015-07-01

    Hollow copper models painted to match the reflectance of the animal subject are standard in thermal ecology research. While the copper electroplating process results in accurate models, it is relatively time consuming, uses caustic chemicals, and the models are often anatomically imprecise. Although the decreasing cost of 3D printing can potentially allow the reproduction of highly accurate models, the thermal performance of 3D printed models has not been evaluated. We compared the cost, accuracy, and performance of copper and 3D printed lizard models and found that the performance of the two model types was statistically identical in both open and closed habitats. We also found that the 3D printed models are more standardized, lighter, more durable, and less expensive than the copper electroformed models. PMID:25965016

  2. A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures.

    PubMed

    Mangipudi, K R; Radisch, V; Holzer, L; Volkert, C A

    2016-04-01

    We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an accuracy much smaller than a pixel. The accuracy of the new method based on actual slice thickness is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using traditional FIB-tomography method employing constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. PMID:26906523
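
    The thickness-from-width idea can be illustrated with a few lines of arithmetic. The sketch below assumes a symmetric wedge of known half-angle, so that each increase in the imaged cross-section width maps to a removed depth of Δw/(2 tan α); the angle, the measured widths and this particular geometric relation are illustrative assumptions, not values or formulas taken from the paper.

      # Hedged sketch of the wedge idea: for an assumed symmetric wedge of half-angle
      # alpha, the imaged cross-section width grows by 2*tan(alpha) per unit of removed
      # depth, so each slice thickness follows from the measured width increase.
      # Numbers and the exact geometry are hypothetical.
      import numpy as np

      alpha = np.deg2rad(15.0)                                    # assumed wedge half-angle
      widths_nm = np.array([500.0, 518.0, 534.0, 552.0, 569.0])   # measured widths per image

      slice_thickness_nm = np.diff(widths_nm) / (2.0 * np.tan(alpha))
      z_positions_nm = np.concatenate([[0.0], np.cumsum(slice_thickness_nm)])
      print("slice thicknesses (nm):", np.round(slice_thickness_nm, 1))
      print("cumulative depth (nm): ", np.round(z_positions_nm, 1))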

  3. Towards accurate kinetic modeling of prompt NO formation in hydrocarbon flames via the NCN pathway

    SciTech Connect

    Sutton, Jeffrey A.; Fleming, James W.

    2008-08-15

    A basic kinetic mechanism is presented for the first time that can predict, with reasonable accuracy, the prompt-NO precursor NCN identified by experiment, while still producing postflame NO results that can be calculated as accurately as, or more accurately than, through the former HCN pathway. The basic NCN submechanism should be a starting point for future refinement of NCN kinetics and prompt-NO formation.

  4. A Fibre-Reinforced Poroviscoelastic Model Accurately Describes the Biomechanical Behaviour of the Rat Achilles Tendon

    PubMed Central

    Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna

    2015-01-01

    Background Computational models of Achilles tendons can help in understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon's biomechanical response. However, available models of the Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, can depict the experimentally observed biomechanical behaviour of the rat Achilles tendon. Materials and Methods We developed a new material model of the Achilles tendon which considers the tendon's main constituents, namely water, proteoglycan matrix and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computations of large deformations of the tendon, and the collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created of 9 rat Achilles tendons from an animal experiment, and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between the experimental force data and the model output. Results and Conclusions All specimen models were successfully fitted to the experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant and softer tendon behaviour at reduced strain rates compared to higher strain rates, which produce a stiff and brittle tendon response. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour, where larger strains produced slower relaxation rates compared to smaller strain levels. Our simulations showed that the collagen fibres in the Achilles tendon are the main load-bearing component during tensile loading, where the orientation of the collagen fibres plays an important role for the tendon
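
    The calibration step, minimizing the misfit between simulated and measured force, can be sketched generically as below, here with a toy two-term stress-relaxation model and synthetic data in place of the specimen-specific finite element models of the paper. Only the fitting loop is representative; the model form, parameter names and numbers are placeholders.

      # Generic calibration sketch: fit a toy two-term stress-relaxation model to a
      # measured force trace by minimizing the residual, analogous in spirit to the
      # paper's calibration of a much richer FE model. Data and model are synthetic.
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 60.0, 200)                        # time (s)
      force_exp = (8.0 + 3.0 * np.exp(-t / 2.0) + 2.0 * np.exp(-t / 20.0)
                   + 0.1 * rng.standard_normal(t.size))      # stand-in "experiment"

      def model_force(params, t):
          f_inf, a1, tau1, a2, tau2 = params
          return f_inf + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

      def residuals(params):
          return model_force(params, t) - force_exp

      fit = least_squares(residuals, x0=[5.0, 1.0, 1.0, 1.0, 10.0],
                          bounds=([0.0] * 5, [np.inf] * 5))
      rms = np.sqrt(np.mean(fit.fun ** 2))
      print("fitted parameters:", np.round(fit.x, 2), " RMS =", round(rms, 3))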

  5. Nuclear structure with accurate chiral perturbation theory nucleon-nucleon potential: Application to 6Li and 10B

    SciTech Connect

    Navratil, P; Caurier, E

    2003-10-14

    The authors calculate properties of the A = 6 system using the accurate charge-dependent nucleon-nucleon (NN) potential at fourth order of chiral perturbation theory. By application of the ab initio no-core shell model (NCSM) and a variational calculation in the harmonic oscillator basis with basis sizes up to 16 ℏΩ, they obtain a ⁶Li binding energy of 28.5(5) MeV and a converged excitation spectrum. Also, they calculate properties of ¹⁰B using the same NN potential in a basis space of up to 8 ℏΩ. The results are consistent with results obtained by standard accurate NN potentials and demonstrate a deficiency of Hamiltonians consisting of only two-body terms. At this order of chiral perturbation theory, three-body terms appear. It is expected that inclusion of such terms in the Hamiltonian will improve agreement with experiment.

  6. Towards an accurate model of the redshift-space clustering of haloes in the quasi-linear regime

    NASA Astrophysics Data System (ADS)

    Reid, Beth A.; White, Martin

    2011-11-01

    Observations of redshift-space distortions in spectroscopic galaxy surveys offer an attractive method for measuring the build-up of cosmological structure, which depends both on the expansion rate of the Universe and on our theory of gravity. The statistical precision with which redshift-space distortions can now be measured demands better control of our theoretical systematic errors. While many recent studies focus on understanding dark matter clustering in redshift space, galaxies occupy special places in the universe: dark matter haloes. In our detailed study of halo clustering and velocity statistics in 67.5 h⁻³ Gpc³ of N-body simulations, we uncover a complex dependence of redshift-space clustering on halo bias. We identify two distinct corrections which affect the halo redshift-space correlation function on quasi-linear scales (~30-80 h⁻¹ Mpc): the non-linear mapping between real-space and redshift-space positions, and the non-linear suppression of power in the velocity divergence field. We model the first non-perturbatively using the scale-dependent Gaussian streaming model, which we show is accurate at the <0.5 (2) per cent level in transforming real-space clustering and velocity statistics into redshift space on scales s > 10 (s > 25) h⁻¹ Mpc for the monopole (quadrupole) halo correlation functions. The dominant correction to the Kaiser limit in this model scales like b³. We use standard perturbation theory to predict the real-space pairwise halo velocity statistics. Our fully analytic model is accurate at the 2 per cent level only on scales s > 40 h⁻¹ Mpc for the range of halo masses we studied (with b = 1.4-2.8). We find that recent models of halo redshift-space clustering that neglect the corrections from the bispectrum and higher order terms from the non-linear real-space to redshift-space mapping will not have the accuracy required for current and future observational analyses. Finally, we note that our simulation results confirm the essential but non
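
    The non-perturbative ingredient mentioned above, the scale-dependent Gaussian streaming model, maps real-space clustering and pairwise velocity statistics into redshift space through a line-of-sight convolution. The sketch below evaluates that convolution numerically with crude toy inputs (a regularized power-law correlation function, an ad hoc infall curve and a constant dispersion); it reproduces the structure of the mapping, not the perturbation-theory inputs or the quoted accuracy of the paper.

      # Sketch of the (scale-dependent) Gaussian streaming mapping used in this class of
      # models: the redshift-space correlation is a line-of-sight convolution of the
      # real-space clustering with a Gaussian pairwise-velocity distribution.
      # xi(r), v12(r) and sigma12 below are crude toys, not the paper's PT inputs.
      import numpy as np

      def xi_real(r):
          # Toy real-space correlation function, floored at r = 1 Mpc/h to stay integrable.
          return (np.maximum(r, 1.0) / 5.0) ** -1.8

      def v12(r):
          # Toy mean pairwise infall velocity, expressed in Mpc/h (i.e. divided by aH).
          return -4.0 * r / (1.0 + (r / 10.0) ** 2)

      sigma12 = 5.0          # toy pairwise dispersion, Mpc/h

      def xi_redshift(s_perp, s_par, y_max=120.0, n=4001):
          """1 + xi_s = integral over real-space LOS separation y of
             [1 + xi(r)] * Normal(s_par | y + mu*v12(r), sigma12), with r^2 = s_perp^2 + y^2."""
          y = np.linspace(-y_max, y_max, n)
          r = np.sqrt(s_perp ** 2 + y ** 2)
          mu = y / r
          mean = y + mu * v12(r)
          kernel = np.exp(-0.5 * ((s_par - mean) / sigma12) ** 2) / (np.sqrt(2 * np.pi) * sigma12)
          integrand = (1.0 + xi_real(r)) * kernel
          return np.sum(integrand) * (y[1] - y[0]) - 1.0

      for s in (20.0, 40.0, 80.0):
          # small transverse offset (0.01) avoids r = 0 in the pure line-of-sight case
          print(f"s = {s:5.1f} Mpc/h: xi_s(transverse) = {xi_redshift(s, 0.0):.4f}, "
                f"xi_s(line of sight) = {xi_redshift(0.01, s):.4f}")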

  7. CC/DFT Route toward Accurate Structures and Spectroscopic Features for Observed and Elusive Conformers of Flexible Molecules: Pyruvic Acid as a Case Study.

    PubMed

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Cimino, Paola; Penocchio, Emanuele; Puzzarini, Cristina

    2015-09-01

    The structures and relative stabilities as well as the rotational and vibrational spectra of the three low-energy conformers of pyruvic acid (PA) have been characterized using a state-of-the-art quantum-mechanical approach designed for flexible molecules. By making use of the available experimental rotational constants for several isotopologues of the most stable PA conformer, Tc-PA, the semiexperimental equilibrium structure has been derived. The latter provides a reference for the pure theoretical determination of the equilibrium geometries for all conformers, thus confirming for these structures an accuracy of 0.001 Å and 0.1 deg for bond lengths and angles, respectively. Highly accurate relative energies of all conformers (Tc-, Tt-, and Ct-PA) and of the transition states connecting them are provided along with the thermodynamic properties at low and high temperatures, thus leading to conformational enthalpies accurate to 1 kJ mol⁻¹. Concerning microwave spectroscopy, rotational constants accurate to about 20 MHz are provided for the Tt- and Ct-PA conformers, together with the computed centrifugal-distortion constants and dipole moments required to simulate their rotational spectra. For Ct-PA, vibrational frequencies in the mid-infrared region accurate to 10 cm⁻¹ are reported along with theoretical estimates for the transitions in the near-infrared range, and the corresponding infrared spectrum including fundamental transitions, overtones, and combination bands has been simulated. In addition to the new data described above, theoretical results for the Tc- and Tt-PA conformers are compared with all available experimental data to further confirm the accuracy of the hybrid coupled-cluster/density functional theory (CC/DFT) protocol applied in the present study. Finally, we discuss in detail the accuracy of computational models fully based on double-hybrid DFT functionals (mainly at the B2PLYP/aug-cc-pVTZ level) that avoid the use of very expensive CC

  9. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    SciTech Connect

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  10. Morphometric analysis of Russian Plain's small lakes on the base of accurate digital bathymetric models

    NASA Astrophysics Data System (ADS)

    Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana

    2016-04-01

    Lake morphometry refers to the physical factors (shape, size, structure, etc.) that describe the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters, as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. During recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom-depth resolution together with GPS coordinate determination. Digital bathymetric models with a 10 × 10 m spatial grid have been created for several small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of the depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.
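
    The level-surface-volume (hypsographic) information mentioned above follows directly from such a gridded bathymetric model: for each hypothetical drawdown, count the cells that remain submerged and integrate the remaining water column. A minimal sketch on a synthetic 10 × 10 m depth grid is given below; the bowl-shaped lake is invented for illustration.

      # Sketch of the kind of level-area-volume (hypsographic) curves such digital
      # bathymetric models make possible; the depth grid here is synthetic.
      import numpy as np

      cell_area = 10.0 * 10.0                      # grid cell area, m^2 (10 x 10 m grid)
      ny, nx = 200, 300
      yy, xx = np.mgrid[0:ny, 0:nx]
      # Synthetic bowl-shaped lake, maximum depth ~12 m, area ~2 km^2.
      depth = 12.0 * np.clip(1 - ((xx - 150) / 100.0) ** 2 - ((yy - 100) / 60.0) ** 2, 0, None)

      levels = np.arange(0.0, depth.max() + 0.5, 0.5)     # water level lowered by z metres
      for z in levels[::4]:
          wet = depth > z                                 # cells still submerged
          area_km2 = wet.sum() * cell_area / 1e6
          volume_mln_m3 = np.sum(depth[wet] - z) * cell_area / 1e6
          print(f"drawdown {z:4.1f} m: area {area_km2:6.3f} km^2, "
                f"volume {volume_mln_m3:7.3f} x10^6 m^3")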

  11. Voronoi-cell finite difference method for accurate electronic structure calculation of polyatomic molecules on unstructured grids

    SciTech Connect

    Son, Sang-Kil

    2011-03-01

    We introduce a new numerical grid-based method on unstructured grids in three-dimensional real space to investigate the electronic structure of polyatomic molecules. The Voronoi-cell finite difference (VFD) method realizes a discrete Laplacian operator based on Voronoi cells and their natural neighbors, featuring high adaptivity and simplicity. To resolve the multicenter Coulomb singularity in all-electron calculations of polyatomic molecules, this method utilizes highly adaptive molecular grids which consist of spherical atomic grids. It provides accurate and efficient solutions for the Schrödinger equation and the Poisson equation with the all-electron Coulomb potentials regardless of the coordinate system and the molecular symmetry. For numerical examples, we assess the accuracy of the VFD method for the electronic structures of one-electron polyatomic systems, and apply the method to density-functional theory for many-electron polyatomic molecules.

  12. TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow

    USGS Publications Warehouse

    Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.

    1993-01-01

    A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations, in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results, suitable interactive graphics are also an essential tool.

  13. Affine-response model of molecular solvation of ions: Accurate predictions of asymmetric charging free energies

    PubMed Central

    Bardhan, Jaydeep P.; Jungwirth, Pavel; Makowski, Lee

    2012-01-01

    Two mechanisms have been proposed to drive asymmetric solvent response to a solute charge: a static potential contribution similar to the liquid-vapor potential, and a steric contribution associated with a water molecule's structure and charge distribution. In this work, we use free-energy perturbation molecular-dynamics calculations in explicit water to show that these mechanisms act in complementary regimes; the large static potential (∼44 kJ/mol/e) dominates asymmetric response for deeply buried charges, and the steric contribution dominates for charges near the solute-solvent interface. Therefore, both mechanisms must be included in order to fully account for asymmetric solvation in general. Our calculations suggest that the steric contribution leads to a remarkable deviation from the popular “linear response” model in which the reaction potential changes linearly as a function of charge. In fact, the potential varies in a piecewise-linear fashion, i.e., with different proportionality constants depending on the sign of the charge. This discrepancy is significant even when the charge is completely buried, and holds for solutes larger than single atoms. Together, these mechanisms suggest that implicit-solvent models can be improved using a combination of affine response (an offset due to the static potential) and piecewise-linear response (due to the steric contribution). PMID:23020318
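
    The improved implicit-solvent form suggested above, an affine offset from the static potential plus a piecewise-linear reaction potential in the charge, can be written down in a few lines. In the sketch below only the ~44 kJ/mol/e magnitude of the static term comes from the abstract; its sign convention and the two response slopes are illustrative placeholders.

      # Worked illustration of the suggested implicit-solvent form: an affine offset
      # (static potential) plus a piecewise-linear reaction potential in the charge.
      # Only the ~44 kJ/mol/e magnitude comes from the abstract; signs and slopes are
      # hypothetical placeholders, not fitted values.
      import numpy as np

      phi_static = 44.0                 # static-potential contribution, kJ/mol per unit charge
      a_pos, a_neg = -350.0, -420.0     # response slopes for q > 0 and q < 0 (kJ/mol/e^2)

      def reaction_potential(q):
          slope = np.where(q >= 0.0, a_pos, a_neg)
          return phi_static + slope * q

      def charging_free_energy(q):
          """Integral of the reaction potential from 0 to q (piecewise-quadratic in q)."""
          slope = np.where(q >= 0.0, a_pos, a_neg)
          return phi_static * q + 0.5 * slope * q ** 2

      for q in (-1.0, -0.5, 0.5, 1.0):
          print(f"q = {q:+.1f} e: phi = {reaction_potential(q):8.1f} kJ/mol/e, "
                f"dG = {charging_free_energy(q):8.1f} kJ/mol")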

  14. Integrative structural annotation of de novo RNA-Seq provides an accurate reference gene set of the enormous genome of the onion (Allium cepa L.).

    PubMed

    Kim, Seungill; Kim, Myung-Shin; Kim, Yong-Min; Yeom, Seon-In; Cheong, Kyeongchae; Kim, Ki-Tae; Jeon, Jongbum; Kim, Sunggil; Kim, Do-Sun; Sohn, Seong-Han; Lee, Yong-Hwan; Choi, Doil

    2015-02-01

    The onion (Allium cepa L.) is one of the most widely cultivated and consumed vegetable crops in the world. Although a considerable amount of onion transcriptome data has been deposited into public databases, the sequences of the protein-coding genes are not accurate enough to be used, owing to non-coding sequences intermixed with the coding sequences. We generated a high-quality, annotated onion transcriptome from de novo sequence assembly and intensive structural annotation using the integrated structural gene annotation pipeline (ISGAP), which identified 54,165 protein-coding genes among 165,179 assembled transcripts totalling 203.0 Mb by eliminating the intron sequences. ISGAP performed reliable annotation, recognizing accurate gene structures based on reference proteins, and ab initio gene models of the assembled transcripts. Integrative functional annotation and gene-based SNP analysis revealed a whole biological repertoire of genes and transcriptomic variation in the onion. The method developed in this study provides a powerful tool for the construction of reference gene sets for organisms based solely on de novo transcriptome data. Furthermore, the reference genes and their variation described here for the onion represent essential tools for molecular breeding and gene cloning in Allium spp.

  15. Efficient and accurate linear algebraic methods for large-scale electronic structure calculations with nonorthogonal atomic orbitals

    NASA Astrophysics Data System (ADS)

    Teng, H.; Fujiwara, T.; Hoshi, T.; Sogabe, T.; Zhang, S.-L.; Yamamoto, S.

    2011-04-01

    The need for large-scale electronic structure calculations has arisen recently in the field of materials physics, and efficient and accurate algebraic methods for large systems of simultaneous linear equations have become greatly important. We investigate the generalized shifted conjugate orthogonal conjugate gradient method, the generalized Lanczos method, and the generalized Arnoldi method. They are solvers for the large systems of simultaneous linear equations arising from the one-electron Schrödinger equation and map the whole Hilbert space onto a small subspace called the Krylov subspace. These methods are applied to systems of fcc Au with the NRL tight-binding Hamiltonian [F. Kirchhoff et al., Phys. Rev. B 63, 195101 (2001)]. We compare the results of these methods with the exact calculation and show them to be equally accurate. The system-size dependence of the CPU time is also discussed. The generalized Lanczos method and the generalized Arnoldi method are the most suitable for large-scale molecular dynamics simulations from the viewpoint of CPU time and memory size.
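
    The underlying numerical task is the solution of shifted linear systems (zI - H)x = b for many complex shifts z. The sketch below solves such systems for a one-dimensional tight-binding chain with a direct sparse solver and reads off a local density of states; it illustrates the problem that the generalized shifted-Krylov methods address, not the COCG/Lanczos/Arnoldi algorithms themselves, and the Hamiltonian is a toy rather than the NRL fcc Au model.

      # Minimal sketch of the underlying task: solving (zI - H)x = b for many shifts z,
      # here with a direct sparse solver on a 1D tight-binding chain rather than the
      # shifted-Krylov (COCG/Lanczos/Arnoldi) solvers studied in the paper.
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n, t = 400, 1.0
      H = sp.diags([-t * np.ones(n - 1), -t * np.ones(n - 1)], offsets=[-1, 1], format="csc")

      b = np.zeros(n); b[n // 2] = 1.0                 # unit source on the middle orbital
      eta = 0.05                                       # small imaginary broadening
      energies = np.linspace(-3.0, 3.0, 7)

      for e in energies:
          z = e + 1j * eta
          x = spla.spsolve(sp.identity(n, format="csc") * z - H, b)
          # Local density of states on the source orbital: -Im G_ii / pi.
          print(f"E = {e:+.2f}: LDOS = {-x[n // 2].imag / np.pi:.4f}")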

  16. Extraction of accurate structure-factor amplitudes from Laue data: wavelength normalization with wiggler and undulator X-ray sources.

    PubMed

    Srajer, V; Crosson, S; Schmidt, M; Key, J; Schotte, F; Anderson, S; Perman, B; Ren, Z; Teng, T Y; Bourgeois, D; Wulff, M; Moffat, K

    2000-07-01

    Wavelength normalization is an essential part of processing of Laue X-ray diffraction data and is critically important for deriving accurate structure-factor amplitudes. The results of wavelength normalization for Laue data obtained in nanosecond time-resolved experiments at the ID09 beamline at the European Synchrotron Radiation Facility, Grenoble, France, are presented. Several wiggler and undulator insertion devices with complex spectra were used. The results show that even in the most challenging cases, such as wiggler/undulator tandems or single-line undulators, accurate wavelength normalization does not require unusually redundant Laue data and can be accomplished using typical Laue data sets. Single-line undulator spectra derived from Laue data compare well with the measured incident X-ray spectra. Successful wavelength normalization of the undulator data was also confirmed by the observed signal in nanosecond time-resolved experiments. Single-line undulators, which are attractive for time-resolved experiments due to their high peak intensity and low polychromatic background, are compared with wigglers, based on data obtained on the same crystal. PMID:16609201

  17. Millimeter-wave interferometry: an attractive technique for fast and accurate sensing of civil and mechanical structures

    NASA Astrophysics Data System (ADS)

    Kim, Seoktae; Nguyen, Cam

    2014-04-01

    This paper discusses RF interferometry at millimeter-wave frequencies for sensing applications and reports the development of a millimeter-wave interferometric sensor operating around 35 GHz. The sensor is realized entirely with microwave integrated circuits (MICs) and monolithic microwave integrated circuits (MMICs). It has been used for various sensing tasks, including displacement and velocity measurement. The sensor achieves a resolution and maximum error of only 10 μm and 27 μm, respectively, for displacement sensing and can measure velocities as low as 27.7 mm/s with a resolution of about 2.7 mm/s. Quick response and accurate sensing, as demonstrated by the developed millimeter-wave interferometric sensor, make millimeter-wave interferometry attractive for sensing of various civil and mechanical structures.
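
    The basic interferometric measurement principle, recovering displacement from the phase of the reflected carrier, can be sketched as below for simulated in-phase/quadrature outputs of a 35 GHz homodyne front end: unwrap the phase and scale by λ/(4π). Signal levels, noise and target motion are invented, and hardware effects such as DC offsets and I/Q imbalance are ignored.

      # Sketch of how a homodyne interferometer recovers displacement from its I/Q
      # outputs: unwrap the phase and scale by lambda/(4*pi). Signals are simulated.
      import numpy as np

      c = 3.0e8
      f0 = 35.0e9                       # ~35 GHz carrier, as in the sensor described
      lam = c / f0                      # wavelength, roughly 8.6 mm

      t = np.linspace(0.0, 1.0, 5000)
      d_true = 2e-3 * np.sin(2 * np.pi * 3.0 * t)        # 2 mm, 3 Hz target motion (made up)

      phase = 4 * np.pi * d_true / lam                   # round-trip phase shift
      i_sig = np.cos(phase) + 0.01 * np.random.default_rng(0).standard_normal(t.size)
      q_sig = np.sin(phase) + 0.01 * np.random.default_rng(1).standard_normal(t.size)

      d_est = np.unwrap(np.arctan2(q_sig, i_sig)) * lam / (4 * np.pi)
      print(f"max |error| = {np.max(np.abs(d_est - d_true)) * 1e6:.1f} um")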

  18. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    PubMed

    Chowdhury, Amor; Sarjaš, Andrej

    2016-01-01

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation. PMID:27649197

  19. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter

    PubMed Central

    Chowdhury, Amor; Sarjaš, Andrej

    2016-01-01

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation. PMID:27649197

  20. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    PubMed

    Chowdhury, Amor; Sarjaš, Andrej

    2016-09-15

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
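
    A minimal sensor-fusion sketch in the spirit of the papers above is given below, assuming the third-party filterpy package for the Unscented Kalman Filter. The constant-velocity process model, the inverse-square "Hall sensor" measurement law and all numerical constants are illustrative stand-ins, not the FEM-derived nonlinear dynamic model of the actual system.

      # Sketch of UKF-based fusion with a nonlinear measurement model, using the
      # third-party filterpy package (assumed available). The inverse-square "Hall
      # sensor" law and all constants are illustrative, not the paper's FEM model.
      import numpy as np
      from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

      dt = 0.001

      def fx(x, dt):
          """Constant-velocity motion of the levitated object: state = [gap, gap rate]."""
          return np.array([x[0] + dt * x[1], x[1]])

      def hx(x):
          """Toy Hall-sensor reading falling off with the air gap (arbitrary units)."""
          return np.array([1.0 / (0.002 + x[0]) ** 2])

      points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=0.0)
      ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
      ukf.x = np.array([0.004, 0.0])            # initial guess: 4 mm gap, at rest
      ukf.P = np.diag([1e-6, 1e-2])
      ukf.Q = np.diag([1e-10, 1e-4])
      ukf.R = np.array([[25.0]])

      rng = np.random.default_rng(0)
      true_gap = 0.004
      for k in range(200):
          true_gap += -0.000002                 # object slowly approaching the magnet
          z = hx([true_gap])[0] + rng.normal(0.0, 5.0)
          ukf.predict()
          ukf.update(np.array([z]))
      print(f"true gap {true_gap * 1e3:.3f} mm, estimated {ukf.x[0] * 1e3:.3f} mm")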

  1. Regolith-structure modeling

    NASA Astrophysics Data System (ADS)

    Ko, Hon-Yim; Sture, Stein

    1991-11-01

    shielding for habitation and workspace. The habitat module is treated as a rigid cylindrical tube with a smooth exterior. By making the cylinder rigid, a complex interaction problem is reduced to a situation where we can consider the support regolith and the shielding regolith as behaving independently of the structural properties of the cylindrical structure. Medium-dense lunar simulant was placed around a scaled model of the habitat module to provide a radiation shield. This embankment-type shield was constructed in relatively thin but fine layers by compacting, by mechanical vibratory means, layer upon layer of simulant placed adjacent to the horizontally-aligned cylinder. The model described above was studied in a geotechnical centrifuge, which allows for the scaling of model dimensions to prototype dimensions by increasing the acceleration of gravity on the model. The deformation response can be scaled up to prototype dimensions to provide an assessment of the deformation patterns of the lunar structure. The actual process of local and/or global growth of instabilities or slip planes can also be observed.

  2. Accurate Monte Carlo modeling of cyclotrons for optimization of shielding and activation calculations in the biomedical field

    NASA Astrophysics Data System (ADS)

    Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano

    2015-11-01

    Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widespread and well established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation of both shielding and materials activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for the transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluations of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron including the energy selection system. The simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of the target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and compared with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of ¹⁸F, produced by the well-known ¹⁸O(p,n)¹⁸F reaction, was calculated and compared with the IAEA recommended

  3. Surface electron density models for accurate ab initio molecular dynamics with electronic friction

    NASA Astrophysics Data System (ADS)

    Novko, D.; Blanco-Rey, M.; Alducin, M.; Juaristi, J. I.

    2016-06-01

    Ab initio molecular dynamics with electronic friction (AIMDEF) is a valuable methodology to study the interaction of atomic particles with metal surfaces. This method, in which the effect of low-energy electron-hole (e-h) pair excitations is treated within the local density friction approximation (LDFA) [Juaristi et al., Phys. Rev. Lett. 100, 116102 (2008), 10.1103/PhysRevLett.100.116102], can provide an accurate description of both e-h pair and phonon excitations. In practice, its application becomes a complicated task in situations involving substantial surface atom displacements, because the LDFA requires knowledge, at each integration step, of the bare surface electron density. In this work, we propose three different methods of calculating on-the-fly the electron density of the distorted surface, and we discuss their suitability under typical surface distortions. The investigated methods are used in AIMDEF simulations for three illustrative adsorption cases, namely, dissociated H2 on Pd(100), N on Ag(111), and N2 on Fe(110). Our AIMDEF calculations performed with the three approaches highlight the importance of going beyond the frozen surface density to accurately describe the energy released into e-h pair excitations in the case of large surface atom displacements.

  4. Effective and accurate approach for modeling of commensurate-incommensurate transition in krypton monolayer on graphite.

    PubMed

    Ustinov, E A

    2014-10-01

    The commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method for the prediction of adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems.

  5. Accurate cortical tissue classification on MRI by modeling cortical folding patterns.

    PubMed

    Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea

    2015-09-01

    Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts as well as by disparities in tissue composition, laminar architecture and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding were regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses showed more accurate white matter-gray matter (GM) interface classification for the proposed framework compared with the other algorithms, particularly in central and occipital cortices that generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained, even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery.
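
    The per-parcel classification step can be illustrated with a one-dimensional toy: cluster the voxel intensities of a single parcel with nonparametric mean shift instead of fitting a whole-volume histogram. The snippet below uses simulated CSF/GM/WM-like intensities and an assumed bandwidth; it shows the clustering idea only, not the anatomy-driven parcellation or the MRI preprocessing.

      # Toy illustration of the per-parcel idea: cluster the intensities of one cortical
      # parcel with (nonparametric) mean shift instead of a whole-volume histogram.
      # Intensities are simulated; the real method works on anatomically defined parcels.
      import numpy as np
      from sklearn.cluster import MeanShift

      rng = np.random.default_rng(0)
      # Simulated voxel intensities in one parcel: CSF-, GM- and WM-like components.
      intensities = np.concatenate([
          rng.normal(30, 5, 300),      # CSF-like
          rng.normal(70, 6, 900),      # gray-matter-like
          rng.normal(110, 5, 800),     # white-matter-like
      ]).reshape(-1, 1)

      ms = MeanShift(bandwidth=12.0).fit(intensities)
      for label, centre in enumerate(ms.cluster_centers_.ravel()):
          count = int(np.sum(ms.labels_ == label))
          print(f"class {label}: centre {centre:6.1f}, {count} voxels")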

  6. Effective and accurate approach for modeling of commensurate–incommensurate transition in krypton monolayer on graphite

    SciTech Connect

    Ustinov, E. A.

    2014-10-07

    The commensurate–incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method for the prediction of adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton–carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas–solid and solid–solid systems.

  7. Effective and accurate approach for modeling of commensurate-incommensurate transition in krypton monolayer on graphite.

    PubMed

    Ustinov, E A

    2014-10-01

    The commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method for the prediction of adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems. PMID:25296827

  8. Accurate cortical tissue classification on MRI by modeling cortical folding patterns.

    PubMed

    Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea

    2015-09-01

    Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts as well as by disparities in tissue composition, laminar architecture and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding were regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses showed more accurate white matter-gray matter (GM) interface classification for the proposed framework compared with the other algorithms, particularly in central and occipital cortices that generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained, even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery. PMID:26037453

  9. Hierarchical Bayesian model updating for structural identification

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Moaveni, Babak; Lombaert, Geert; Papadimitriou, Costas

    2015-12-01

    A new probabilistic finite element (FE) model updating technique based on Hierarchical Bayesian modeling is proposed for identification of civil structural systems under changing ambient/environmental conditions. The performance of the proposed technique is investigated for (1) uncertainty quantification of model updating parameters, and (2) probabilistic damage identification of the structural systems. Accurate estimation of the uncertainty in modeling parameters such as mass or stiffness is a challenging task. Several Bayesian model updating frameworks have been proposed in the literature that can successfully provide the "parameter estimation uncertainty" of model parameters with the assumption that there is no underlying inherent variability in the updating parameters. However, this assumption may not be valid for civil structures where structural mass and stiffness have inherent variability due to different sources of uncertainty such as changing ambient temperature, temperature gradient, wind speed, and traffic loads. Hierarchical Bayesian model updating is capable of predicting the overall uncertainty/variability of updating parameters by assuming time-variability of the underlying linear system. A general solution based on Gibbs Sampler is proposed to estimate the joint probability distributions of the updating parameters. The performance of the proposed Hierarchical approach is evaluated numerically for uncertainty quantification and damage identification of a 3-story shear building model. Effects of modeling errors and incomplete modal data are considered in the numerical study.
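
    A stripped-down version of the hierarchical idea is sketched below: per-dataset stiffness estimates are modeled as draws from a population N(mu, sigma^2), and a two-block Gibbs sampler recovers the posterior of the population mean and variability. The synthetic "identified" values, the priors, and the simplification that per-dataset identification uncertainty is ignored are all assumptions of this sketch, not features of the full framework.

      # Minimal sketch of the hierarchical idea: per-dataset stiffness estimates are
      # treated as draws from a population N(mu, sigma^2), and a two-block Gibbs
      # sampler recovers the distribution of (mu, sigma^2). Per-dataset identification
      # uncertainty, which the full method also propagates, is ignored here.
      import numpy as np

      rng = np.random.default_rng(42)
      theta = rng.normal(0.95, 0.04, size=30)     # synthetic identified stiffness multipliers
      T = theta.size
      a0, b0 = 2.0, 0.001                         # weak inverse-gamma prior on sigma^2

      mu, sigma2 = theta.mean(), theta.var()
      mu_draws, s2_draws = [], []
      for it in range(5000):
          # mu | sigma^2, theta  ~  N(mean(theta), sigma^2 / T)   (flat prior on mu)
          mu = rng.normal(theta.mean(), np.sqrt(sigma2 / T))
          # sigma^2 | mu, theta  ~  Inv-Gamma(a0 + T/2, b0 + 0.5 * sum((theta - mu)^2))
          sigma2 = 1.0 / rng.gamma(a0 + T / 2.0, 1.0 / (b0 + 0.5 * np.sum((theta - mu) ** 2)))
          if it >= 1000:                          # discard burn-in
              mu_draws.append(mu); s2_draws.append(sigma2)

      print(f"mu = {np.mean(mu_draws):.3f} +/- {np.std(mu_draws):.3f}, "
            f"sigma = {np.mean(np.sqrt(s2_draws)):.3f}")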

  10. Accurate structure, thermodynamics and spectroscopy of medium-sized radicals by hybrid Coupled Cluster/Density Functional Theory approaches: the case of phenyl radical

    PubMed Central

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Egidi, Franco; Puzzarini, Cristina

    2015-01-01

    The CCSD(T) model coupled with extrapolation to the complete basis-set limit and additive approaches represents the “golden standard” for the structural and spectroscopic characterization of building blocks of biomolecules and nanosystems. However, when open-shell systems are considered, additional problems related to both specific computational difficulties and the need of obtaining spin-dependent properties appear. In this contribution, we present a comprehensive study of the molecular structure and spectroscopic (IR, Raman, EPR) properties of the phenyl radical with the aim of validating an accurate computational protocol able to deal with conjugated open-shell species. We succeeded in obtaining reliable and accurate results, thus confirming and, partly, extending the available experimental data. The main issue to be pointed out is the need of going beyond the CCSD(T) level by including a full treatment of triple excitations in order to fulfil the accuracy requirements. On the other hand, the reliability of density functional theory in properly treating open-shell systems has been further confirmed. PMID:23802956

  11. Structural model of uramarsite

    SciTech Connect

    Rastsvetaeva, R. K.; Sidorenko, G. A.; Ivanova, A. G.; Chukanov, N. V.

    2008-09-15

    The structural model of uramarsite, a new mineral of the uran-mica family from the Bota-Burum deposit (South Kazakhstan), is determined using a single-crystal X-ray diffraction analysis. The parameters of the triclinic unit cell are as follows: a = 7.173(2) Å, b = 7.167(5) Å, c = 9.30(1) Å, α = 90.13(7)°, β = 90.09(4)°, γ = 89.96(4)°, and space group P1. The crystal chemical formula of uramarsite is: (UO2)2[AsO4][PO4,AsO4][NH4][H3O] · 6H2O (Z = 1). Uramarsite is the second ammonium-containing mineral of uranium and an arsenate analogue of uramphite. In the case of uramarsite, the lowering of the symmetry from tetragonal to triclinic, which is accompanied by a triclinic distortion of the tetragonal unit cell, is apparently caused by the ordering of the As and P atoms and the NH4, H3O, and H2O groups.

  12. Structural model of uramarsite

    NASA Astrophysics Data System (ADS)

    Rastsvetaeva, R. K.; Sidorenko, G. A.; Ivanova, A. G.; Chukanov, N. V.

    2008-09-01

    The structural model of uramarsite, a new mineral of the uran-mica family from the Bota-Burum deposit (South Kazakhstan), is determined using a single-crystal X-ray diffraction analysis. The parameters of the triclinic unit cell are as follows: a = 7.173(2) Å, b = 7.167(5) Å, c = 9.30(1) Å, α = 90.13(7)°, β = 90.09(4)°, γ = 89.96(4)°, and space group P1. The crystal chemical formula of uramarsite is: (UO2)2[AsO4][PO4,AsO4][NH4][H3O] · 6H2O ( Z = 1). Uramarsite is the second ammonium-containing mineral of uranium and an arsenate analogue of uramphite. In the case of uramarsite, the lowering of the symmetry from tetragonal to triclinic, which is accompanied by a triclinic distortion of the tetragonal unit cell, is apparently caused by the ordering of the As and P atoms and the NH4, H3O, and H2O groups.
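
    As a consistency check on the reported lattice parameters (not a value quoted in the abstract), the triclinic unit-cell volume follows from the standard formula V = abc·sqrt(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ):

      # Unit-cell volume of uramarsite from the reported triclinic parameters.
      import math

      a, b, c = 7.173, 7.167, 9.30                            # angstroms
      al, be, ga = map(math.radians, (90.13, 90.09, 89.96))   # degrees -> radians
      V = a * b * c * math.sqrt(
          1 - math.cos(al)**2 - math.cos(be)**2 - math.cos(ga)**2
          + 2 * math.cos(al) * math.cos(be) * math.cos(ga))
      print(f"V ~ {V:.1f} A^3")   # close to a*b*c, since all angles are near 90 degrees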

  13. Structural Modeling Using "Scanning and Mapping" Technique

    NASA Technical Reports Server (NTRS)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module - Structural Modeling. Three computer software packages have been selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available on the market. For our purpose, it serves as an interface to generate structural models of any particular engine parts or assemblies, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a so-called "scanning and mapping" technique, which is a relatively new technique. The basic idea is to produce a full and accurate 3D structural model by tracing on multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  14. Evolutionary Triplet Models of Structured RNA

    PubMed Central

    Bradley, Robert K.; Holmes, Ian

    2009-01-01

    The reconstruction and synthesis of ancestral RNAs is a feasible goal for paleogenetics. This will require new bioinformatics methods, including a robust statistical framework for reconstructing histories of substitutions, indels and structural changes. We describe a “transducer composition” algorithm for extending pairwise probabilistic models of RNA structural evolution to models of multiple sequences related by a phylogenetic tree. This algorithm draws on formal models of computational linguistics as well as the 1985 protosequence algorithm of David Sankoff. The output of the composition algorithm is a multiple-sequence stochastic context-free grammar. We describe dynamic programming algorithms, which are robust to null cycles and empty bifurcations, for parsing this grammar. Example applications include structural alignment of non-coding RNAs, propagation of structural information from an experimentally-characterized sequence to its homologs, and inference of the ancestral structure of a set of diverged RNAs. We implemented the above algorithms for a simple model of pairwise RNA structural evolution; in particular, the algorithms for maximum likelihood (ML) alignment of three known RNA structures and a known phylogeny and inference of the common ancestral structure. We compared this ML algorithm to a variety of related, but simpler, techniques, including ML alignment algorithms for simpler models that omitted various aspects of the full model and also a posterior-decoding alignment algorithm for one of the simpler models. In our tests, incorporation of basepair structure was the most important factor for accurate alignment inference; appropriate use of posterior-decoding was next; and fine details of the model were least important. Posterior-decoding heuristics can be substantially faster than exact phylogenetic inference, so this motivates the use of sum-over-pairs heuristics where possible (and approximate sum-over-pairs). For more exact probabilistic

  15. Multi Sensor Data Integration for an Accurate 3D Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique of data integration between two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and non-complex terrain. However, the 3D model generated automatically from aerial imagery generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, the automatically generated 3D model from aerial imagery also suffers in many cases from undulated road surfaces, non-conforming building shapes, and loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensated for each data set's weaknesses and helped to create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise free and without unnecessary details.

  16. How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis

    ERIC Educational Resources Information Center

    Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.

    2011-01-01

    Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
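
    The abstract is truncated before any model details; as general background only, standard Bayesian Knowledge Tracing updates the probability that a student knows a skill after each response using four parameters (initial knowledge, learn, guess, slip). A minimal sketch with illustrative parameter values:

      # Standard Bayesian Knowledge Tracing (BKT) update -- generic illustration,
      # not the specific models or parameters compared in the paper.
      def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
          if correct:
              post = p_know * (1 - p_slip) / (p_know * (1 - p_slip) + (1 - p_know) * p_guess)
          else:
              post = p_know * p_slip / (p_know * p_slip + (1 - p_know) * (1 - p_guess))
          return post + (1 - post) * p_learn     # chance of learning at this opportunity

      p = 0.3                                    # prior probability the skill is known
      for outcome in [1, 1, 0, 1]:               # toy response sequence
          p = bkt_update(p, outcome)
      print(f"P(known) after 4 opportunities ~ {p:.2f}")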

  17. Accurate modeling and inversion of electrical resistivity data in the presence of metallic infrastructure with known location and dimension

    SciTech Connect

    Johnson, Timothy C.; Wellman, Dawn M.

    2015-06-26

    Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.

  18. Accurate molecular structure and spectroscopic properties of nucleobases: a combined computational-microwave investigation of 2-thiouracil as a case study.

    PubMed

    Puzzarini, Cristina; Biczysko, Malgorzata; Barone, Vincenzo; Peña, Isabel; Cabezas, Carlos; Alonso, José L

    2013-10-21

    The computational composite scheme purposely set up for accurately describing the electronic structure and spectroscopic properties of small biomolecules has been applied to the first study of the rotational spectrum of 2-thiouracil. The experimental investigation was made possible thanks to the combination of the laser ablation technique with Fourier transform microwave spectrometers. The joint experimental-computational study allowed us to determine the accurate molecular structure and spectroscopic properties of the title molecule, but more importantly, it demonstrates a reliable approach for the accurate investigation of isolated small biomolecules.

  19. One-photon mass-analyzed threshold ionization (MATI) spectroscopy of pyridine: determination of accurate ionization energy and cationic structure.

    PubMed

    Lee, Yu Ran; Kang, Do Won; Kim, Hong Lae; Kwon, Chan Ho

    2014-11-01

    Ionization energies and cationic structures of pyridine were intensively investigated utilizing one-photon mass-analyzed threshold ionization (MATI) spectroscopy with vacuum ultraviolet radiation generated by four-wave difference frequency mixing in Kr. The present one-photon high-resolution MATI spectrum of pyridine demonstrated a much finer and richer vibrational structure than that of the previously reported two-photon MATI spectrum. From the MATI spectrum and photoionization efficiency curve, the accurate ionization energy of the ionic ground state of pyridine was confidently determined to be 73,570 ± 6 cm(-1) (9.1215 ± 0.0007 eV). The observed spectrum was almost completely assigned by utilizing Franck-Condon factors and vibrational frequencies calculated through adjustments of the geometrical parameters of cationic pyridine at the B3LYP/cc-pVTZ level. A unique feature unveiled through rigorous analysis was the prominent progression of the 10 vibrational mode, which corresponds to in-plane ring bending, and the combination of other totally symmetric fundamentals with the ring bending overtones, which contribute to the geometrical change upon ionization. Notably, the remaining peaks originate from the upper electronic state ((2)A2), as predicted by high-resolution photoelectron spectroscopy studies and symmetry-adapted cluster configuration interaction calculations. Based on the quantitatively good agreement between the experimental and calculated results, it was concluded that upon ionization the pyridine cation in the ground electronic state should have a planar structure of C(2v) symmetry through the C-N axis.
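
    The two quoted values of the ionization energy are mutually consistent (1 eV corresponds to about 8065.544 cm-1), which the one-line check below confirms:

      # Unit consistency check for the quoted ionization energy.
      print(f"{73570 / 8065.544:.4f} eV")   # ~9.1215 eV, matching the quoted value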

  20. One-photon mass-analyzed threshold ionization (MATI) spectroscopy of pyridine: Determination of accurate ionization energy and cationic structure

    SciTech Connect

    Lee, Yu Ran; Kang, Do Won; Kim, Hong Lae; Kwon, Chan Ho

    2014-11-07

    Ionization energies and cationic structures of pyridine were intensively investigated utilizing one-photon mass-analyzed threshold ionization (MATI) spectroscopy with vacuum ultraviolet radiation generated by four-wave difference frequency mixing in Kr. The present one-photon high-resolution MATI spectrum of pyridine demonstrated a much finer and richer vibrational structure than that of the previously reported two-photon MATI spectrum. From the MATI spectrum and photoionization efficiency curve, the accurate ionization energy of the ionic ground state of pyridine was confidently determined to be 73 570 ± 6 cm⁻¹ (9.1215 ± 0.0007 eV). The observed spectrum was almost completely assigned by utilizing Franck-Condon factors and vibrational frequencies calculated through adjustments of the geometrical parameters of cationic pyridine at the B3LYP/cc-pVTZ level. A unique feature unveiled through rigorous analysis was the prominent progression of the 10 vibrational mode, which corresponds to in-plane ring bending, and the combination of other totally symmetric fundamentals with the ring bending overtones, which contribute to the geometrical change upon ionization. Notably, the remaining peaks originate from the upper electronic state (²A₂), as predicted by high-resolution photoelectron spectroscopy studies and symmetry-adapted cluster configuration interaction calculations. Based on the quantitatively good agreement between the experimental and calculated results, it was concluded that upon ionization the pyridine cation in the ground electronic state should have a planar structure of C2v symmetry through the C-N axis.

  1. 3D Modeling of Branching Structures for Anatomical Instruction

    PubMed Central

    Mattingly, William A.; Chariker, Julia H.; Paris, Richard; Chang, Dar-jen; Pani, John R.

    2015-01-01

    Branching tubular structures are prevalent in many different organic and synthetic settings. From trees and vegetation in nature, to vascular structures throughout human and animal biology, these structures are always candidates for new methods of graphical and visual expression. We present a modeling tool for the creation and interactive modification of these structures. Parameters such as thickness and position of branching structures can be modified, while geometric constraints ensure that the resulting mesh will have an accurate anatomical structure by not having inconsistent geometry. We apply this method to the creation of accurate representations of the different types of retinal cells in the human eye. This method allows a user to quickly produce anatomically accurate structures with low polygon counts that are suitable for rendering at interactive rates on commodity computers and mobile devices. PMID:27087764

  2. Structural dynamics system model reduction

    NASA Technical Reports Server (NTRS)

    Chen, J. C.; Rose, T. L.; Wada, B. K.

    1987-01-01

    Loads analysis for structural dynamic systems is usually performed using finite element models. Because of the complexity of the structural system, the model contains a large number of degrees of freedom. The large model is necessary since details of the stress, loads and responses due to mission environments are computed. However, a simplified model is needed for other tasks such as pre-test analysis for modal testing, and control-structural interaction studies. A systematic method of model reduction for modal test analysis is presented. Perhaps it will be of some help in developing a simplified model for the control studies.
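
    The abstract does not state which reduction method is used; one classic choice for this purpose is Guyan (static condensation) reduction, sketched below for a generic stiffness/mass pair. The partitioning into master and slave degrees of freedom and the toy 4-DOF chain are assumptions for illustration only.

      # Guyan (static condensation) reduction of K and M onto "master" DOFs --
      # a textbook reduction offered as an illustration, not the paper's method.
      import numpy as np

      def guyan_reduce(K, M, master):
          n = K.shape[0]
          slave = [i for i in range(n) if i not in master]
          Ksm, Kss = K[np.ix_(slave, master)], K[np.ix_(slave, slave)]
          T = np.vstack([np.eye(len(master)), -np.linalg.solve(Kss, Ksm)])
          order = list(master) + slave                   # reorder to (master, slave)
          Ko, Mo = K[np.ix_(order, order)], M[np.ix_(order, order)]
          return T.T @ Ko @ T, T.T @ Mo @ T

      k = 1000.0                                         # toy 4-DOF spring-mass chain
      K = k * np.array([[ 2., -1.,  0.,  0.],
                        [-1.,  2., -1.,  0.],
                        [ 0., -1.,  2., -1.],
                        [ 0.,  0., -1.,  2.]])
      M = np.eye(4)
      Kr, Mr = guyan_reduce(K, M, master=[0, 3])         # keep the end DOFs only
      print(np.round(Kr, 1))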

  3. Active appearance model and deep learning for more accurate prostate segmentation on MRI

    NASA Astrophysics Data System (ADS)

    Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.

    2016-03-01

    Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary specifically at apex and base levels. We propose a supervised machine learning model that combines atlas based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM model and Deep Learning to achieve significant segmentation accuracy.
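
    For reference, the reported Dice Similarity Coefficient compares binary segmentation masks as 2|A∩B|/(|A|+|B|); a minimal computation on hypothetical toy masks:

      # Dice Similarity Coefficient between two binary segmentation masks.
      import numpy as np

      def dice(a, b):
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      pred = np.zeros((64, 64), dtype=bool);  pred[16:48, 16:48] = True   # toy prediction
      truth = np.zeros((64, 64), dtype=bool); truth[20:52, 16:48] = True  # toy ground truth
      print(f"DSC = {dice(pred, truth):.3f}")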

  4. Accurate calculation of binding energies for molecular clusters - Assessment of different models

    NASA Astrophysics Data System (ADS)

    Friedrich, Joachim; Fiedler, Benjamin

    2016-06-01

    In this work we test different strategies to compute high-level benchmark energies for medium-sized molecular clusters. We use the incremental scheme to obtain CCSD(T)/CBS energies for our test set and carefully validate the accuracy for binding energies by statistical measures. The local errors of the incremental scheme are <1 kJ/mol. Since they are smaller than the basis set errors, we obtain higher total accuracy due to the applicability of larger basis sets. The final CCSD(T)/CBS benchmark values are ΔE = -278.01 kJ/mol for (H2O)10, ΔE = -221.64 kJ/mol for (HF)10, ΔE = -45.63 kJ/mol for (CH4)10, ΔE = -19.52 kJ/mol for (H2)20 and ΔE = -7.38 kJ/mol for (H2)10. Furthermore we test state-of-the-art wave-function-based and DFT methods. Our benchmark data will be very useful for critical validations of new methods. We find focal-point-methods for estimating CCSD(T)/CBS energies to be highly accurate and efficient. For foQ-i3CCSD(T)-MP2/TZ we get a mean error of 0.34 kJ/mol and a standard deviation of 0.39 kJ/mol.

  5. Accurate characterization and modeling of transmission lines for GaAs MMIC's

    NASA Astrophysics Data System (ADS)

    Finlay, Hugh J.; Jansen, Rolf H.; Jenkins, John A.; Eddison, Ian G.

    1988-06-01

    The authors discuss computer-aided design (CAD) tools together with high-accuracy microwave measurements to realize improved design data for GaAs monolithic microwave integrated circuits (MMICs). In particular, a combined theoretical and experimental approach to the generation of an accurate design database for transmission lines on GaAs MMICs is presented. The theoretical approach is based on an improved transmission-line theory which is part of the spectral-domain hybrid-mode computer program MCLINE. The benefit of this approach in the design of multidielectric-media transmission lines is described. The program was designed to include loss mechanisms in all dielectric layers and to include conductor and surface roughness loss contributions. As an example, using GaAs ring resonator techniques covering 2 to 24 GHz, accuracies in effective dielectric constant and loss of 1 percent and 15 percent respectively, are presented. By combining theoretical and experimental techniques, a generalized MMIC microstrip design database is outlined.

  6. Comparative Protein Structure Modeling Using MODELLER.

    PubMed

    Webb, Benjamin; Sali, Andrej

    2016-01-01

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. © 2016 by John Wiley & Sons, Inc. PMID:27322406
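
    The workflow summarized here (single template, automodel class) follows the standard MODELLER tutorial; a minimal script in that style is shown below, with the alignment file name a placeholder that must point to an actual PIR alignment of TvLDH against template 1bdmA.

      # Minimal comparative-modeling run with MODELLER's automodel class,
      # following the standard TvLDH example; 'TvLDH-1bdmA.ali' is a placeholder.
      from modeller import *
      from modeller.automodel import *

      env = environ()
      env.io.atom_files_directory = ['.', '../atom_files']   # where template PDBs live

      a = automodel(env,
                    alnfile='TvLDH-1bdmA.ali',   # target-template alignment (PIR format)
                    knowns='1bdmA',              # template structure code
                    sequence='TvLDH',            # target sequence code
                    assess_methods=(assess.DOPE, assess.GA341))
      a.starting_model = 1
      a.ending_model = 5                         # build five candidate models
      a.make()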

  7. Comparative Protein Structure Modeling Using MODELLER.

    PubMed

    Webb, Benjamin; Sali, Andrej

    2016-06-20

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. © 2016 by John Wiley & Sons, Inc.

  8. Selective 2′-hydroxyl acylation analyzed by primer extension and mutational profiling (SHAPE-MaP) for direct, versatile, and accurate RNA structure analysis

    PubMed Central

    Smola, Matthew J.; Rice, Greggory M.; Busan, Steven; Siegfried, Nathan A.; Weeks, Kevin M.

    2016-01-01

    SHAPE chemistries exploit small electrophilic reagents that react with the 2′-hydroxyl group to interrogate RNA structure at single-nucleotide resolution. Mutational profiling (MaP) identifies modified residues based on the ability of reverse transcriptase to misread a SHAPE-modified nucleotide and then counting the resulting mutations by massively parallel sequencing. The SHAPE-MaP approach measures the structure of large and transcriptome-wide systems as accurately as for simple model RNAs. This protocol describes the experimental steps, implemented over three days, required to perform SHAPE probing and construct multiplexed SHAPE-MaP libraries suitable for deep sequencing. These steps include RNA folding and SHAPE structure probing, mutational profiling by reverse transcription, library construction, and sequencing. Automated processing of MaP sequencing data is accomplished using two software packages. ShapeMapper converts raw sequencing files into mutational profiles, creates SHAPE reactivity plots, and provides useful troubleshooting information, often within an hour. SuperFold uses these data to model RNA secondary structures, identify regions with well-defined structures, and visualize probable and alternative helices, often in under a day. We illustrate these algorithms with the E. coli thiamine pyrophosphate riboswitch, E. coli 16S rRNA, and HIV-1 genomic RNAs. SHAPE-MaP can be used to make nucleotide-resolution biophysical measurements of individual RNA motifs, rare components of complex RNA ensembles, and entire transcriptomes. The straightforward MaP strategy greatly expands the number, length, and complexity of analyzable RNA structures. PMID:26426499

  9. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    PubMed

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required.

  10. A Simple, Accurate Model for Alkyl Adsorption on Late Transition Metals

    SciTech Connect

    Montemore, Matthew M.; Medlin, James W.

    2013-01-18

    A simple model that predicts the adsorption energy of an arbitrary alkyl in the high-symmetry sites of late transition metal fcc(111) and related surfaces is presented. The model makes predictions based on a few simple attributes of the adsorbate and surface, including the d-shell filling and the matrix coupling element, as well as the adsorption energy of methyl in the top sites. We use the model to screen surfaces for alkyl chain-growth properties and to explain trends in alkyl adsorption strength, site preference, and vibrational softening.

  11. Generalized Stoner-Wohlfarth model accurately describing the switching processes in pseudo-single ferromagnetic particles

    SciTech Connect

    Cimpoesu, Dorin; Stoleriu, Laurentiu; Stancu, Alexandru

    2013-12-14

    We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and we phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of SW model, the concept of critical curve and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
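
    For context, in the classical uniform-rotation Stoner-Wohlfarth limit the switching field follows the astroid h_sw(psi) = (cos^(2/3) psi + sin^(2/3) psi)^(-3/2) in units of the anisotropy field; the sketch below evaluates only this classical baseline, not the generalized anisotropy energy proposed in the paper.

      # Classical Stoner-Wohlfarth switching field vs. field angle (astroid),
      # in units of the anisotropy field H_K = 2K/(mu0*Ms). Baseline only.
      import numpy as np

      psi = np.radians(np.arange(1, 90, 10))          # field angle from the easy axis
      h_sw = (np.cos(psi)**(2 / 3) + np.sin(psi)**(2 / 3))**(-1.5)
      for angle, h in zip(np.degrees(psi), h_sw):
          print(f"psi = {angle:4.0f} deg  ->  H_sw / H_K = {h:.3f}")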

  12. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    NASA Astrophysics Data System (ADS)

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  13. A model for the accurate computation of the lateral scattering of protons in water.

    PubMed

    Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-02-21

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
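
    The full Moliere treatment used in the paper has no compact closed form; as a much simpler point of reference, the Highland/PDG approximation for the RMS multiple-scattering angle can be evaluated as below. The water radiation length and the 150 MeV proton kinematics are illustrative numbers, not values from the paper.

      # Highland (PDG) approximation for the RMS multiple-scattering angle:
      #   theta_0 = (13.6 MeV / (beta*c*p)) * z * sqrt(x/X0) * (1 + 0.038*ln(x/X0))
      # This is far cruder than the full Moliere model discussed in the abstract.
      import math

      def highland_theta0(p_mev, beta, thickness_cm, x0_cm=36.08, z=1):
          t = thickness_cm / x0_cm
          return 13.6 / (beta * p_mev) * z * math.sqrt(t) * (1 + 0.038 * math.log(t))

      # ~150 MeV proton (p ~ 552 MeV/c, beta ~ 0.51) through 5 cm of water (X0 ~ 36.08 cm)
      print(f"theta_0 ~ {1e3 * highland_theta0(552.0, 0.51, 5.0):.1f} mrad")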

  14. A model for the accurate computation of the lateral scattering of protons in water

    NASA Astrophysics Data System (ADS)

    Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.

    2016-02-01

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.

  15. Dynamic saturation in Semiconductor Optical Amplifiers: accurate model, role of carrier density, and slow light.

    PubMed

    Berger, Perrine; Alouini, Mehdi; Bourderionnet, Jérôme; Bretenaker, Fabien; Dolfi, Daniel

    2010-01-18

    We developed an improved model in order to predict the RF behavior and the slow light properties of the SOA valid for any experimental conditions. It takes into account the dynamic saturation of the SOA, which can be fully characterized by a simple measurement, and only relies on material fitting parameters, independent of the optical intensity and the injected current. The present model is validated by showing a good agreement with experiments for small and large modulation indices.

  16. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers are dependent on an entire distribution, possibly depending on multiple compilers and special instructions specific to the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.

  17. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    PubMed Central

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs, which are already provided, we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang’E-1, compared to the existing space resection model. PMID:27077855

  18. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    PubMed

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-04-11

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs, which are already provided, we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang'E-1, compared to the existing space resection model.

  19. Oscillating water column structural model

    SciTech Connect

    Copeland, Guild; Bull, Diana L; Jepsen, Richard Alan; Gordon, Margaret Ellen

    2014-09-01

    An oscillating water column (OWC) wave energy converter is a structure with an opening to the ocean below the free surface, i.e. a structure with a moonpool. Two structural models for a non-axisymmetric terminator design OWC, the Backward Bent Duct Buoy (BBDB), are discussed in this report. The results of this structural model design study are intended to inform experiments and modeling underway in support of the U.S. Department of Energy (DOE) initiated Reference Model Project (RMP). A detailed design developed by Re Vision Consulting used stiffeners and girders to stabilize the structure against the hydrostatic loads experienced by a BBDB device. Additional support plates were added to this structure to account for loads arising from the mooring line attachment points. A simplified structure was designed in a modular fashion. This simplified design allows easy alterations to the buoyancy chambers and uncomplicated analysis of resulting changes in buoyancy.

  20. Use of human in vitro skin models for accurate and ethical risk assessment: metabolic considerations.

    PubMed

    Hewitt, Nicola J; Edwards, Robert J; Fritsche, Ellen; Goebel, Carsten; Aeby, Pierre; Scheel, Julia; Reisinger, Kerstin; Ouédraogo, Gladys; Duche, Daniel; Eilstein, Joan; Latil, Alain; Kenny, Julia; Moore, Claire; Kuehnl, Jochen; Barroso, Joao; Fautz, Rolf; Pfuhler, Stefan

    2013-06-01

    Several human skin models employing primary cells and immortalized cell lines used as monocultures or combined to produce reconstituted 3D skin constructs have been developed. Furthermore, these models have been included in European genotoxicity and sensitization/irritation assay validation projects. In order to help interpret data, Cosmetics Europe (formerly COLIPA) facilitated research projects that measured a variety of defined phase I and II enzyme activities and created a complete proteomic profile of xenobiotic metabolizing enzymes (XMEs) in native human skin and compared them with data obtained from a number of in vitro models of human skin. Here, we have summarized our findings on the current knowledge of the metabolic capacity of native human skin and in vitro models and made an overall assessment of the metabolic capacity from gene expression, proteomic expression, and substrate metabolism data. The known low expression and function of phase I enzymes in native whole skin were reflected in the in vitro models. Some XMEs in whole skin were not detected in in vitro models and vice versa, and some major hepatic XMEs such as cytochrome P450-monooxygenases were absent or measured only at very low levels in the skin. Conversely, despite varying mRNA and protein levels of phase II enzymes, functional activity of glutathione S-transferases, N-acetyltransferase 1, and UDP-glucuronosyltransferases were all readily measurable in whole skin and in vitro skin models at activity levels similar to those measured in the liver. These projects have enabled a better understanding of the contribution of XMEs to toxicity endpoints. PMID:23539547

  1. Toward Accurate Modeling of the Effect of Ion-Pair Formation on Solute Redox Potential.

    PubMed

    Qu, Xiaohui; Persson, Kristin A

    2016-09-13

    A scheme to model the dependence of a solute redox potential on the supporting electrolyte is proposed, and the results are compared to experimental observations and other reported theoretical models. An improved agreement with experiment is exhibited if the effect of the supporting electrolyte on the redox potential is modeled through a concentration change induced via ion pair formation with the salt, rather than by only considering the direct impact on the redox potential of the solute itself. To exemplify the approach, the scheme is applied to the concentration-dependent redox potential of select molecules proposed for nonaqueous flow batteries. However, the methodology is general and enables rational computational electrolyte design through tuning of the operating window of electrochemical systems by shifting the redox potential of its solutes, potentially including both salts and redox-active molecules. PMID:27500744
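
    One hedged way to express the concentration-change mechanism described above is a 1:1 ion-pairing equilibrium that reduces the free-solute concentration, which then enters a Nernstian logarithmic term; the association constant and concentrations below are placeholders, and the actual model in the paper is more involved.

      # Illustrative potential shift when 1:1 ion pairing with the supporting salt
      # lowers the free-solute concentration (all numbers are placeholders).
      import math

      R, T, F = 8.314, 298.15, 96485.0

      def nernst_shift(c_total, c_salt, k_assoc, n=1):
          c_free = c_total / (1.0 + k_assoc * c_salt)              # 1:1 pairing, salt in excess
          return (R * T / (n * F)) * math.log(c_free / c_total)    # volts

      print(f"shift ~ {1e3 * nernst_shift(0.01, 1.0, k_assoc=5.0):.1f} mV")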

  2. Geo-accurate model extraction from three-dimensional image-derived point clouds

    NASA Astrophysics Data System (ADS)

    Nilosek, David; Sun, Shaohui; Salvaggio, Carl

    2012-06-01

    A methodology is proposed for automatically extracting primitive models of buildings in a scene from a three-dimensional point cloud derived from multi-view depth extraction techniques. By exploring the information provided by the two-dimensional images and the three-dimensional point cloud and the relationship between the two, automated methods for extraction are presented. Using the inertial measurement unit (IMU) and global positioning system (GPS) data that accompanies the aerial imagery, the geometry is derived in a world-coordinate system so the model can be used with GIS software. This work uses imagery collected by the Rochester Institute of Technology's Digital Imaging and Remote Sensing Laboratory's WASP sensor platform. The data used was collected over downtown Rochester, New York. Multiple target buildings have their primitive three-dimensional model geometry extracted using modern point-cloud processing techniques.

  3. Lateral impact validation of a geometrically accurate full body finite element model for blunt injury prediction.

    PubMed

    Vavalle, Nicholas A; Moreno, Daniel P; Rhyne, Ashley C; Stitzel, Joel D; Gayzik, F Scott

    2013-03-01

    This study presents four validation cases of a mid-sized male (M50) full human body finite element model-two lateral sled tests at 6.7 m/s, one sled test at 8.9 m/s, and a lateral drop test. Model results were compared to transient force curves, peak force, chest compression, and number of fractures from the studies. For one of the 6.7 m/s impacts (flat wall impact), the peak thoracic, abdominal and pelvic loads were 8.7, 3.1 and 14.9 kN for the model and 5.2 ± 1.1 kN, 3.1 ± 1.1 kN, and 6.3 ± 2.3 kN for the tests. For the same test setup in the 8.9 m/s case, they were 12.6, 6, and 21.9 kN for the model and 9.1 ± 1.5 kN, 4.9 ± 1.1 kN, and 17.4 ± 6.8 kN for the experiments. The combined torso load and the pelvis load simulated in a second rigid wall impact at 6.7 m/s were 11.4 and 15.6 kN, respectively, compared to 8.5 ± 0.2 kN and 8.3 ± 1.8 kN experimentally. The peak thorax load in the drop test was 6.7 kN for the model, within the range in the cadavers, 5.8-7.4 kN. When analyzing rib fractures, the model predicted Abbreviated Injury Scale scores within the reported range in three of four cases. Objective comparison methods were used to quantitatively compare the model results to the literature studies. The results show a good match in the thorax and abdomen regions while the pelvis results over predicted the reaction loads from the literature studies. These results are an important milestone in the development and validation of this globally developed average male FEA model in lateral impact.

  4. Rapid Bayesian point source inversion using pattern recognition --- bridging the gap between regional scaling relations and accurate physical modelling

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Kaeufl, P.; De Wit, R. W. L.; Trampert, J.

    2014-12-01

    Obtaining knowledge about source parameters in (near) real-time during or shortly after an earthquake is essential for mitigating damage and directing resources in the aftermath of the event. Therefore, a variety of real-time source-inversion algorithms have been developed over recent decades. This has been driven by the ever-growing availability of dense seismograph networks in many seismogenic areas of the world and the significant advances in real-time telemetry. By definition, these algorithms rely on short time-windows of sparse, local and regional observations, resulting in source estimates that are highly sensitive to observational errors, noise and missing data. In order to obtain estimates more rapidly, many algorithms are either entirely based on empirical scaling relations or make simplifying assumptions about the Earth's structure, which can in turn lead to biased results. It is therefore essential that realistic uncertainty bounds are estimated along with the parameters. A natural means of propagating probabilistic information on source parameters through the entire processing chain from first observations to potential end users and decision makers is provided by the Bayesian formalism. We present a novel method based on pattern recognition allowing us to incorporate highly accurate physical modelling into an uncertainty-aware real-time inversion algorithm. The algorithm is based on a pre-computed Green's functions database, containing a large set of source-receiver paths in a highly heterogeneous crustal model. Unlike similar methods, which often employ a grid search, we use a supervised learning algorithm to relate synthetic waveforms to point source parameters. This training procedure has to be performed only once and leads to a representation of the posterior probability density function p(m|d) --- the distribution of source parameters m given observations d --- which can be evaluated quickly for new data. Owing to the flexibility of the pattern

  5. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
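
    A minimal sketch of the general idea, numerically propagating an STM by integrating the variational equations alongside the state, is given below for plain two-body dynamics using SciPy's DOP853 (an eighth-order Dormand-Prince method). The high-fidelity force, hardware and ephemeris models of the paper are not reproduced, and the orbit numbers are illustrative.

      # Propagate the state transition matrix by integrating d(Phi)/dt = A(x) Phi
      # together with the state, using SciPy's DOP853 (8th-order Dormand-Prince).
      # Two-body gravity only; perturbations and hardware models are omitted.
      import numpy as np
      from scipy.integrate import solve_ivp

      MU = 398600.4418   # km^3/s^2, Earth

      def dynamics(t, y):
          r, v, phi = y[:3], y[3:6], y[6:].reshape(6, 6)
          rn = np.linalg.norm(r)
          a = -MU * r / rn**3
          G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)   # d(a)/d(r)
          A = np.block([[np.zeros((3, 3)), np.eye(3)],
                        [G, np.zeros((3, 3))]])
          return np.concatenate([v, a, (A @ phi).ravel()])

      x0 = np.array([7000.0, 0.0, 0.0, 0.0, 7.546, 0.0])   # km, km/s (near-circular LEO)
      y0 = np.concatenate([x0, np.eye(6).ravel()])
      sol = solve_ivp(dynamics, (0.0, 3600.0), y0, method='DOP853', rtol=1e-10, atol=1e-12)
      phi_final = sol.y[6:, -1].reshape(6, 6)
      print("STM(1 hr) diagonal:", np.round(np.diag(phi_final), 3))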

  6. Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2013-01-01

    The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
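
    A minimal sketch of the POD step, building a basis from snapshots via the singular value decomposition and truncating to the leading modes, is shown below; the snapshot matrix is synthetic, not the CFD surface-pressure data described in the paper.

      # Proper orthogonal decomposition (POD) of a snapshot matrix via the SVD,
      # truncated to the leading r modes. Synthetic data stands in for the
      # instantaneous pressure-coefficient snapshots described in the abstract.
      import numpy as np

      rng = np.random.default_rng(2)
      n_points, n_snap, r = 2000, 400, 10
      modes_true = rng.normal(size=(n_points, 5))          # a few coherent spatial modes
      snapshots = (modes_true @ rng.normal(size=(5, n_snap))
                   + 0.01 * rng.normal(size=(n_points, n_snap)))

      mean_field = snapshots.mean(axis=1, keepdims=True)
      U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
      pod_basis = U[:, :r]                                 # leading r POD modes
      energy = np.cumsum(s**2) / np.sum(s**2)
      print(f"energy captured by {r} modes: {energy[r - 1]:.4f}")

      # Reduced-order reconstruction of one snapshot from its modal coefficients.
      coeffs = pod_basis.T @ (snapshots[:, [0]] - mean_field)
      recon = mean_field + pod_basis @ coeffs
      print("reconstruction error:", float(np.linalg.norm(recon - snapshots[:, [0]])))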

  7. The Effects of Video Modeling with Voiceover Instruction on Accurate Implementation of Discrete-Trial Instruction

    ERIC Educational Resources Information Center

    Vladescu, Jason C.; Carroll, Regina; Paden, Amber; Kodak, Tiffany M.

    2012-01-01

    The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The…

  8. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions.

    PubMed

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
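
    The Laguerre-expansion machinery of the paper is not reproduced here; as a baseline only, the snippet below fits a plain linear ARMA model to synthetic data with statsmodels, which is the kind of linear description the proposed nonlinear ARMA approach extends.

      # Baseline linear ARMA fit on synthetic data; the Laguerre kernel expansion
      # and nonlinear terms described in the paper are not included.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(3)
      e = rng.normal(size=600)
      y = np.zeros(600)
      for t in range(2, 600):                    # ARMA(2,1) process with known coefficients
          y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + e[t] + 0.4 * e[t - 1]

      fit = ARIMA(y, order=(2, 0, 1)).fit()      # estimate AR(2) and MA(1) parameters
      print(np.round(fit.params, 2))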

  9. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    SciTech Connect

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  10. Statistically accurate low-order models for uncertainty quantification in turbulent dynamical systems

    PubMed Central

    Sapsis, Themistoklis P.; Majda, Andrew J.

    2013-01-01

    A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra. PMID:23918398
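
    For reference, the 40-mode Lorenz 96 system used as the first test case is defined by dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F with cyclic indices; a short integration with the usual forcing F = 8 is sketched below, although the ROMQG closure itself is not implemented here.

      # The 40-variable Lorenz 96 model used as a test case in the paper.
      # Only the full forward model is integrated; the ROMQG reduced-order
      # statistical closure is not implemented in this sketch.
      import numpy as np
      from scipy.integrate import solve_ivp

      N, F = 40, 8.0

      def lorenz96(t, x):
          return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

      x0 = F * np.ones(N)
      x0[0] += 0.01                          # small perturbation to trigger chaos
      sol = solve_ivp(lorenz96, (0.0, 20.0), x0, method='RK45', max_step=0.01)
      print("long-time mean of x_0:", float(sol.y[0, sol.t > 10].mean()))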

  11. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    PubMed Central

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  12. Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?

    NASA Astrophysics Data System (ADS)

    Ramarohetra, J.; Sultan, B.

    2012-04-01

    Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the sudano-sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and supports 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analysis to quantify the impact on yields of implementing different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts - (ii) for early warning systems, and (iii) to assess future food security. Yet, the successful application of these models depends on the accuracy of their climatic drivers. In the sudano-sahelian zone, the quality of precipitation estimates is therefore a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements offer long time series but suffer from insufficient network density, a large proportion of missing values, delays in reporting, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as an input to crop models, they determine the performance of the simulated yields; hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and

  13. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    Purpose: Significant dosimetric benefits have previously been demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the measured distances with the corresponding model values. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  14. Fast and accurate low-dimensional reduction of biophysically detailed neuron models.

    PubMed

    Marasco, Addolorata; Limongiello, Alessandro; Migliore, Michele

    2012-01-01

    Realistic modeling of neurons is quite successful in complementing traditional experimental techniques. However, their networks require a computational power beyond the capabilities of current supercomputers, and the methods used so far to reduce their complexity do not take into account the key features of the cells or their critical physiological properties. Here we introduce a new, automatic and fast method to map realistic neurons into equivalent reduced models running up to > 40 times faster while maintaining a very high accuracy of the membrane potential dynamics during synaptic inputs, and a direct link with experimental observables. The mapping of arbitrary sets of synaptic inputs, without additional fine tuning, would also allow the convenient and efficient implementation of a new generation of large-scale simulations of brain regions reproducing the biological variability observed in real neurons, with unprecedented advances to understand higher brain functions. PMID:23226594

  15. Fast and accurate low-dimensional reduction of biophysically detailed neuron models.

    PubMed

    Marasco, Addolorata; Limongiello, Alessandro; Migliore, Michele

    2012-01-01

    Realistic modeling of neurons is quite successful in complementing traditional experimental techniques. However, their networks require a computational power beyond the capabilities of current supercomputers, and the methods used so far to reduce their complexity do not take into account the key features of the cells or their critical physiological properties. Here we introduce a new, automatic and fast method to map realistic neurons into equivalent reduced models running up to > 40 times faster while maintaining a very high accuracy of the membrane potential dynamics during synaptic inputs, and a direct link with experimental observables. The mapping of arbitrary sets of synaptic inputs, without additional fine tuning, would also allow the convenient and efficient implementation of a new generation of large-scale simulations of brain regions reproducing the biological variability observed in real neurons, with unprecedented advances to understand higher brain functions.

  16. An accurate two-phase approximate solution to the acute viral infection model

    SciTech Connect

    Perelson, Alan S

    2009-01-01

    During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate its accuracy using data and previously established parameter values for six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase, and an alternative approximation is derived. We find expressions for the rate and duration of initial viral growth in terms of the model parameters, the extent to which each parameter influences the viral peak, and the single parameter responsible for virus decay. We discuss applications of this analysis to antiviral treatments and to investigating host and virus heterogeneities.
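
    For readers who want to reproduce the qualitative behaviour, the target cell limited model referred to above is commonly written as a three-compartment system of ODEs for target cells T, infected cells I and virus V. The sketch below integrates that standard form with purely illustrative parameter values (not the patient estimates used in the study) and shows that log V rises and falls nearly linearly, which is what motivates the two-phase approximation.

      # Standard target-cell-limited model:
      #   dT/dt = -beta*T*V,  dI/dt = beta*T*V - delta*I,  dV/dt = p*I - c*V
      # Parameter values below are illustrative only.
      import numpy as np
      from scipy.integrate import solve_ivp

      beta, delta, p, c = 2.7e-5, 4.0, 1.2e-2, 3.0

      def rhs(t, y):
          T, I, V = y
          return [-beta * T * V, beta * T * V - delta * I, p * I - c * V]

      sol = solve_ivp(rhs, (0.0, 12.0), [4e8, 0.0, 7.5e-2],
                      t_eval=np.linspace(0.0, 12.0, 121), rtol=1e-8)
      logV = np.log10(np.clip(sol.y[2], 1e-12, None))
      print("peak log10(V) =", round(logV.max(), 2), "on day", sol.t[logV.argmax()])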

  17. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    PubMed Central

    Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.

    2015-01-01

    Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction, and they cannot be used to advance the intake of drugs early enough to be effective in neutralizing the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and the robustness against noise and sensor failures of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
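
    Once a per-patient state-space model has been identified (for example by an N4SID-type subspace method), forecasting amounts to propagating the identified matrices forward over the desired horizon. The sketch below illustrates only this forward-prediction step with hypothetical matrices; it is not the identification procedure or the models fitted in the study.

      # Forward prediction with an identified linear state-space model.
      # A, B, C are hypothetical placeholders for matrices an N4SID-type
      # method would return.
      import numpy as np

      A = np.array([[0.95, 0.10], [0.0, 0.90]])   # state transition
      B = np.array([[0.05], [0.02]])              # input (hemodynamic driver)
      C = np.array([[1.0, 0.0]])                  # state-to-output map

      def forecast(x0, u_future):
          """Propagate the model ahead and return predicted outputs."""
          x, y_pred = x0.copy(), []
          for u in u_future:
              x = A @ x + B @ np.atleast_1d(u)
              y_pred.append((C @ x).item())
          return np.array(y_pred)

      y_hat = forecast(np.zeros(2), np.ones(47))   # e.g., a 47-minute horizon
      print(y_hat[:5])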

  18. Simulating Dissolution of Intravitreal Triamcinolone Acetonide Suspensions in an Anatomically Accurate Rabbit Eye Model

    PubMed Central

    Horner, Marc; Muralikrishnan, R.

    2010-01-01

    ABSTRACT Purpose A computational fluid dynamics (CFD) study examined the impact of particle size on dissolution rate and residence of intravitreal suspension depots of Triamcinolone Acetonide (TAC). Methods A model for the rabbit eye was constructed using insights from high-resolution NMR imaging studies (Sawada 2002). The current model was compared to other published simulations in its ability to predict clearance of various intravitreally injected materials. Suspension depots were constructed explicitly rendering individual particles in various configurations: 4 or 16 mg drug confined to a 100 μL spherical depot, or 4 mg exploded to fill the entire vitreous. Particle size was reduced systematically in each configuration. The convective diffusion/dissolution process was simulated using a multiphase model. Results Release rate became independent of particle diameter below a certain value. The size-independent limits occurred for particle diameters ranging from 77 to 428 μm depending upon the depot configuration. Residence time predicted for the spherical depots in the size-independent limit was comparable to that observed in vivo. Conclusions Since the size-independent limit was several-fold greater than the particle size of commercially available pharmaceutical TAC suspensions, differences in particle size amongst such products are predicted to be immaterial to their duration or performance. PMID:20467888

  19. Mathematical model accurately predicts protein release from an affinity-based delivery system.

    PubMed

    Vulic, Katarina; Pakulska, Malgosia M; Sonthalia, Rohit; Ramachandran, Arun; Shoichet, Molly S

    2015-01-10

    Affinity-based controlled release modulates the delivery of protein or small molecule therapeutics through transient dissociation/association. To understand which parameters can be used to tune release, we used a mathematical model based on simple binding kinetics. A comprehensive asymptotic analysis revealed three characteristic regimes for therapeutic release from affinity-based systems. These regimes can be controlled by diffusion or unbinding kinetics, and can exhibit release over either a single stage or two stages. This analysis fundamentally changes the way we think of controlling release from affinity-based systems and thereby explains some of the discrepancies in the literature on which parameters influence affinity-based release. The rate of protein release from affinity-based systems is determined by the balance of diffusion of the therapeutic agent through the hydrogel and the dissociation kinetics of the affinity pair. Equations for tuning protein release rate by altering the strength (KD) of the affinity interaction, the concentration of binding ligand in the system, the rate of dissociation (koff) of the complex, and the hydrogel size and geometry, are provided. We validated our model by collapsing the model simulations and the experimental data from a recently described affinity release system, to a single master curve. Importantly, this mathematical analysis can be applied to any single species affinity-based system to determine the parameters required for a desired release profile. PMID:25449806
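
    The competition described above between diffusion through the hydrogel and unbinding from the affinity pair can be summarized by a generic reaction-diffusion system; the form below is a common textbook statement of such a system, given here for orientation, and is not necessarily identical to the authors' equations.

      \[
      \frac{\partial C}{\partial t} = D\,\nabla^{2}C - k_{\mathrm{on}}\,C\,L + k_{\mathrm{off}}\,B,
      \qquad
      \frac{\partial B}{\partial t} = k_{\mathrm{on}}\,C\,L - k_{\mathrm{off}}\,B,
      \qquad
      K_{D} = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}},
      \]

    where C is the free therapeutic, B the bound complex, L the free binding ligand and D the diffusivity in the hydrogel. The release regimes discussed in the abstract correspond to different relative magnitudes of the diffusion time scale and 1/k_off.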

  20. Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics

    NASA Astrophysics Data System (ADS)

    Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.

    2014-12-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources, so that precise power forecasts, well-timed energy trading on the stock market, and electrical grid stability can be maintained. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, they aim to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP models with enhanced power forecast models. The work conducted here focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of the model cloud characteristics, but also special events such as Saharan dust over Germany and the solar eclipse in 2015 are treated and their effect on solar power accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.

  1. SPAR Model Structural Efficiencies

    SciTech Connect

    John Schroeder; Dan Henry

    2013-04-01

    The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC’s Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: • Development of a standard methodology and implementation of support system initiating events • Treatment of loss of offsite power • Development of a standard approach for emergency core cooling following containment failure Some of the related issues were not fully resolved. This project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other higher priority initiatives to support. Therefore this project has addressed SPAR modeling issues. The issues addressed are • SPAR model transparency • Common cause failure modeling deficiencies and approaches • AC and DC modeling deficiencies and approaches • Instrumentation and control system modeling deficiencies and approaches

  2. Effects of the inlet conditions and blood models on accurate prediction of hemodynamics in the stented coronary arteries

    NASA Astrophysics Data System (ADS)

    Jiang, Yongfei; Zhang, Jun; Zhao, Wanhua

    2015-05-01

    Hemodynamics altered by stent implantation is well known to be closely related to in-stent restenosis. Computational fluid dynamics (CFD) methods have been used to investigate the hemodynamics in stented arteries in detail and to help analyze the performance of stents. In this study, the hemodynamics predicted by blood models with Newtonian or non-Newtonian properties under steady or pulsatile inlet conditions were numerically investigated using CFD based on the finite volume method. The results showed that the blood model with non-Newtonian properties decreased the area of low wall shear stress (WSS) compared with the Newtonian blood model, and that the magnitude of WSS varied with the magnitude and waveform of the inlet velocity. The study indicates that the inlet conditions and the blood model are both important for accurately predicting the hemodynamics. This will be beneficial for estimating the performance of stents and will also help clinicians to select the proper stents for their patients.
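
    The abstract does not state which non-Newtonian constitutive law was used; a common choice for blood in stented-artery CFD is the Carreau model, shown below with literature-typical parameters as an illustration of how apparent viscosity falls with shear rate.

      # Carreau model for the shear-thinning viscosity of blood.
      # Parameters are commonly cited literature values, used here only
      # as an example of a non-Newtonian blood model.
      import numpy as np

      def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345,
                            lam=3.313, n=0.3568):
          """Apparent viscosity (Pa*s) as a function of shear rate (1/s)."""
          return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

      for g in (0.1, 1.0, 10.0, 100.0, 1000.0):
          print(f"shear rate {g:7.1f} 1/s -> mu = {carreau_viscosity(g):.5f} Pa*s")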

  3. A Simple Iterative Model Accurately Captures Complex Trapline Formation by Bumblebees Across Spatial Scales and Flower Arrangements

    PubMed Central

    Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars

    2013-01-01

    Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353

  4. A simple iterative model accurately captures complex trapline formation by bumblebees across spatial scales and flower arrangements.

    PubMed

    Reynolds, Andrew M; Lihoreau, Mathieu; Chittka, Lars

    2013-01-01

    Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments.
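
    The flavour of an iterative improvement heuristic of this kind can be conveyed with a toy simulation: a forager samples visit orders probabilistically and reinforces the transitions of the shortest route found so far. The sketch below is only a caricature of the general idea, with hypothetical flower positions and reinforcement rules, not the calibrated model analysed in these papers.

      # Toy iterative-improvement trapline heuristic (illustrative only).
      import numpy as np

      rng = np.random.default_rng(1)
      flowers = rng.uniform(0, 100, size=(6, 2))       # hypothetical flower layout
      n = len(flowers)
      dist = np.linalg.norm(flowers[:, None] - flowers[None, :], axis=-1)
      pref = np.ones((n, n))                           # transition preferences

      def sample_route():
          route, remaining = [0], set(range(1, n))     # start from flower 0 (nest)
          while remaining:
              cur, cand = route[-1], list(remaining)
              w = pref[cur, cand] / dist[cur, cand]    # prefer near, reinforced moves
              nxt = int(rng.choice(cand, p=w / w.sum()))
              route.append(nxt)
              remaining.discard(nxt)
          return route

      def length(route):
          legs = zip(route, route[1:] + route[:1])     # close the loop
          return sum(dist[i, j] for i, j in legs)

      best, best_len = None, np.inf
      for bout in range(60):                           # foraging bouts
          r = sample_route()
          if length(r) < best_len:                     # reinforce the improved route
              best, best_len = r, length(r)
              for i, j in zip(r, r[1:] + r[:1]):
                  pref[i, j] *= 1.5
      print("best route:", best, "length:", round(best_len, 1))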

  5. Development of a Godunov-type model for the accurate simulation of dispersion dominated waves

    NASA Astrophysics Data System (ADS)

    Bradford, Scott F.

    2016-10-01

    A new numerical model based on the Navier-Stokes equations is presented for the simulation of dispersion dominated waves. The equations are solved by splitting the pressure into hydrostatic and non-hydrostatic components. The Godunov approach is utilized to solve the hydrostatic flow equations and the resulting velocity field is then corrected to be divergence free. Alternative techniques for the time integration of the non-hydrostatic pressure gradients are presented and investigated in order to improve the accuracy of dispersion dominated wave simulations. Numerical predictions are compared with analytical solutions and experimental data for test cases involving standing, shoaling, refracting, and breaking waves.
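
    The pressure splitting and divergence-free correction described above is typically realized as a fractional-step (projection) scheme; a generic form, given here for context and not necessarily matching the paper's exact discretization, is

      \[
      \mathbf{u}^{*} = \mathbf{u}^{n} + \Delta t\,\mathcal{A}_{\mathrm{hyd}}\!\left(\mathbf{u}^{n}\right),
      \qquad
      \nabla^{2} q^{\,n+1} = \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^{*},
      \qquad
      \mathbf{u}^{n+1} = \mathbf{u}^{*} - \frac{\Delta t}{\rho}\,\nabla q^{\,n+1},
      \]

    where the hydrostatic terms collected in A_hyd are advanced with the Godunov solver and q is the non-hydrostatic pressure whose gradient projects the provisional velocity u* onto a divergence-free field.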

  6. Considering mask pellicle effect for more accurate OPC model at 45nm technology node

    NASA Astrophysics Data System (ADS)

    Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo

    2008-11-01

    The 45nm technology node is the first generation to use immersion micro-lithography, and this new lithography tooling means that many optical effects which could be ignored at the 90nm and 65nm nodes now have a significant impact on the pattern transfer process from design to silicon. Among these effects, one that requires attention is the impact of the mask pellicle on critical dimension variation. With hyper-NA lithography tools, the assumption that light passes through the mask pellicle vertically is no longer a good approximation, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle affects the accuracy of the OPC model. We show that, given the extremely tight critical dimension control specification of the 45nm node, it has become necessary to incorporate the mask pellicle effect into the OPC model.

  7. Faster and more accurate graphical model identification of tandem mass spectra using trellises

    PubMed Central

    Wang, Shengjie; Halloran, John T.; Bilmes, Jeff A.; Noble, William S.

    2016-01-01

    Tandem mass spectrometry (MS/MS) is the dominant high throughput technology for identifying and quantifying proteins in complex biological samples. Analysis of the tens of thousands of fragmentation spectra produced by an MS/MS experiment begins by assigning to each observed spectrum the peptide that is hypothesized to be responsible for generating the spectrum. This assignment is typically done by searching each spectrum against a database of peptides. To our knowledge, all existing MS/MS search engines compute scores individually between a given observed spectrum and each possible candidate peptide from the database. In this work, we use a trellis, a data structure capable of jointly representing a large set of candidate peptides, to avoid redundantly recomputing common sub-computations among different candidates. We show how trellises may be used to significantly speed up existing scoring algorithms, and we theoretically quantify the expected speedup afforded by trellises. Furthermore, we demonstrate that compact trellis representations of whole sets of peptides enable efficient discriminative learning of a dynamic Bayesian network for spectrum identification, leading to greatly improved spectrum identification accuracy. Contact: bilmes@uw.edu or william-noble@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307634

  8. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    PubMed Central

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  9. Accurate programmable electrocardiogram generator using a dynamical model implemented on a microcontroller

    NASA Astrophysics Data System (ADS)

    Chien Chang, Jia-Ren; Tai, Cheng-Chi

    2006-07-01

    This article reports on the design and development of a complete, programmable electrocardiogram (ECG) generator, which can be used for the testing, calibration and maintenance of electrocardiograph equipment. A modified mathematical model, developed from the three coupled ordinary differential equations of McSharry et al. [IEEE Trans. Biomed. Eng. 50, 289, (2003)], was used to locate precisely the positions of the onset, termination, angle, and duration of individual components in an ECG. Generator facilities are provided so the user can adjust the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave settings. The heart rate can be adjusted in increments of 1 BPM (beats per minute), from 20 to 176 BPM, while the amplitude of the ECG signal can be set from 0.1 to 400 mV with a 0.1 mV resolution. Experimental results show that the proposed concept and the resulting system are feasible.
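
    The McSharry et al. model cited above consists of three coupled ODEs driving a trajectory around a limit cycle, with Gaussian "events" generating the P, Q, R, S and T waves. The sketch below integrates the model with commonly quoted typical parameters (and without baseline wander), purely as an illustration of the equations rather than the settings of the hardware generator.

      # McSharry et al. (2003) dynamical ECG model, simplified (no baseline wander).
      import numpy as np
      from scipy.integrate import solve_ivp

      theta_i = np.array([-np.pi/3, -np.pi/12, 0.0, np.pi/12, np.pi/2])  # P,Q,R,S,T
      a_i = np.array([1.2, -5.0, 30.0, -7.5, 0.75])
      b_i = np.array([0.25, 0.1, 0.1, 0.1, 0.4])
      omega = 2.0 * np.pi * 60.0 / 60.0          # angular rate for 60 BPM

      def ecg_rhs(t, s):
          x, y, z = s
          alpha = 1.0 - np.sqrt(x * x + y * y)
          dtheta = np.mod(np.arctan2(y, x) - theta_i + np.pi, 2.0 * np.pi) - np.pi
          dz = -np.sum(a_i * dtheta * np.exp(-dtheta**2 / (2.0 * b_i**2))) - z
          return [alpha * x - omega * y, alpha * y + omega * x, dz]

      sol = solve_ivp(ecg_rhs, (0.0, 5.0), [1.0, 0.0, 0.0],
                      t_eval=np.linspace(0.0, 5.0, 2500), rtol=1e-8)
      print("synthetic ECG range (a.u.):",
            round(sol.y[2].min(), 2), "to", round(sol.y[2].max(), 2))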

  10. Under proper control, oxidation of proteins with known chemical structure provides an accurate and absolute method for the determination of their molar concentration.

    PubMed

    Guermant, C; Azarkan, M; Smolders, N; Baeyens-Volant, D; Nijs, M; Paul, C; Brygier, J; Vincentelli, J; Looze, Y

    2000-01-01

    Oxidation at 120 degrees C of inorganic and organic (including amino acids, di- and tripeptides) model compounds by K(2)Cr(2)O(7) in the presence of H(2)SO(4) (mass fraction: 0.572), Ag(2)SO(4) (catalyst), and HgSO(4) results in the quantitative conversion of their C-atoms into CO(2) within 24 h or less. Under these stressed, well-defined conditions, the S-atoms present in cysteine and cystine residues are oxidized into SO(3) while, interestingly, the oxidation states of all the other (including the N-) atoms normally present in a protein remain essentially unchanged. When the chemical structure of a given protein is available, the total number of electrons the protein is able to transfer to K(2)Cr(2)O(7), and hence the total number of moles of Cr(3+) ions the protein generates upon oxidation, can be accurately calculated. In such cases, unknown protein molar concentrations can thus be determined through straightforward spectrophotometric measurements of Cr(3+) concentrations. The values of molar absorption coefficients for several well-characterized proteins have been redetermined on this basis and observed to be in excellent agreement with the most precise values reported in the literature, which fully establishes the validity of the method. When applied to highly purified proteins of known chemical structure (more generally of known atomic composition), this method is absolute and accurate (+/-1%). Furthermore, it is well adapted to series measurements since available commercial kits for chemical oxygen demand (COD) measurements can readily be adapted to work under the experimental conditions recommended here for the protein assay. PMID:10610688

  11. Fold assessment for comparative protein structure modeling.

    PubMed

    Melo, Francisco; Sali, Andrej

    2007-11-01

    Accurate and automated assessment of both geometrical errors and incompleteness of comparative protein structure models is necessary for an adequate use of the models. Here, we describe a composite score for discriminating between models with the correct and incorrect fold. To find an accurate composite score, we designed and applied a genetic algorithm method that searched for a most informative subset of 21 input model features as well as their optimized nonlinear transformation into the composite score. The 21 input features included various statistical potential scores, stereochemistry quality descriptors, sequence alignment scores, geometrical descriptors, and measures of protein packing. The optimized composite score was found to depend on (1) a statistical potential z-score for residue accessibilities and distances, (2) model compactness, and (3) percentage sequence identity of the alignment used to build the model. The accuracy of the composite score was compared with the accuracy of assessment by single and combined features as well as by other commonly used assessment methods. The testing set was representative of models produced by automated comparative modeling on a genomic scale. The composite score performed better than any other tested score in terms of the maximum correct classification rate (i.e., 3.3% false positives and 2.5% false negatives) as well as the sensitivity and specificity across the whole range of thresholds. The composite score was implemented in our program MODELLER-8 and was used to assess models in the MODBASE database that contains comparative models for domains in approximately 1.3 million protein sequences.

  12. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    NASA Astrophysics Data System (ADS)

    Tao, Jianmin; Rappe, Andrew M.

    2016-01-01

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.

  13. Accurate analytical modelling of cosmic ray induced failure rates of power semiconductor devices

    NASA Astrophysics Data System (ADS)

    Bauer, Friedhelm D.

    2009-06-01

    A new, simple and efficient approach is presented for estimating the cosmic ray induced failure rate of high voltage silicon power devices early in the design phase. This allows combining common design issues such as device losses and safe operating area with the constraints imposed by reliability, resulting in a better and overall more efficient design methodology. Starting from an experimental and theoretical background established a few years ago [Kabza H et al. Cosmic radiation as a cause for power device failure and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 9-12, Zeller HR. Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 339-40, and Matsuda H et al. Analysis of GTO failure mode during d.c. blocking. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 221-5], an exact solution of the failure rate integral is derived and presented in a form which lends itself to be combined with the results available from commercial semiconductor simulation tools. Hence, failure rate integrals can be obtained with relative ease for realistic two- and even three-dimensional semiconductor geometries. Two case studies relating to IGBT cell design and planar junction termination layout demonstrate the purpose of the method.

  14. Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.

    PubMed

    Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M

    2016-08-01

    Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information (higher-order time correlations than are captured by MSMs) that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
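
    The baseline against which MSM estimates are compared is the direct, model-free first-passage count over the raw trajectory. The sketch below shows that direct count for a discretized trajectory, using one simple convention for the start of each passage; it is not the authors' full history-augmented estimator.

      # Direct MFPT estimate from a labeled trajectory
      # (0 = unfolded, 1 = folded, -1 = neither); illustrative only.
      import numpy as np

      def mfpt_direct(labels, dt, source=0, target=1):
          """Mean time from first entering `source` (after `target`) to reaching `target`."""
          passages, t_enter = [], None
          for k, s in enumerate(labels):
              if s == source and t_enter is None:
                  t_enter = k
              elif s == target and t_enter is not None:
                  passages.append(k - t_enter)
                  t_enter = None
          return np.mean(passages) * dt if passages else np.nan

      rng = np.random.default_rng(2)
      traj = rng.choice([0, -1, 1], size=200000, p=[0.45, 0.45, 0.10])  # toy labels
      print("MFPT (ns):", round(mfpt_direct(traj, dt=0.2), 2))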

  15. Plastic hinge modeling of structures

    NASA Astrophysics Data System (ADS)

    Maruthayappan, Ramakrishnan

    1994-07-01

    The rapid changes taking place in vehicle design have resulted in a great variety of vehicles having different structural configurations. The behavior of such structures under impact or crash conditions demands an efficient modeling of the system. Since a vehicle crash is a dynamic phenomenon exhibiting complex interaction between structural and inertial forces, the model should be able to predict the transient deformations which range from small strain elastic deformations to large strain plastic deformations. Among various formulations and modeling techniques available, the plastic hinge theory offers a simple, realistic and computationally efficient model. This thesis explores and applies the plastic hinge modeling technique to some simple structures ranging from an elementary cantilever beam to a torque box representing a passenger compartment. The nonlinear static and dynamic behavior of a general aviation seat model for different load cases has been predicted using this theory. A detailed study and application of nonlinear finite element techniques has also been performed. A comparative study of the responses yielded by the plastic hinge model and the finite element model demonstrates the effectiveness and accuracy of the plastic hinge model in predicting the behavior of the structures reliably. This thesis also studies and predicts the dynamics of mechanical systems using plastic hinges modeled with flexible and rigid bodies with contact-impact and plastic deformation.

  16. Small pores in soils: Is the physico-chemical environment accurately reflected in biogeochemical models ?

    NASA Astrophysics Data System (ADS)

    Weber, Tobias K. D.; Riedel, Thomas

    2015-04-01

    Free water is a prerequisite for the chemical reactions and biological activity in Earth's upper crust that are essential to life. The void volume between the solid compounds provides space for water, air, and organisms that thrive on the consumption of minerals and organic matter, thereby regulating soil carbon turnover. However, not all water in the pore space in soils and sediments is in its liquid state. This is a result of the adhesive forces which reduce the water activity in small pores and at charged mineral surfaces. This water has a lower tendency to react chemically in solution, as this additional binding energy lowers its activity. In this work, we estimated the amount of soil pore water that is thermodynamically different from a simple aqueous solution. The quantity of soil pore water with properties different to liquid water was found to systematically increase with increasing clay content. The significance of this is that the grain size and surface area apparently affect the thermodynamic state of water. This implies that current methods to determine the water content, traditionally determined from bulk density or gravimetric water content after drying at 105°C, overestimate the amount of free water in a soil, especially at higher clay content. Our findings have consequences for biogeochemical processes in soils, e.g. nutrients may be contained in water which is not free, which could enhance preservation. From water activity measurements on a set of various soils with 0 to 100 wt-% clay, we can show that 5 to 130 mg H2O per g of soil can generally be considered as unsuitable for microbial respiration. These results may therefore provide a unifying explanation for the grain size dependency of organic matter preservation in sedimentary environments and call for a revised view on the biogeochemical environment in soils and sediments. This could allow a different type of process-oriented modelling.

  17. Material modeling and structural analysis with the microplane constitutive model

    NASA Astrophysics Data System (ADS)

    Brocca, Michele

    The microplane model is a versatile and powerful approach to constitutive modeling in which the stress-strain relations are defined in terms of vectors rather than tensors on planes of all possible orientations. Such planes are called the microplanes and are representative of the microstructure of the material. The microplane model with kinematic constraint has been successfully employed in the past in the modeling of concrete, soils, ice, rocks, fiber composites and other quasibrittle materials. The microplane model provides a powerful and efficient numerical and theoretical framework for the development and implementation of constitutive models for any kind of material. The dissertation presents a review of the background from which the microplane model stems, highlighting differences and similarities with other approaches. The basic structure of the microplane model is then presented, together with its extension to finite strain deformation. To show the effectiveness of the microplane model approach, some examples are given demonstrating applications of microplane models in structural analysis with the finite element method. Some new constitutive models are also introduced for materials characterized by very different properties and microstructures, showing that the approach is indeed very versatile and provides a robust basis for the study of a broad range of problems. New models are introduced for metal plasticity, shape memory alloys and cellular materials. The new models are compared quantitatively with the existing models and experimental data. In particular, the newly introduced microplane models for metal plasticity are compared with the classical J2-flow theory for incremental plasticity. An existing microplane model for concrete is employed in finite element analysis of the 'tube-squash' test, in which concrete undergoes very large deviatoric deformation, and of the size effect in compressive failure of concrete columns. The microplane model for shape

  18. Rolling mill optimization using an accurate and rapid new model for mill deflection and strip thickness profile

    NASA Astrophysics Data System (ADS)

    Malik, Arif Sultan

    This work presents improved technology for attaining high-quality rolled metal strip. The new technology is based on an innovative method to model both the static and dynamic characteristics of rolling mill deflection, and it applies equally to both cluster-type and non cluster-type rolling mill configurations. By effectively combining numerical Finite Element Analysis (FEA) with analytical solid mechanics, the devised approach delivers a rapid, accurate, flexible, high-fidelity model useful for optimizing many important rolling parameters. The associated static deflection model enables computation of the thickness profile and corresponding flatness of the rolled strip. Accurate methods of predicting the strip thickness profile and strip flatness are important in rolling mill design, rolling schedule set-up, control of mill flatness actuators, and optimization of ground roll profiles. The corresponding dynamic deflection model enables solution of the standard eigenvalue problem to determine natural frequencies and modes of vibration. The presented method for solving the roll-stack deflection problem offers several important advantages over traditional methods. In particular, it includes continuity of elastic foundations, non-iterative solution when using pre-determined elastic foundation moduli, continuous third-order displacement fields, simple stress-field determination, the ability to calculate dynamic characteristics, and a comparatively faster solution time. Consistent with the most advanced existing methods, the presented method accommodates loading conditions that represent roll crowning, roll bending, roll shifting, and roll crossing mechanisms. Validation of the static model is provided by comparing results and solution time with large-scale, commercial finite element simulations. In addition to examples with the common 4-high vertical stand rolling mill, application of the presented method to the most complex of rolling mill configurations is demonstrated
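
    The "continuity of elastic foundations" mentioned above refers to the classical beam-on-elastic-foundation idealization of roll-stack deflection. For orientation, the textbook governing equation of that idealization is

      \[
      E I \,\frac{\mathrm{d}^{4} w(x)}{\mathrm{d} x^{4}} + k(x)\, w(x) = q(x),
      \]

    where w is the roll deflection, EI the bending rigidity, k the foundation modulus representing the contact between adjacent rolls (or roll and strip), and q the distributed load; the method described above couples such foundations with finite element results rather than solving this equation in isolation.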

  19. Measuring and modelling the structure of chocolate

    NASA Astrophysics Data System (ADS)

    Le Révérend, Benjamin J. D.; Fryer, Peter J.; Smart, Ian; Bakalis, Serafim

    2015-01-01

    The cocoa butter present in chocolate exists as six different polymorphs. To achieve the desired crystal form (βV), traditional chocolate manufacturers use relatively slow cooling (<2°C/min). A newer generation of rapid cooling systems has been suggested, requiring further understanding of fat crystallisation. To allow better control and understanding of these processes and newer rapid cooling processes, it is necessary to understand both heat transfer and crystallisation kinetics. The proposed model aims to predict the temperature in the chocolate products during processing as well as the crystal structure of cocoa butter throughout the process. A set of ordinary differential equations describes the kinetics of fat crystallisation. The parameters were obtained by fitting the model to a set of DSC curves. The heat transfer equations were coupled to the kinetic model and solved using commercially available CFD software. A single-crystal XRD method employing a novel subtraction procedure was developed to quantify the cocoa butter structure in chocolate directly, and the results were compared with those predicted by the model. The model was shown to predict the phase change temperature during processing accurately (±1°C). Furthermore, it was possible to correctly predict phase changes and polymorphic transitions. The good agreement between the model and experimental data on the model geometry allows better design and control of industrial processes.

  20. Accurate prediction model of bead geometry in crimping butt of the laser brazing using generalized regression neural network

    NASA Astrophysics Data System (ADS)

    Rong, Y. M.; Chang, Y.; Huang, Y.; Zhang, G. J.; Shao, X. Y.

    2015-12-01

    Few studies have concentrated on predicting the bead geometry in laser brazing with a crimping butt joint. This paper addresses the accurate prediction of the bead profile by developing a generalized regression neural network (GRNN) algorithm. First, a GRNN model was developed and trained to reduce the prediction error that may be influenced by the sample size. The prediction accuracy was then demonstrated by comparison with published results and with a back-propagation artificial neural network (BPNN) algorithm. Finally, the reliability and stability of the GRNN model were assessed in terms of average relative error (ARE), mean square error (MSE) and root mean square error (RMSE); the maximum ARE and MSE were 6.94% and 0.0303, clearly lower than the corresponding BPNN values (14.28% and 0.0832). This shows that the prediction accuracy was improved by at least a factor of two, with a corresponding improvement in stability.
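
    A GRNN in the standard Specht formulation is a kernel-weighted average of the training targets, which is why only the smoothing factor needs tuning. The sketch below shows that prediction rule on hypothetical data; it is not the trained bead-geometry model of the study.

      # Generalized regression neural network (GRNN) prediction rule.
      import numpy as np

      def grnn_predict(X_train, y_train, X_query, sigma=0.5):
          """Gaussian-kernel-weighted average of training targets."""
          d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
          w = np.exp(-d2 / (2.0 * sigma ** 2))
          return (w @ y_train) / w.sum(axis=1)

      rng = np.random.default_rng(3)
      X = rng.uniform(0, 1, size=(40, 3))                            # e.g. power, speed, gap
      y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * np.sin(6 * X[:, 2])  # toy bead width
      print(grnn_predict(X, y, rng.uniform(0, 1, size=(5, 3))))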

  1. Towards Efficient and Accurate Description of Many-Electron Problems: Developments of Static and Time-Dependent Electronic Structure Methods

    NASA Astrophysics Data System (ADS)

    Ding, Feizhi

    Understanding electronic behavior in molecular and nano-scale systems is fundamental to the development and design of novel technologies and materials for application in a variety of scientific contexts from fundamental research to energy conversion. This dissertation aims to provide insights into this goal by developing novel methods and applications of first-principles electronic structure theory. Specifically, we will present new methods and applications of excited state multi-electron dynamics based on the real-time (RT) time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) formalism, and new development of multi-configuration self-consistent field (MCSCF) theory for modeling ground-state electronic structure. The RT-TDHF/TDDFT based developments and applications can be categorized into three broad and coherently integrated research areas: (1) modeling of the interaction between molecules and external electromagnetic perturbations. In this part we will first prove both analytically and numerically the gauge invariance of the TDHF/TDDFT formalisms, then we will present a novel, efficient method for calculating molecular nonlinear optical properties, and last we will study quantum coherent plasmons in metal nanowires using RT-TDDFT; (2) modeling of excited-state charge transfer in molecules. In this part, we will investigate the mechanisms of bridge-mediated electron transfer, and then we will introduce a newly developed non-equilibrium quantum/continuum embedding method for studying charge transfer dynamics in solution; (3) developments of first-principles spin-dependent many-electron dynamics. In this part, we will present an ab initio non-relativistic spin dynamics method based on the two-component generalized Hartree-Fock approach, and then we will generalize it to the two-component TDDFT framework and combine it with the Ehrenfest molecular dynamics approach for modeling the interaction between electron spins and nuclear

  2. Accurate and efficient prediction of fine-resolution hydrologic and carbon dynamic simulations from coarse-resolution models

    NASA Astrophysics Data System (ADS)

    Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning

    2016-02-01

    The topography and the biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables, and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
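
    One common way to realize a POD-based coarse-to-fine mapping is to build a joint basis from paired coarse/fine training snapshots and then project a new coarse solution onto the coarse block of that basis. The sketch below illustrates only this core idea with synthetic data; the error estimator, variable grouping and time-varying parameters of the actual PODMM are omitted.

      # POD-based downscaling sketch: joint coarse/fine basis, then
      # least-squares projection of a new coarse field (synthetic data).
      import numpy as np

      rng = np.random.default_rng(4)
      n_coarse, n_fine, n_snap = 50, 500, 30
      coarse = rng.standard_normal((n_coarse, n_snap))           # training snapshots
      fine = np.repeat(coarse, n_fine // n_coarse, axis=0)       # correlated fine fields
      fine = fine + 0.05 * rng.standard_normal((n_fine, n_snap))

      joint = np.vstack([coarse, fine])
      mean = joint.mean(axis=1, keepdims=True)
      U, s, _ = np.linalg.svd(joint - mean, full_matrices=False)
      r = 10                                                     # retained POD modes
      Uc, Uf = U[:n_coarse, :r], U[n_coarse:, :r]

      def downscale(new_coarse):
          """Coefficients from the coarse block, reconstruction from the fine block."""
          a, *_ = np.linalg.lstsq(Uc, new_coarse - mean[:n_coarse, 0], rcond=None)
          return mean[n_coarse:, 0] + Uf @ a

      test = coarse[:, 0] + 0.01 * rng.standard_normal(n_coarse)
      print("reconstructed fine field shape:", downscale(test).shape)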

  3. An evolutionary model-based algorithm for accurate phylogenetic breakpoint mapping and subtype prediction in HIV-1.

    PubMed

    Kosakovsky Pond, Sergei L; Posada, David; Stawiski, Eric; Chappey, Colombe; Poon, Art F Y; Hughes, Gareth; Fearnhill, Esther; Gravenor, Mike B; Leigh Brown, Andrew J; Frost, Simon D W

    2009-11-01

    Genetically diverse pathogens (such as Human Immunodeficiency virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is not a universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparing with subtypes which had previously been assigned revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods. Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance of accurate

  4. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  5. Nonlinear Modeling of Joint Dominated Structures

    NASA Technical Reports Server (NTRS)

    Chapman, J. M.

    1990-01-01

    The development and verification of an accurate structural model of the nonlinear joint-dominated NASA Langley Mini-Mast truss are described. The approach is to characterize the structural behavior of the Mini-Mast joints and struts using a test configuration that can directly measure the struts' overall stiffness and damping properties, incorporate this data into the structural model using the residual force technique, and then compare the predicted response with empirical data taken by NASA/LaRC during the modal survey tests of the Mini-Mast. A new testing technique, referred to as 'link' testing, was developed and used to test prototype struts of the Mini-Mast. Appreciable nonlinearities, including free-play and hysteresis, were demonstrated. Since static and dynamic tests performed on the Mini-Mast also exhibited behavior consistent with joints having free-play and hysteresis, nonlinear models of the Mini-Mast were constructed and analyzed. The Residual Force Technique was used to analyze the nonlinear model of the Mini-Mast having joint free-play and hysteresis.

  6. Geometric modeling of inflatable structures for lunar base.

    PubMed

    Nowak, P S; Sadeh, W Z; Morroni, L A

    1992-07-01

    A modular inflatable structure consisting of thin, composite membranes is presented for use in a lunar base. Results from a linear elastic analysis of the structure indicate that it is feasible in the lunar environment. Further analysis requires solving nonlinear equations and accurately specifying the geometries of the structural members. A computerized geometric modeling technique, using bicubic Bezier surfaces to generate the geometries of the inflatable structure, was conducted. Simulated results are used to create three-dimensional wire frames and solid renderings of the individual components of the inflatable structure. The component geometries are connected into modules, which are then assembled based upon the desired architecture of the structure. PMID:11537646

  7. SU-E-T-475: An Accurate Linear Model of Tomotherapy MLC-Detector System for Patient Specific Delivery QA

    SciTech Connect

    Chen, Y; Mo, X; Chen, M; Olivera, G; Parnell, D; Key, S; Lu, W; Reeher, M; Galmarini, D

    2014-06-01

    Purpose: An accurate leaf fluence model can be used in applications such as patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is known that the total fluence is not a linear combination of individual leaf fluences due to leakage-transmission, tongue-and-groove, and source occlusion effects. Here we propose a method to model the nonlinear effects as linear terms, thus making the MLC-detector system a linear system. Methods: A leaf pattern basis (LPB) consisting of no-leaf-open, single-leaf-open, double-leaf-open and triple-leaf-open patterns is chosen to represent the linear and major nonlinear effects of leaf fluence as a linear system. An arbitrary leaf pattern can be expressed as (or decomposed into) a linear combination of the LPB, either pulse by pulse or weighted by dwell time. The exit detector responses to the LPB are obtained by processing returned detector signals resulting from the predefined leaf patterns for each jaw setting. Through forward transformation, the detector signal can be predicted given a delivery plan. An equivalent leaf open time (LOT) sinogram containing output variation information can also be inversely calculated from the measured detector signals. Twelve patient plans were delivered in air. The equivalent LOT sinograms were compared with their planned sinograms. Results: The whole calibration process was done in 20 minutes. For two randomly generated leaf patterns, 98.5% of the active channels showed differences within 0.5% of the local maximum between the predicted and measured signals. Averaged over the twelve plans, 90% of LOT errors were within ±10 ms. The LOT systematic error increases and shows an oscillating pattern when LOT is shorter than 50 ms. Conclusion: The LPB method models the MLC-detector response accurately, which improves patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is sensitive enough to detect systematic LOT errors as small as 10 ms.
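
    Under the linearity assumption described above, decomposing a detector signal onto a set of basis-pattern responses reduces to ordinary least squares. The sketch below uses random numbers in place of real calibration responses and is only meant to show the forward/inverse structure of such a model, not the authors' TomoTherapy implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical calibration: exit-detector response (n_channels,) for each of
      # n_basis leaf-pattern-basis (LPB) elements, stacked as columns of R_basis.
      n_channels, n_basis = 640, 20
      R_basis = rng.random((n_channels, n_basis))

      # Forward model: a delivery described by per-pulse weights w produces R_basis @ w.
      w_true = rng.random(n_basis)
      signal = R_basis @ w_true

      # Inverse step: recover equivalent weights (LOT-like quantities) from the
      # measured signal by linear least squares.
      w_est, *_ = np.linalg.lstsq(R_basis, signal, rcond=None)
      print(np.max(np.abs(w_est - w_true)))   # ~0 for noise-free synthetic data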

  8. Toward accurate modelling of the non-linear matter bispectrum: standard perturbation theory and transients from initial conditions

    NASA Astrophysics Data System (ADS)

    McCullagh, Nuala; Jeong, Donghui; Szalay, Alexander S.

    2016-01-01

    Accurate modelling of non-linearities in the galaxy bispectrum, the Fourier transform of the galaxy three-point correlation function, is essential to fully exploit it as a cosmological probe. In this paper, we present numerical and theoretical challenges in modelling the non-linear bispectrum. First, we test the robustness of the matter bispectrum measured from N-body simulations using different initial conditions generators. We run a suite of N-body simulations using the Zel'dovich approximation and second-order Lagrangian perturbation theory (2LPT) at different starting redshifts, and find that transients from initial decaying modes systematically reduce the non-linearities in the matter bispectrum. To achieve 1 per cent accuracy in the matter bispectrum at z ≤ 3 on scales k < 1 h Mpc-1, 2LPT initial conditions generator with initial redshift z ≳ 100 is required. We then compare various analytical formulas and empirical fitting functions for modelling the non-linear matter bispectrum, and discuss the regimes for which each is valid. We find that the next-to-leading order (one-loop) correction from standard perturbation theory matches with N-body results on quasi-linear scales for z ≥ 1. We find that the fitting formula in Gil-Marín et al. accurately predicts the matter bispectrum for z ≤ 1 on a wide range of scales, but at higher redshifts, the fitting formula given in Scoccimarro & Couchman gives the best agreement with measurements from N-body simulations.
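
    For reference, the leading-order (tree-level) standard-perturbation-theory bispectrum that the one-loop and fitted models extend can be written down directly with the F2 kernel. The sketch below uses a toy power-law linear spectrum as a stand-in for a real transfer function; it does not include the one-loop correction discussed in the paper.

      import numpy as np

      def F2(k1, k2, mu):
          """Second-order SPT kernel; mu is the cosine of the angle between k1 and k2."""
          return 5.0 / 7.0 + 0.5 * mu * (k1 / k2 + k2 / k1) + 2.0 / 7.0 * mu**2

      def bispectrum_tree(k1, k2, k3, P_lin):
          """Tree-level matter bispectrum for a closed triangle (k1, k2, k3)."""
          mu12 = (k3**2 - k1**2 - k2**2) / (2.0 * k1 * k2)
          mu23 = (k1**2 - k2**2 - k3**2) / (2.0 * k2 * k3)
          mu31 = (k2**2 - k3**2 - k1**2) / (2.0 * k3 * k1)
          return (2.0 * F2(k1, k2, mu12) * P_lin(k1) * P_lin(k2)
                  + 2.0 * F2(k2, k3, mu23) * P_lin(k2) * P_lin(k3)
                  + 2.0 * F2(k3, k1, mu31) * P_lin(k3) * P_lin(k1))

      # Toy linear power spectrum in (h/Mpc, (Mpc/h)^3) units -- illustrative only
      P_toy = lambda k: 2.0e4 * k / (1.0 + (k / 0.02) ** 2) ** 1.45
      print(bispectrum_tree(0.05, 0.08, 0.10, P_toy))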

  9. A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning.

    PubMed

    Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu

    2015-01-01

    Shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but it limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing the efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in a clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solution is fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had a high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly when the repository's capacity and vertex number grew large. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method cost only about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement
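
    The homotopy idea for the L1 problem can be tried out with the LARS-lasso path in scikit-learn, which likewise traces the full solution path as the penalty varies. The sketch below replaces the shape repository with random vectors, so it only illustrates sparse recovery, not the clinical SSC pipeline.

      import numpy as np
      from sklearn.linear_model import lars_path

      rng = np.random.default_rng(1)

      # Hypothetical shape repository: each column is a training shape flattened
      # into a vector of vertex coordinates (random stand-ins here).
      n_coords, n_shapes = 1500, 200
      D = rng.standard_normal((n_coords, n_shapes))

      # An input shape that is (approximately) a sparse combination of repository shapes
      w_true = np.zeros(n_shapes)
      w_true[[3, 17, 42]] = [0.6, 0.3, 0.1]
      y = D @ w_true + 0.01 * rng.standard_normal(n_coords)

      # Homotopy-style (LARS-lasso) path: solutions for a whole range of L1 penalties
      alphas, _, coefs = lars_path(D, y, method="lasso")
      w_sparse = coefs[:, -1]                       # least-penalized end of the path
      print(np.nonzero(np.abs(w_sparse) > 0.05)[0])   # should recover shapes 3, 17, 42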

  10. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses the existing state of the art on the same dataset, and the new data fusion method has been practically applied in our driverless car.
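
    One building block of such a pipeline, fitting an ARMA model to a one-dimensional navigation signal and flagging measurements that disagree with its one-step prediction, can be sketched with statsmodels. The synthetic track, model order and thresholds below are arbitrary choices, not the parameters used by the authors.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(2)

      # Synthetic 1-D position track: smooth drift plus noise, with injected
      # multipath-style jumps at a few epochs.
      t = np.arange(500)
      x = 0.05 * t + np.cumsum(0.02 * rng.standard_normal(500))
      x[[120, 300, 301]] += 3.0

      # Fit an ARMA(2, 1) model (ARIMA with d = 0) by maximum likelihood.
      fit = ARIMA(x, order=(2, 0, 1), trend="t").fit()

      # Flag samples whose residual exceeds 4 robust standard deviations.
      resid = fit.resid
      sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
      print(np.nonzero(np.abs(resid) > 4.0 * sigma)[0])   # expect epochs near 120, 300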

  11. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses the existing state of the art on the same dataset, and the new data fusion method has been practically applied in our driverless car. PMID:26927108

  12. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses the existing state of the art on the same dataset, and the new data fusion method has been practically applied in our driverless car. PMID:26927108

  13. Towards an accurate model of redshift-space distortions: a bivariate Gaussian description for the galaxy pairwise velocity distributions

    NASA Astrophysics Data System (ADS)

    Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi

    2016-10-01

    As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation, such a function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and variance σ2. Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and nonlinear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation, as observed in simulations and data. Also, the recently proposed single-Gaussian description of redshift-space distortions is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. More work is needed, but these results indicate a very promising path to make definitive progress in our program to improve RSD estimators.
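
    The core idea, a superposition of local Gaussians whose mean and dispersion are themselves drawn from a bivariate Gaussian, is easy to reproduce numerically. The hyper-parameters below are invented for illustration and do not correspond to any particular separation or simulation.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical mean and covariance of (mu, sigma) across local distributions
      mean_mu_sigma = np.array([-1.0, 3.0])
      cov_mu_sigma = np.array([[1.5, 0.4],
                               [0.4, 0.8]])

      # Draw many local (mu, sigma) pairs and superpose the corresponding Gaussians
      draws = rng.multivariate_normal(mean_mu_sigma, cov_mu_sigma, size=20000)
      mu, sigma = draws[:, 0], np.abs(draws[:, 1])      # keep dispersions positive

      v = np.linspace(-15.0, 15.0, 301)
      pdf = np.mean(np.exp(-0.5 * ((v[:, None] - mu) / sigma) ** 2)
                    / (np.sqrt(2.0 * np.pi) * sigma), axis=1)

      # The mixture is generically skewed with heavier-than-Gaussian tails
      print(np.trapz(pdf, v))                           # ~1: correctly normalized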

  14. Model reduction for flexible structures

    NASA Technical Reports Server (NTRS)

    Gawronski, Wodek; Juang, Jer-Nan

    1990-01-01

    Several conditions for a near-optimal reduction of general dynamic systems are presented, focusing on the reduction in balanced and modal coordinates. It is shown that modal and balanced reductions give very different results for flexible structures with closely spaced natural frequencies. In general, balanced reduction is found to give better results. A robust model reduction technique was developed to study the sensitivity of modeling error to variations in the damping of a structure. New concepts of grammians defined over a finite time and/or frequency interval are proposed, including computational procedures for evaluating them. Application of the model reduction technique to these grammians is considered to lead to a near-optimal reduced model which closely reproduces the full system output in the time and/or frequency interval.
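
    A standard infinite-horizon balanced truncation, the simplest relative of the grammian-based reduction discussed above, can be written compactly with SciPy's Lyapunov solver; the finite-time/finite-frequency grammians proposed in the paper are not reproduced here, and the 8-state test system is made up.

      import numpy as np
      from scipy.linalg import block_diag, solve_continuous_lyapunov

      def balanced_truncation(A, B, C, r):
          """Reduce a stable LTI system (A, B, C) to order r by balanced truncation."""
          P = solve_continuous_lyapunov(A, -B @ B.T)        # controllability grammian
          Q = solve_continuous_lyapunov(A.T, -C.T @ C)      # observability grammian
          Lc, Lo = np.linalg.cholesky(P), np.linalg.cholesky(Q)
          U, s, Vt = np.linalg.svd(Lo.T @ Lc)               # s = Hankel singular values
          S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
          T = Lc @ Vt.T @ S_inv_sqrt                        # balancing transformation
          Tinv = S_inv_sqrt @ U.T @ Lo.T
          Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
          return Ab[:r, :r], Bb[:r, :], Cb[:, :r], s

      # Lightly damped 4-mode system with closely spaced frequencies (illustrative)
      blocks = [np.array([[0.0, w], [-w, -0.02 * w]]) for w in (1.00, 1.05, 3.0, 3.1)]
      A = block_diag(*blocks)
      B = np.ones((8, 1))
      C = np.ones((1, 8))
      Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=4)
      print(hsv)    # decaying Hankel singular values justify the truncation order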

  15. High Fidelity Non-Gravitational Force Models for Precise and Accurate Orbit Determination of TerraSAR-X

    NASA Astrophysics Data System (ADS)

    Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight, which is reflected by the illuminated Earth surface in the visible, and the emission of the Earth body in the infrared spectra. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
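
    The crudest non-gravitational force model that the detailed macro model improves upon is the "cannonball" direct solar radiation pressure acceleration; a sketch is given below with rough, made-up spacecraft numbers (not official TerraSAR-X values) and no Earth radiation pressure or eclipse handling.

      import numpy as np

      P_SUN_1AU = 4.56e-6      # N/m^2, solar radiation pressure at 1 AU
      AU = 1.495978707e11      # m

      def srp_cannonball(r_sun, cr, area, mass):
          """Direct SRP acceleration (m/s^2), cannonball model:
          a = -Cr * (A/m) * P_sun * (AU/|r_sun|)^2 * r_sun_hat,
          where r_sun is the vector from the satellite to the Sun (m)."""
          r = np.linalg.norm(r_sun)
          return -cr * (area / mass) * P_SUN_1AU * (AU / r) ** 2 * (r_sun / r)

      # Example: Sun along +x at 1 AU; the acceleration points away from the Sun
      a = srp_cannonball(np.array([AU, 0.0, 0.0]), cr=1.3, area=5.0, mass=1200.0)
      print(a)                 # roughly -2.5e-8 m/s^2 along x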

  16. Mathematical Models for Elastic Structures

    NASA Astrophysics Data System (ADS)

    Villaggio, Piero

    1997-10-01

    During the seventeenth century, several useful theories of elastic structures emerged, with applications to civil and mechanical engineering problems. Recent and improved mathematical tools have extended applications into new areas such as mathematical physics, geomechanics, and biomechanics. This book offers a critically filtered collection of the most significant theories dealing with elastic slender bodies. It includes mathematical models involving elastic structures that are used to solve practical problems with particular emphasis on nonlinear problems.

  17. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    NASA Astrophysics Data System (ADS)

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
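
    The shortest-path building block of such schemes is Dijkstra's algorithm on a weighted traveltime graph; the minimal sketch below works on a tiny made-up network and omits the multistage, irregular-node and spherical-geometry machinery that the paper adds for later phases.

      import heapq

      def dijkstra_traveltimes(graph, source):
          """First-arrival traveltimes from a source node on a weighted graph.
          graph: {node: [(neighbour, edge_traveltime_in_s), ...]}"""
          times = {source: 0.0}
          heap = [(0.0, source)]
          while heap:
              t, node = heapq.heappop(heap)
              if t > times.get(node, float("inf")):
                  continue                              # stale heap entry
              for nbr, dt in graph[node]:
                  t_new = t + dt
                  if t_new < times.get(nbr, float("inf")):
                      times[nbr] = t_new
                      heapq.heappush(heap, (t_new, nbr))
          return times

      # Toy 4-node network with made-up edge traveltimes (seconds)
      graph = {"A": [("B", 2.0), ("C", 5.0)],
               "B": [("A", 2.0), ("C", 1.5), ("D", 4.0)],
               "C": [("A", 5.0), ("B", 1.5), ("D", 1.0)],
               "D": [("B", 4.0), ("C", 1.0)]}
      print(dijkstra_traveltimes(graph, "A"))   # {'A': 0.0, 'B': 2.0, 'C': 3.5, 'D': 4.5}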

  18. Numerical simulation of pharyngeal airflow applied to obstructive sleep apnea: effect of the nasal cavity in anatomically accurate airway models.

    PubMed

    Cisonni, Julien; Lucey, Anthony D; King, Andrew J C; Islam, Syed Mohammed Shamsul; Lewis, Richard; Goonewardene, Mithran S

    2015-11-01

    Repetitive brief episodes of soft-tissue collapse within the upper airway during sleep characterize obstructive sleep apnea (OSA), an extremely common and disabling disorder. Failure to maintain the patency of the upper airway is caused by the combination of sleep-related loss of compensatory dilator muscle activity and aerodynamic forces promoting closure. The prediction of soft-tissue movement in patient-specific airway 3D mechanical models is emerging as a useful contribution to clinical understanding and decision making. Such modeling requires reliable estimations of the pharyngeal wall pressure forces. While nasal obstruction has been recognized as a risk factor for OSA, the need to include the nasal cavity in upper-airway models for OSA studies requires consideration, as it is most often omitted because of its complex shape. A quantitative analysis of the flow conditions generated by the nasal cavity and the sinuses during inspiration upstream of the pharynx is presented. Results show that adequate velocity boundary conditions and simple artificial extensions of the flow domain can reproduce the essential effects of the nasal cavity on the pharyngeal flow field. Therefore, the overall complexity and computational cost of accurate flow predictions can be reduced.

  19. Modelling structured data with Probabilistic Graphical Models

    NASA Astrophysics Data System (ADS)

    Forbes, F.

    2016-05-01

    Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption unrealistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation, where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph that is not necessarily regular like a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and hidden MRF models). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modelling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations associated with practical work are also given.
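
    A minimal hidden-MRF flavour of the image-segmentation example can be demonstrated with iterated conditional modes (ICM) on a binary label grid: each pixel trades off agreement with its noisy observation against agreement with its four neighbours. The noise level, coupling strength and toy image below are arbitrary.

      import numpy as np

      def icm_binary(obs, beta=1.5, flip_prob=0.2, n_iter=5):
          """ICM smoothing of binary labels on a 2-D grid with a Potts-type prior."""
          labels = obs.copy()
          # log-likelihood of the observed label given the true label (flip noise)
          loglik = np.log(np.array([[1 - flip_prob, flip_prob],
                                    [flip_prob, 1 - flip_prob]]))
          H, W = obs.shape
          for _ in range(n_iter):
              for i in range(H):
                  for j in range(W):
                      nbrs = [labels[p, q] for p, q in
                              ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                              if 0 <= p < H and 0 <= q < W]
                      scores = [loglik[c, obs[i, j]]
                                + beta * sum(n == c for n in nbrs) for c in (0, 1)]
                      labels[i, j] = int(np.argmax(scores))
          return labels

      rng = np.random.default_rng(4)
      truth = np.zeros((20, 20), dtype=int); truth[:, 10:] = 1       # two-region image
      noisy = np.where(rng.random(truth.shape) < 0.2, 1 - truth, truth)
      print((icm_binary(noisy) != truth).mean())   # error rate after smoothing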

  20. Structural modeling of sandwich structures with lightweight cellular cores

    NASA Astrophysics Data System (ADS)

    Liu, T.; Deng, Z. C.; Lu, T. J.

    2007-10-01

    An effective single layered finite element (FE) computational model is proposed to predict the structural behavior of lightweight sandwich panels having two dimensional (2D) prismatic or three dimensional (3D) truss cores. Three different types of cellular core topology are considered: pyramidal truss core (3D), Kagome truss core (3D) and corrugated core (2D), representing three kinds of material anisotropy: orthotropic, monoclinic and general anisotropic. A homogenization technique is developed to obtain the homogenized macroscopic stiffness properties of the cellular core. In comparison with the results obtained by using a detailed FE model, the single layered computational model gives acceptable predictions for both the static and dynamic behaviors of orthotropic truss core sandwich panels. However, for non-orthotropic 3D truss cores, the predictions are less accurate. For both static and dynamic behaviors of a 2D corrugated core sandwich panel, the predictions derived from the single layered computational model are generally acceptable when the size of the unit cell varies within a certain range, with the predictions for moderately strong or strong corrugated cores more accurate than those for weak cores.

  1. Accurate crystal-structure refinement of Ca3Ga2Ge4O14 at 295 and 100 K and analysis of the disorder in the atomic positions

    NASA Astrophysics Data System (ADS)

    Dudka, A. P.; Mill', B. V.

    2013-07-01

    The accurate X-ray diffraction study of a Ca3Ga2Ge4O14 crystal (sp. gr. P321, Z = 1) has been performed using repeated X-ray diffraction data sets collected on a diffractometer equipped with a CCD area detector at 295 and 100 K. The asymmetric disorder in the atomic positions in Ca3Ga2Ge4O14 is described in two alternative ways: with the use of anharmonic atomic displacements (at 295 K R/wR = 0.68/0.60%, 3754 reflections; at 100 K R/wR = 0.90/0.70%, 3632 reflections) and using a split model (SM) (at 295 K R/wR = 0.74/0.67%; at 100 K R/wR = 0.95/0.74%). An analysis of the probability density function that defines the probability of finding an atom at a particular point in space shows that, at 295 K, five of the seven independent atoms in the unit cell are asymmetrically disordered in the vicinity of their sites, whereas only three atoms are disordered at 100 K. At both temperatures the largest disorder is observed at the 3 f site on a twofold axis, which is a prerequisite for the formation of helicoidal chains of atoms along the c axis of the crystal and can serve as a structural basis for multiferroic properties of this family of crystals with magnetic ions.

  2. CONDENSED MATTER: STRUCTURE, MECHANICAL AND THERMAL PROPERTIES: An Accurate Image Simulation Method for High-Order Laue Zone Effects

    NASA Astrophysics Data System (ADS)

    Cai, Can-Ying; Zeng, Song-Jun; Liu, Hong-Rong; Yang, Qi-Bin

    2008-05-01

    A completely different formulation for the simulation of high order Laue zone (HOLZ) diffractions is derived. This new method is referred to as the Taylor series (TS) method. To check the validity and accuracy of the TS method, we take a polyvinylidene fluoride (PVDF) crystal as an example and calculate the exit wavefunction by both the conventional multi-slice (CMS) method and the TS method. The calculated results show that the TS method is much more accurate than the CMS method and is independent of the slice thickness. Moreover, the pure first order Laue zone wavefunction obtained by the TS method can reflect the major potential distribution of the first reciprocal plane.

  3. The use of sparse CT datasets for auto-generating accurate FE models of the femur and pelvis.

    PubMed

    Shim, Vickie B; Pitto, Rocco P; Streicher, Robert M; Hunter, Peter J; Anderson, Iain A

    2007-01-01

    The finite element (FE) method when coupled with computed tomography (CT) is a powerful tool in orthopaedic biomechanics. However, substantial data is required for patient-specific modelling. Here we present a new method for generating a FE model with a minimum amount of patient data. Our method uses high order cubic Hermite basis functions for mesh generation and least-square fits the mesh to the dataset. We have tested our method on seven patient data sets obtained from CT assisted osteodensitometry of the proximal femur. Using only 12 CT slices we generated smooth and accurate meshes of the proximal femur with a geometric root mean square (RMS) error of less than 1 mm and peak errors less than 8 mm. To model the complex geometry of the pelvis we developed a hybrid method which supplements sparse patient data with data from the visible human data set. We tested this method on three patient data sets, generating FE meshes of the pelvis using only 10 CT slices with an overall RMS error less than 3 mm. Although we have peak errors about 12 mm in these meshes, they occur relatively far from the region of interest (the acetabulum) and will have minimal effects on the performance of the model. Considering that linear meshes usually require about 70-100 pelvic CT slices (in axial mode) to generate FE models, our method has brought a significant data reduction to the automatic mesh generation step. The method, that is fully automated except for a semi-automatic bone/tissue boundary extraction part, will bring the benefits of FE methods to the clinical environment with much reduced radiation risks and data requirement.

  4. Do inverse ecosystem models accurately reconstruct plankton trophic flows? Comparing two solution methods using field data from the California Current

    NASA Astrophysics Data System (ADS)

    Stukel, Michael R.; Landry, Michael R.; Ohman, Mark D.; Goericke, Ralf; Samo, Ty; Benitez-Nelson, Claudia R.

    2012-03-01

    Despite the increasing use of linear inverse modeling techniques to elucidate fluxes in undersampled marine ecosystems, the accuracy with which they estimate food web flows has not been resolved. New Markov Chain Monte Carlo (MCMC) solution methods have also called into question the biases of the commonly used L2 minimum norm (L2MN) solution technique. Here, we test the abilities of MCMC and L2MN methods to recover field-measured ecosystem rates that are sequentially excluded from the model input. For data, we use experimental measurements from process cruises of the California Current Ecosystem (CCE-LTER) Program that include rate estimates of phytoplankton and bacterial production, micro- and mesozooplankton grazing, and carbon export from eight study sites varying from rich coastal upwelling to offshore oligotrophic conditions. Both the MCMC and L2MN methods predicted well-constrained rates of protozoan and mesozooplankton grazing with reasonable accuracy, but the MCMC method overestimated primary production. The MCMC method more accurately predicted the poorly constrained rate of vertical carbon export than the L2MN method, which consistently overestimated export. Results involving DOC and bacterial production were equivocal. Overall, when primary production is provided as model input, the MCMC method gives a robust depiction of ecosystem processes. Uncertainty in inverse ecosystem models is large and arises primarily from solution under-determinacy. We thus suggest that experimental programs focusing on food web fluxes expand the range of experimental measurements to include the nature and fate of detrital pools, which play large roles in the model.
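
    The L2MN step itself is just the minimum-Euclidean-norm solution of an underdetermined linear system, which the pseudoinverse provides; the three-equation, six-flow "food web" below is entirely invented, and real inverse ecosystem models add inequality constraints that are omitted here.

      import numpy as np

      # Hypothetical underdetermined balance: 3 mass-balance equations constraining
      # 6 unknown carbon flows (columns), units arbitrary.
      A = np.array([[ 1.0, -1.0,  0.0, -1.0,  0.0,  0.0],   # phytoplankton budget
                    [ 0.0,  1.0, -1.0,  0.0, -1.0,  0.0],   # zooplankton budget
                    [ 0.0,  0.0,  0.0,  1.0,  1.0, -1.0]])  # detritus/export budget
      b = np.array([10.0, 0.0, 0.0])                        # measured net inputs

      # L2 minimum-norm (L2MN) solution of A f = b
      f_l2mn = np.linalg.pinv(A) @ b
      print(f_l2mn, np.allclose(A @ f_l2mn, b))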

  5. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    SciTech Connect

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic

  6. Theory of bi-molecular association dynamics in 2D for accurate model and experimental parameterization of binding rates

    NASA Astrophysics Data System (ADS)

    Yogurtcu, Osman N.; Johnson, Margaret E.

    2015-08-01

    The dynamics of association between diffusing and reacting molecular species are routinely quantified using simple rate-equation kinetics that assume both well-mixed concentrations of species and a single rate constant for parameterizing the binding rate. In two-dimensions (2D), however, even when systems are well-mixed, the assumption of a single characteristic rate constant for describing association is not generally accurate, due to the properties of diffusional searching in dimensions d ≤ 2. Establishing rigorous bounds for discriminating between 2D reactive systems that will be accurately described by rate equations with a single rate constant, and those that will not, is critical for both modeling and experimentally parameterizing binding reactions restricted to surfaces such as cellular membranes. We show here that in regimes of intrinsic reaction rate (ka) and diffusion (D) parameters ka/D > 0.05, a single rate constant cannot be fit to the dynamics of concentrations of associating species independently of the initial conditions. Instead, a more sophisticated multi-parametric description than rate-equations is necessary to robustly characterize bimolecular reactions from experiment. Our quantitative bounds derive from our new analysis of 2D rate-behavior predicted from Smoluchowski theory. Using a recently developed single particle reaction-diffusion algorithm we extend here to 2D, we are able to test and validate the predictions of Smoluchowski theory and several other theories of reversible reaction dynamics in 2D for the first time. Finally, our results also mean that simulations of reactive systems in 2D using rate equations must be undertaken with caution when reactions have ka/D > 0.05, regardless of the simulation volume. We introduce here a simple formula for an adaptive concentration dependent rate constant for these chemical kinetics simulations which improves on existing formulas to better capture non-equilibrium reaction dynamics from dilute

  7. Hypersonic vehicle structural weight prediction using parametric modeling, finite element modeling, and structural optimization

    NASA Astrophysics Data System (ADS)

    Ngo, Dung A.; Koshiba, David A.; Moses, Paul L.

    1993-04-01

    Detailed structural analysis/optimization is required in the conceptual design stage because of the combined aerodynamic and aerothermodynamic environment. This is a time- and manpower-consuming activity which is exacerbated by constant vehicle moldline changes as a configuration matures. A simple parametric math model is presented that takes into consideration static loads and the geometry and structural weight of a baseline hypersonic vehicle in predicting the structural weight of a new configuration scaled from the baseline. The approach in developing the math model was to consider a generic parametric cross-sectional geometry that could be used to approximate the baseline geometry and to predict the behavior of this baseline when it is scaled to provide performance and design benefits. This mathematical model, calibrated to finite element analysis and structural optimization sizing results, provides accurate weight prediction for a new configuration which has been moderately scaled from a thoroughly analyzed baseline configuration. This paper will present the structural optimization weight results and the math model weight predictions for a baseline configuration and 15 scaled configurations.

  8. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    NASA Astrophysics Data System (ADS)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].
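
    The simplest relative of the explicit upwind finite-volume update used for the coupled system is the first-order scheme for scalar linear advection, sketched below on a periodic toy problem; it conveys the upwind flux-differencing idea but none of the path-conservative or source-term splitting machinery.

      import numpy as np

      def upwind_advection(u0, a, dx, dt, n_steps):
          """First-order explicit upwind finite-volume scheme for u_t + a u_x = 0
          (a > 0) with periodic boundaries; requires the CFL condition a*dt/dx <= 1."""
          u = u0.copy()
          c = a * dt / dx
          assert 0.0 < c <= 1.0, "CFL condition violated"
          for _ in range(n_steps):
              u = u - c * (u - np.roll(u, 1))      # flux difference with left neighbour
          return u

      # Advect a hump to the right across part of a periodic unit-length domain
      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u0 = np.exp(-200.0 * (x - 0.3) ** 2)
      dx = x[1] - x[0]
      u = upwind_advection(u0, a=1.0, dx=dx, dt=0.8 * dx, n_steps=62)
      print(x[np.argmax(u0)], x[np.argmax(u)])     # peak moves from x~0.30 to x~0.55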

  9. A fast and accurate implementation of tunable algorithms used for generation of fractal-like aggregate models

    NASA Astrophysics Data System (ADS)

    Skorupski, Krzysztof; Mroczka, Janusz; Wriedt, Thomas; Riefler, Norbert

    2014-06-01

    In many branches of science, experiments are expensive, require specialist equipment or are very time consuming. Studying light scattering by fractal aggregates can serve as an example. Light scattering simulations can overcome these problems and provide us with additional theoretical data that complete our study. For this reason a fractal-like aggregate model as well as fast aggregation codes are needed. To date, various computer models that try to mimic the physics behind this phenomenon have been developed. However, their implementations are mostly based on a trial-and-error procedure. Such an approach is very time consuming, and the morphological parameters of the resulting aggregates are not exact because the postconditions (e.g. the position error) cannot be very strict. In this paper we present a very fast and accurate implementation of a tunable aggregation algorithm based on the work of Filippov et al. (2000). Randomization is reduced to its necessary minimum (our technique can be more than 1000 times faster than standard algorithms) and the position of a new particle, or a cluster, is calculated with algebraic methods. Therefore, the postconditions can be extremely strict and the resulting errors negligible (e.g. the position error can be considered non-existent). In our paper two different methods, based on the particle-cluster (PC) and cluster-cluster (CC) aggregation processes, are presented.

  10. X-ray and microwave emissions from the July 19, 2012 solar flare: Highly accurate observations and kinetic models

    NASA Astrophysics Data System (ADS)

    Gritsyk, P. A.; Somov, B. V.

    2016-08-01

    The M7.7 solar flare of July 19, 2012, at 05:58 UT was observed with high spatial, temporal, and spectral resolutions in the hard X-ray and optical ranges. The flare occurred at the solar limb, which allowed us to see the relative positions of the coronal and chromospheric X-ray sources and to determine their spectra. To explain the observations of the coronal source and the chromospheric one unocculted by the solar limb, we apply an accurate analytical model for the kinetic behavior of accelerated electrons in a flare. We interpret the chromospheric hard X-ray source in the thick-target approximation with a reverse current and the coronal one in the thin-target approximation. Our estimates of the slopes of the hard X-ray spectra for both sources are consistent with the observations. However, the calculated intensity of the coronal source is lower than the observed one by several times. Allowance for the acceleration of fast electrons in a collapsing magnetic trap has enabled us to remove this contradiction. As a result of our modeling, we have estimated the flux density of the energy transferred by electrons with energies above 15 keV to be ~5 × 10^10 erg cm-2 s-1, which exceeds the values typical of the thick-target model without a reverse current by a factor of ~5. To independently test the model, we have calculated the microwave spectrum in the range 1-50 GHz that corresponds to the available radio observations.

  11. PSSP-RFE: Accurate Prediction of Protein Structural Class by Recursive Feature Extraction from PSI-BLAST Profile, Physical-Chemical Property and Functional Annotations

    PubMed Central

    Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi

    2014-01-01

    Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Among most homology-based approaches, the accuracies of protein structural class prediction are sufficiently high for high similarity datasets, but still far from being satisfactory for low similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high and low similarity datasets. This method is based on a Support Vector Machine (SVM) in conjunction with integrated features from position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors through recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low similarity datasets. PMID:24675610
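
    The SVM-RFE step generalizes directly to scikit-learn's RFE wrapper around a linear SVM; the synthetic features below are random stand-ins for the PSSM/PROFEAT/GO feature vectors, and the accuracy printed is for this toy data only, not for the benchmark datasets.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Synthetic stand-in for integrated PSSM / PROFEAT / GO feature vectors
      X, y = make_classification(n_samples=300, n_features=200, n_informative=15,
                                 n_classes=4, n_clusters_per_class=1, random_state=0)

      # Recursive feature elimination driven by linear-SVM weights, then a
      # cross-validated accuracy of the classifier trained on the selected features.
      selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=30, step=0.1)
      model = make_pipeline(StandardScaler(), selector, SVC(kernel="linear", C=1.0))
      print(cross_val_score(model, X, y, cv=5).mean())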

  12. Multiple lesion track structure model

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Cucinotta, Francis A.; Shinn, Judy L.

    1992-01-01

    A multilesion cell kinetic model is derived, and radiation kinetic coefficients are related to the Katz track structure model. The repair-related coefficients are determined from the delayed plating experiments of Yang et al. for the C3H10T1/2 cell system. The model agrees well with the x ray and heavy ion experiments of Yang et al. for the immediate plating, delaying plating, and fractionated exposure protocols employed by Yang. A study is made of the effects of target fragments in energetic proton exposures and of the repair-deficient target-fragment-induced lesions.

  13. Use of secondary structural information and C alpha-C alpha distance restraints to model protein structures with MODELLER.

    PubMed

    Reddy, Boojala V B; Kaznessis, Yiannis N

    2007-08-01

    Protein secondary structure predictions and amino acid long range contact map predictions from the primary sequence of proteins have been explored to aid in modelling protein tertiary structures. In order to evaluate the usefulness of secondary structure and 3D-residue contact prediction methods for modelling protein structures, we have used the known Q3 (alpha-helix, beta-strands and irregular turns/loops) secondary structure information, along with residue-residue contact information, as restraints for MODELLER. We present here results of our modelling studies on 30 best resolved single domain protein structures of varied lengths. The results show that it is very difficult to obtain useful models even with 100% accurate secondary structure predictions and accurate residue contact predictions for up to 30% of residues in a sequence. The best models that we obtained for proteins of lengths 37, 70, 118, 136 and 193 amino acid residues have RMSDs of 4.17, 5.27, 9.12, 7.89 and 9.69, respectively. The results show that one can obtain better models for proteins which have a high percentage of alpha-helix content. This analysis further shows that the MODELLER restraint optimization program can be useful only if we have truly homologous structure(s) as a template, from which it derives numerous restraints, almost identical to the templates used. This analysis also clearly indicates that even if we satisfy several true residue-residue contact distances, up to 30% of the sequence length, together with fully known secondary structural information, we end up predicting model structures far from their corresponding native structures.

  14. Structures in Molecular Clouds: Modeling

    SciTech Connect

    Kane, J O; Mizuta, A; Pound, M W; Remington, B A; Ryutov, D D

    2006-04-20

    We attempt to predict the observed morphology, column density and velocity gradient of Pillar II of the Eagle Nebula, using Rayleigh-Taylor (RT) models in which growth is seeded by an initial perturbation in the density or in the shape of the illuminated surface, and cometary models in which structure arises from an initially spherical cloud with a dense core. Attempting to mitigate the suppression of RT growth by recombination, we use a large cylindrical model volume containing the illuminating source, the self-consistently evolving ablated outflow and the photon flux field, and use initial clouds with finite lateral extent. An RT model shows no growth, while a cometary model appears to be more successful at reproducing the observations.

  15. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    NASA Astrophysics Data System (ADS)

    Montes-Hugo, M.; Bouakba, H.; Arnone, R.

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical, QAA, and Garver-Siegel-Maritorena semi-empirical, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May of 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations as measured by High Pressure Liquid Chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using SeaDAS (SeaWiFS Data Analysis System) default values for the optical cross section of phytoplankton (i.e., a*ph(443) = aph(443)/chl = 0.056 m2 mg-1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations, increased up to 29%.

  16. Blast-induced biomechanical loading of the rat: an experimental and anatomically accurate computational blast injury model.

    PubMed

    Sundaramurthy, Aravind; Alai, Aaron; Ganpule, Shailesh; Holmberg, Aaron; Plougonven, Erwan; Chandra, Namas

    2012-09-01

    Blast waves generated by improvised explosive devices (IEDs) cause traumatic brain injury (TBI) in soldiers and civilians. In vivo animal models that use shock tubes are extensively used in laboratories to simulate field conditions, to identify mechanisms of injury, and to develop injury thresholds. In this article, we place rats in different locations along the length of the shock tube (i.e., inside, outside, and near the exit), to examine the role of animal placement location (APL) in the biomechanical load experienced by the animal. We found that the biomechanical load on the brain and internal organs in the thoracic cavity (lungs and heart) varied significantly depending on the APL. When the specimen is positioned outside, organs in the thoracic cavity experience a higher pressure for a longer duration, in contrast to APL inside the shock tube. This in turn will possibly alter the injury type, severity, and lethality. We found that the optimal APL is where the Friedlander waveform is first formed inside the shock tube. Once the optimal APL was determined, the effect of the incident blast intensity on the surface and intracranial pressure was measured and analyzed. Noticeably, surface and intracranial pressure increases linearly with the incident peak overpressures, though surface pressures are significantly higher than the other two. Further, we developed and validated an anatomically accurate finite element model of the rat head. With this model, we determined that the main pathway of pressure transmission to the brain was through the skull and not through the snout; however, the snout plays a secondary role in diffracting the incoming blast wave towards the skull.

  17. Blast-induced biomechanical loading of the rat: an experimental and anatomically accurate computational blast injury model.

    PubMed

    Sundaramurthy, Aravind; Alai, Aaron; Ganpule, Shailesh; Holmberg, Aaron; Plougonven, Erwan; Chandra, Namas

    2012-09-01

    Blast waves generated by improvised explosive devices (IEDs) cause traumatic brain injury (TBI) in soldiers and civilians. In vivo animal models that use shock tubes are extensively used in laboratories to simulate field conditions, to identify mechanisms of injury, and to develop injury thresholds. In this article, we place rats in different locations along the length of the shock tube (i.e., inside, outside, and near the exit), to examine the role of animal placement location (APL) in the biomechanical load experienced by the animal. We found that the biomechanical load on the brain and internal organs in the thoracic cavity (lungs and heart) varied significantly depending on the APL. When the specimen is positioned outside, organs in the thoracic cavity experience a higher pressure for a longer duration, in contrast to APL inside the shock tube. This in turn will possibly alter the injury type, severity, and lethality. We found that the optimal APL is where the Friedlander waveform is first formed inside the shock tube. Once the optimal APL was determined, the effect of the incident blast intensity on the surface and intracranial pressure was measured and analyzed. Noticeably, surface and intracranial pressure increases linearly with the incident peak overpressures, though surface pressures are significantly higher than the other two. Further, we developed and validated an anatomically accurate finite element model of the rat head. With this model, we determined that the main pathway of pressure transmission to the brain was through the skull and not through the snout; however, the snout plays a secondary role in diffracting the incoming blast wave towards the skull. PMID:22620716

  18. Computed-tomography-based finite-element models of long bones can accurately capture strain response to bending and torsion.

    PubMed

    Varghese, Bino; Short, David; Penmetsa, Ravi; Goswami, Tarun; Hangartner, Thomas

    2011-04-29

    Finite element (FE) models of long bones constructed from computed-tomography (CT) data are emerging as an invaluable tool in the field of bone biomechanics. However, the performance of such FE models is highly dependent on the accurate capture of geometry and appropriate assignment of material properties. In this study, a combined numerical-experimental study is performed comparing FE-predicted surface strains with strain-gauge measurements. Thirty-six major, cadaveric, long bones (humerus, radius, femur and tibia), which cover a wide range of bone sizes, were tested under three-point bending and torsion. The FE models were constructed from trans-axial volumetric CT scans, and the segmented bone images were corrected for partial-volume effects. The material properties (Young's modulus for cortex, density-modulus relationship for trabecular bone and Poisson's ratio) were calibrated by minimizing the error between experiments and simulations among all bones. The R(2) values of the measured strains versus load under three-point bending and torsion were 0.96-0.99 and 0.61-0.99, respectively, for all bones in our dataset. The errors of the calculated FE strains in comparison to those measured using strain gauges in the mechanical tests ranged from -6% to 7% under bending and from -37% to 19% under torsion. The observation of comparatively low errors and high correlations between the FE-predicted strains and the experimental strains, across the various types of bones and loading conditions (bending and torsion), validates our approach to bone segmentation and our choice of material properties.
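
    A common ingredient of such CT-based FE models is the mapping from CT numbers to an apparent density and then to Young's modulus through a power law; the sketch below uses illustrative coefficients of plausible magnitude, not the calibrated values reported in the paper.

      import numpy as np

      def hu_to_modulus_mpa(hu, rho_slope=0.0007, rho_offset=0.03, a=6850.0, b=1.49):
          """Map CT Hounsfield units to apparent density rho (g/cm^3) via a linear
          calibration, then to Young's modulus E (MPa) via the power law E = a*rho**b.
          All coefficients here are illustrative placeholders, not the paper's values."""
          rho = np.clip(rho_slope * np.asarray(hu, dtype=float) + rho_offset, 0.0, None)
          return a * rho ** b

      # Element-wise modulus assignment for a few voxel HU values
      print(hu_to_modulus_mpa(np.array([200.0, 800.0, 1500.0])))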

  19. Accurate modeling and reconstruction of three-dimensional percolating filamentary microstructures from two-dimensional micrographs via dilation-erosion method

    SciTech Connect

    Guo, En-Yu; Chawla, Nikhilesh; Jing, Tao; Torquato, Salvatore; Jiao, Yang

    2014-03-01

    Heterogeneous materials are ubiquitous in nature and in synthetic situations and have a wide range of important engineering applications. Accurately modeling and reconstructing the three-dimensional (3D) microstructure of topologically complex materials from limited morphological information, such as a two-dimensional (2D) micrograph, is crucial to the assessment and prediction of effective material properties and performance under extreme conditions. Here, we extend a recently developed dilation–erosion method and employ the Yeong–Torquato stochastic reconstruction procedure to model and generate a 3D austenitic–ferritic cast duplex stainless steel microstructure containing a percolating filamentary ferrite phase from 2D optical micrographs of the material sample. Specifically, the ferrite phase is dilated to produce a modified target 2D microstructure and the resulting 3D reconstruction is eroded to recover the percolating ferrite filaments. The dilation–erosion reconstruction is compared with the actual 3D microstructure, obtained from serial sectioning (polishing), as well as the standard stochastic reconstructions incorporating topological connectedness information. The fact that the former can achieve the same level of accuracy as the latter suggests that the dilation–erosion procedure is tantamount to incorporating appreciably more topological and geometrical information into the reconstruction while being much more computationally efficient. - Highlights: • Spatial correlation functions used to characterize filamentary ferrite phase • Clustering information assessed from 3D experimental structure via serial sectioning • Stochastic reconstruction used to generate 3D virtual structure from a 2D micrograph • Dilation–erosion method to improve accuracy of 3D reconstruction.
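
    At the level of binary images, the dilation-erosion bracketing can be sketched with scipy.ndimage; the toy "micrograph" below is synthetic, and the stochastic Yeong-Torquato reconstruction step that sits between the two morphological operations is not reproduced.

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(5)

      # Toy 2-D binary "micrograph": thin filamentary phase (True) in a matrix (False)
      img = np.zeros((100, 100), dtype=bool)
      img[::7, :] = True
      img &= rng.random(img.shape) < 0.9          # break the filaments up slightly

      # Step 1: dilate the thin phase so it is well resolved before reconstruction
      dilated = ndimage.binary_dilation(img, iterations=2)

      # ... a stochastic reconstruction would be generated from 'dilated' here ...

      # Step 2: erode the result by the same amount to recover thin filaments
      recovered = ndimage.binary_erosion(dilated, iterations=2)

      print(img.mean(), dilated.mean(), recovered.mean())   # area (volume) fractions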

  20. Accuracy of functional surfaces on comparatively modeled protein structures

    PubMed Central

    Zhao, Jieling; Dundas, Joe; Kachalo, Sema; Ouyang, Zheng; Liang, Jie

    2012-01-01

    Identification and characterization of protein functional surfaces are important for predicting protein function, understanding enzyme mechanisms, and docking small compounds to proteins. As the speed of accumulation of protein sequence information far exceeds that of structures, constructing accurate models of protein functional surfaces and identifying their key elements becomes increasingly important. A promising approach is to build comparative models from sequences using known structural templates, such as those obtained from structural genome projects. Here we assess how well this approach works in modeling binding surfaces. By systematically building three-dimensional comparative models of proteins using Modeller, we determine how well functional surfaces can be accurately reproduced. We use an alpha shape based pocket algorithm to compute all pockets on the modeled structures, and conduct a large-scale computation of similarity measurements (pocket RMSD and fraction of functional atoms captured) for 26,590 modeled enzyme protein structures. Overall, we find that when the sequence fragment of the binding surfaces has more than 45% identity to that of the template protein, the modeled surfaces have on average an RMSD of 0.5 Å, and contain 48% or more of the binding surface atoms, with nearly all of the important atoms in the signatures of binding pockets captured. PMID:21541664

  1. Track structure in biological models.

    PubMed

    Curtis, S B

    1986-01-01

    High-energy heavy ions in the galactic cosmic radiation (HZE particles) may pose a special risk during long term manned space flights outside the sheltering confines of the earth's geomagnetic field. These particles are highly ionizing, and they and their nuclear secondaries can penetrate many centimeters of body tissue. The three dimensional patterns of ionizations they create as they lose energy are referred to as their track structure. Several models of biological action on mammalian cells attempt to treat track structure or related quantities in their formulation. The methods by which they do this are reviewed. The proximity function is introduced in connection with the theory of Dual Radiation Action (DRA). The ion-gamma kill (IGK) model introduces the radial energy-density distribution, which is a smooth function characterizing both the magnitude and extension of a charged particle track. The lethal, potentially lethal (LPL) model introduces lambda, the mean distance between relevant ion clusters or biochemical species along the track. Since very localized energy depositions (within approximately 10 nm) are emphasized, the proximity function as defined in the DRA model is not of utility in characterizing track structure in the LPL formulation. PMID:11537218

  3. Accurate segmentation of partially overlapping cervical cells based on dynamic sparse contour searching and GVF snake model.

    PubMed

    Guan, Tao; Zhou, Dongxiang; Liu, Yunhui

    2015-07-01

    Segmentation of overlapping cells is one of the challenging topics in medical image processing. In this paper, we propose to approximately represent the cell contour as a set of sparse contour points, which can be further partitioned into two parts: the strong contour points and the weak contour points. We consider the cell contour extraction as a contour point locating problem and propose an effective and robust framework for segmentation of partially overlapping cells in cervical smear images. First, the cell nucleus and the background are extracted by a morphological filtering-based K-means clustering algorithm. Second, a gradient decomposition-based edge enhancement method is developed for enhancing the true edges belonging to the center cell. Then, a dynamic sparse contour searching algorithm is proposed to gradually locate the weak contour points in the cell overlapping regions based on the strong contour points. This algorithm involves least squares estimation and a dynamic searching principle, and is thus effective in coping with the cell overlapping problem. Using the located contour points, the Gradient Vector Flow Snake model is finally employed to extract the accurate cell contour. Experiments have been performed on two cervical smear image datasets containing both single cells and partially overlapping cells. The high accuracy of the cell contour extraction result validates the effectiveness of the proposed method.

  4. Modelling the Constraints of Spatial Environment in Fauna Movement Simulations: Comparison of a Boundaries Accurate Function and a Cost Function

    NASA Astrophysics Data System (ADS)

    Jolivet, L.; Cohen, M.; Ruas, A.

    2015-08-01

    Landscape influences fauna movement at different levels, from habitat selection to choices of movement direction. Our goal is to provide a development framework in order to test simulation functions for animal movement. We describe our approach for such simulations and we compare two types of functions to calculate trajectories. To do so, we first modelled the role of landscape elements to differentiate between elements that facilitate movements and those that act as hindrances. Different influences are identified depending on landscape elements and on animal species. Knowledge was gathered from ecologists, literature and observation datasets. Second, we analysed the description of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and the individual's behaviour. We tested two functions that consider space differently: one function takes into account the geometry and the types of landscape elements, and one cost function sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometry-accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.

  5. A Support Vector Machine model for the prediction of proteotypic peptides for accurate mass and time proteomics

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.

    2008-07-01

    Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
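
    A rough sketch of the classification step, assuming scikit-learn, is given below. The three descriptors used here (length and crude hydrophobic and charged fractions) are only illustrative placeholders for the 35 properties used in the published model, and the function names are hypothetical.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      HYDROPHOBIC = set("AVILMFWC")
      CHARGED = set("DEKR")

      def descriptors(peptide):
          n = len(peptide)
          return [n,
                  sum(aa in HYDROPHOBIC for aa in peptide) / n,
                  sum(aa in CHARGED for aa in peptide) / n]

      def train_proteotypic_svm(peptides, observed):
          """peptides: list of sequences; observed: 1 if detected by MS, else 0."""
          X = np.array([descriptors(p) for p in peptides])
          y = np.array(observed)
          clf = SVC(kernel="rbf", C=1.0, gamma="scale")
          scores = cross_val_score(clf, X, y, cv=5)   # rough accuracy estimate
          return clf.fit(X, y), scores.mean()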

  6. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    PubMed Central

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  7. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    NASA Astrophysics Data System (ADS)

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which is not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate
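
    As a generic illustration of the solver strategy described in the two records above (an algebraic multigrid preconditioner driving a Krylov iteration), the sketch below solves a model Poisson system using the third-party pyamg and SciPy packages; the paper's custom AMG preconditioner for nonlinear elasticity is not reproduced here.

      import numpy as np
      import pyamg
      from scipy.sparse.linalg import cg

      A = pyamg.gallery.poisson((200, 200), format="csr")   # model SPD system
      b = np.random.rand(A.shape[0])

      ml = pyamg.smoothed_aggregation_solver(A)   # AMG hierarchy (setup phase)
      M = ml.aspreconditioner(cycle="V")          # expose it as a preconditioner

      x, info = cg(A, b, M=M)                     # preconditioned conjugate gradients
      assert info == 0                            # 0 means the iteration converged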

  8. Model updating of damped structures using FRF data

    NASA Astrophysics Data System (ADS)

    Lin, R. M.; Zhu, J.

    2006-11-01

    Due to the important contribution of damping to structural vibration, model updating of damped structures becomes significant and remains an issue in most model updating methods developed to date. In this paper, the frequency response function (FRF) method, which is one of the most frequently referenced model updating methods, has been further developed to identify damping matrices of structural systems, as well as mass and stiffness matrices. In order to overcome the problem of complexity of measured FRF and modal data, complex updating formulations using FRF data to identify damping coefficients have been established for the cases of proportional damping and general non-proportional damping. To demonstrate the effectiveness of the proposed complex FRF updating method, numerical simulations based on the GARTEUR structure with structural damping have been presented. The updated results have shown that the complex FRF updating method can be used to derive accurate updated mass and stiffness modelling errors and system damping matrices.
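
    For reference, the receptance FRF matrix that such updating methods work with is, for a viscously damped system (standard textbook form, not specific to the formulation above),

      \mathbf{H}(\omega) = \left[ \mathbf{K} - \omega^{2}\mathbf{M} + \mathrm{i}\,\omega\,\mathbf{C} \right]^{-1}

    where M, C and K are the mass, damping and stiffness matrices; for structural (hysteretic) damping the i\omega C term is replaced by an imaginary damping matrix iD.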

  9. Accurate determination of the fine-structure intervals in the 3P ground states of C-13 and C-12 by far-infrared laser magnetic resonance

    NASA Technical Reports Server (NTRS)

    Cooksy, A. L.; Saykally, R. J.; Brown, J. M.; Evenson, K. M.

    1986-01-01

    Accurate values are presented for the fine-structure intervals in the 3P ground state of neutral atomic C-12 and C-13 as obtained from laser magnetic resonance spectroscopy. The rigorous analysis of C-13 hyperfine structure, the measurement of resonant fields for C-12 transitions at several additional far-infrared laser frequencies, and the increased precision of the C-12 measurements, permit significant improvement in the evaluation of these energies relative to earlier work. These results will expedite the direct and precise measurement of these transitions in interstellar sources and should assist in the determination of the interstellar C-12/C-13 abundance ratio.

  10. Functional connectivity and structural covariance between regions of interest can be measured more accurately using multivariate distance correlation.

    PubMed

    Geerligs, Linda; Cam-Can; Henson, Richard N

    2016-07-15

    Studies of brain-wide functional connectivity or structural covariance typically use measures like the Pearson correlation coefficient, applied to data that have been averaged across voxels within regions of interest (ROIs). However, averaging across voxels may result in biased connectivity estimates when there is inhomogeneity within those ROIs, e.g., sub-regions that exhibit different patterns of functional connectivity or structural covariance. Here, we propose a new measure based on "distance correlation", a test of multivariate dependence of high-dimensional vectors, which allows for both linear and non-linear dependencies. We used simulations to show how distance correlation outperforms Pearson correlation in the face of inhomogeneous ROIs. To evaluate this new measure on real data, we use resting-state fMRI scans and T1 structural scans from 2 sessions on each of 214 participants from the Cambridge Centre for Ageing & Neuroscience (Cam-CAN) project. Pearson correlation and distance correlation showed similar average connectivity patterns, for both functional connectivity and structural covariance. Nevertheless, distance correlation was shown to be 1) more reliable across sessions, 2) more similar across participants, and 3) more robust to different sets of ROIs. Moreover, we found that the similarity between functional connectivity and structural covariance estimates was higher for distance correlation compared to Pearson correlation. We also explored the relative effects of different preprocessing options and motion artefacts on functional connectivity. Because distance correlation is easy to implement and fast to compute, it is a promising alternative to Pearson correlations for investigating ROI-based brain-wide connectivity patterns, for functional as well as structural data.
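
    A minimal numpy sketch of the sample distance correlation between two multivariate data sets is shown below (rows are time points, columns are voxels within an ROI). This is the plain biased estimator, without the bias correction, preprocessing or significance testing used in the study.

      import numpy as np
      from scipy.spatial.distance import cdist

      def double_center(D):
          return D - D.mean(axis=0) - D.mean(axis=1, keepdims=True) + D.mean()

      def distance_correlation(X, Y):
          A = double_center(cdist(X, X))   # pairwise distances within ROI 1
          B = double_center(cdist(Y, Y))   # pairwise distances within ROI 2
          dcov2_xy = (A * B).mean()
          dcov2_xx = (A * A).mean()
          dcov2_yy = (B * B).mean()
          return np.sqrt(dcov2_xy / np.sqrt(dcov2_xx * dcov2_yy))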

  11. A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs

    NASA Astrophysics Data System (ADS)

    Bouneb, I.; Kerrour, F.

    2016-03-01

    Semiconductor components have become the privileged medium of information and communication, in particular thanks to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, shrinking the transistor gate length is no longer enough to enhance performance and keep pace with Moore's law, particularly for broadband telecommunication systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed. The most effective components in this area are High Electron Mobility Transistors (HEMTs) on III-V substrates. This work investigates an approach contributing to the development of a numerical model based on physical and numerical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We developed a calculation using projective methods that allows integration of the Hamiltonian using Green functions in the Schrödinger equation, for a rigorous self-consistent resolution with the Poisson equation. A simple analytical approach for charge control in the quantum well region of an AlGaAs/GaAs HEMT structure is presented. A charge-control equation, accounting for a variable average distance of the 2-DEG from the interface, is introduced. Our approach, which aims to obtain ns-Vg characteristics, is mainly based on a new linear expression for the Fermi-level variation with the two-dimensional electron gas density in the HEMT, on the notion of effective doping, and on a new expression for ΔEc.
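
    For orientation, the classical linear charge-control relation that such HEMT models typically start from is (standard form, not the new expression proposed above)

      n_s = \frac{\varepsilon}{q\,(d + \Delta d)} \left( V_g - V_{\mathrm{off}} - \frac{E_F}{q} \right)

    where d is the doped-barrier thickness, \Delta d the effective distance of the 2-DEG from the interface and V_off the threshold voltage; the record above proposes a new linear E_F(n_s) expression and a variable \Delta d within this type of framework.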

  12. Structure and modeling of turbulence

    SciTech Connect

    Novikov, E.A.

    1995-12-31

    The "vortex strings" scale l_s ~ L Re^(-3/10) (L - external scale, Re - Reynolds number) is suggested as a grid scale for the large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for the large-eddy simulations (LES).

  13. Extending the molecular size in accurate quantum-chemical calculations: the equilibrium structure and spectroscopic properties of uracil.

    PubMed

    Puzzarini, Cristina; Barone, Vincenzo

    2011-04-21

    The equilibrium structure of uracil has been investigated using both theoretical and experimental data. With respect to the former, quantum-chemical calculations at the coupled-cluster level in conjunction with a triple-zeta basis set have been carried out. Extrapolation to the basis set limit, performed employing the second-order Møller-Plesset perturbation theory, and inclusion of core-correlation and diffuse-function corrections have also been considered. Based on the available rotational constants for various isotopic species together with corresponding computed vibrational corrections, the semi-experimental equilibrium structure of uracil has been determined for the first time. Theoretical and semi-experimental structures have been found in remarkably good agreement, thus pointing out the limitations of previous experimental determinations. Molecular and spectroscopic properties of uracil have then been studied by means of the composite computational approach introduced for the molecular structure evaluation. Among the results achieved, we mention the revision of the dipole moment. On the whole, it has been proved that the computational procedure presented is able to provide parameters with the proper accuracy to support experimental investigations of large molecules of biological interest.

  14. Gap between technically accurate information and socially appropriate information for structural health monitoring system installed into tall buildings

    NASA Astrophysics Data System (ADS)

    Mita, Akira

    2016-04-01

    The importance of structural health monitoring systems for tall buildings is now widely recognized, at least by structural engineers and managers at large real estate companies, as a means to ensure structural safety immediately after a large earthquake and to demonstrate the quantitative safety of buildings to potential tenants. Some leading real estate companies have decided to install the system into all tall buildings. Considering this tendency, a pilot project for the west area of Shinjuku Station, supported by the Japan Science and Technology Agency, was started by the author's team to explore the possibility of using the system to provide safe spaces for commuters and residents. The system was installed into six tall buildings. From our experience, it turned out that viewing the system only from a technological perspective was not sufficient for it to be accepted and to be really useful. Safe spaces require not only structural safety but also the soundness of key functions of the building. We need help from social scientists, medical doctors, city planners, etc. to further improve the integrity of the system.

  15. Accurate protein structure annotation through competitive diffusion of enzymatic functions over a network of local evolutionary similarities.

    PubMed

    Venner, Eric; Lisewski, Andreas Martin; Erdin, Serkan; Ward, R Matthew; Amin, Shivas R; Lichtarge, Olivier

    2010-01-01

    High-throughput Structural Genomics yields many new protein structures without known molecular function. This study aims to uncover these missing annotations by globally comparing select functional residues across the structural proteome. First, Evolutionary Trace Annotation, or ETA, identifies which proteins have local evolutionary and structural features in common; next, these proteins are linked together into a proteomic network of ETA similarities; then, starting from proteins with known functions, competing functional labels diffuse link-by-link over the entire network. Every node is thus assigned a likelihood z-score for every function, and the most significant one at each node wins and defines its annotation. In high-throughput controls, this competitive diffusion process recovered enzyme activity annotations with 99% and 97% accuracy at half-coverage for the third and fourth Enzyme Commission (EC) levels, respectively. This corresponds to false positive rates 4-fold lower than nearest-neighbor and 5-fold lower than sequence-based annotations. In practice, experimental validation of the predicted carboxylesterase activity in a protein from Staphylococcus aureus illustrated the effectiveness of this approach in the context of an increasingly drug-resistant microbe. This study further links molecular function to a small number of evolutionarily important residues recognizable by Evolutionary Tracing and it points to the specificity and sensitivity of functional annotation by competitive global network diffusion. A web server is at http://mammoth.bcm.tmc.edu/networks.
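
    A schematic numpy sketch of competitive label diffusion over a similarity network is shown below: known annotations are repeatedly injected, scores spread along weighted edges, and each unannotated node takes the best-scoring label. The ETA similarity weights, z-score normalisation and coverage thresholds of the actual method are omitted, and the parameter names are illustrative.

      import numpy as np

      def competitive_diffusion(W, seed_labels, n_labels, alpha=0.85, n_iter=100):
          """W: (n, n) symmetric similarity matrix; seed_labels: dict node -> label."""
          n = W.shape[0]
          P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-normalise
          F = np.zeros((n, n_labels))
          Y = np.zeros((n, n_labels))
          for node, label in seed_labels.items():
              Y[node, label] = 1.0
          for _ in range(n_iter):
              F = alpha * P @ F + (1.0 - alpha) * Y   # diffuse, re-inject seeds
          return F.argmax(axis=1)                      # winning label per node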

  16. A model for chromatin structure.

    PubMed Central

    Li, H J

    1975-01-01

    A model for chromatin structure is presented. (a) Each of four histone species, H2A (IIb1 or f2a2), H2B (IIb2 or f2b), H3 (III or f3) and H4 (IV or f2a1), can form a parallel dimer. (b) These dimers can form two tetramers, (H2A)2(H2B)2 and (H3)2(H4)2. (c) These two tetramers bind a segment of DNA and condense it into a "C" segment. (d) The adjacent segments, termed extended or "E" segments, are bound by histone H1 (I or f1) for the major fraction of chromatin; the other "E" regions can be either bound by non-histone proteins or free of protein binding. (e) The binding of histones causes a structural distortion of the DNA which, depending upon the external conditions, may generate the formation of either an open structure with a heterogeneous and non-uniform supercoil or a compact structure with a string of beads. The model is supported by experimental data on histone-histone interaction, histone-DNA interaction and histone subunit-DNA interaction. PMID:1101222

  17. Efficient Calculation of Accurate Reaction Energies-Assessment of Different Models in Electronic Structure Theory.

    PubMed

    Friedrich, Joachim

    2015-08-11

    In this work we analyze the accuracy and the efficiency of different schemes to obtain the complete basis set limit for CCSD(T). It is found that composite schemes using an MP2 increment to reach the basis set limit provide high accuracy combined with high efficiency. In these composite schemes the MP2-F12/cc-pVTZ-F12 method is suitable to compute the MP2 contribution at the basis set limit. We propose to use the def2-TZVP or the TZVPP basis sets at the coupled cluster level in combination with the cc-pVTZ-F12 basis set at the MP2 level to compute reaction energies close to the basis set limit, if high accuracy methods like CCSD(T)(F12*) or 56-extrapolations are no longer feasible due to the computational effort. The standard deviation of CCSD(T)+ΔMP2/cc-pVTZ-F12/def2-TZVP and CCSD(T)+ΔMP2/cc-pVTZ-F12/TZVPP is found to be only 0.93 and 0.65 kJ/mol for a test set of 51 closed shell reactions. Furthermore, we provide a comprehensive list of different computational strategies to obtain CCSD(T) reaction energies with an efficiency and accuracy measure. Finally we analyze how different choices of the exponent in the correlation factor (γ) change the results when using explicitly correlated methods. The statistical results in this study are based on a set of 51 reaction energies in the range of 0.7 to 631.5 kJ/mol.
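
    Read as a composite scheme, the "CCSD(T)+ΔMP2/cc-pVTZ-F12/def2-TZVP" notation above corresponds, on the usual additivity assumption, to

      E \approx E_{\mathrm{CCSD(T)}}^{\text{def2-TZVP}} + \left( E_{\mathrm{MP2\text{-}F12}}^{\text{cc-pVTZ-F12}} - E_{\mathrm{MP2}}^{\text{def2-TZVP}} \right)

    i.e. the small-basis coupled-cluster reaction energy is corrected by an MP2 increment evaluated near the basis-set limit.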

  18. A highly accurate protein structural class prediction approach using auto cross covariance transformation and recursive feature elimination.

    PubMed

    Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming

    2015-12-01

    Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets.
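
    A sketch of the feature pipeline described above, assuming scikit-learn, is given below: an auto cross covariance (ACC) transform of a PSSM followed by SVM-based recursive feature elimination. The ACC definition used here is the common lag-based form; the exact normalisation and the value of top_k in the paper may differ.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.feature_selection import RFE

      def acc_features(pssm, max_lag=4):
          """pssm: (L, 20) position-specific score matrix for one protein."""
          L, D = pssm.shape
          centred = pssm - pssm.mean(axis=0)
          feats = []
          for lag in range(1, max_lag + 1):
              for i in range(D):
                  for j in range(D):
                      feats.append(np.mean(centred[:L - lag, i] * centred[lag:, j]))
          return np.array(feats)

      def fit_structural_class(pssms, labels, top_k=300):
          X = np.array([acc_features(p) for p in pssms])
          selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=top_k)
          return selector.fit(X, np.array(labels))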

  19. Snow Micro-Structure Model

    2014-06-25

    PIKA is a MOOSE-based application for modeling micro-structure evolution of seasonal snow. The model will be useful for environmental, atmospheric, and climate scientists. Possible applications include application to energy balance models, ice sheet modeling, and avalanche forecasting. The model implements physics from published, peer-reviewed articles. The main purpose is to foster university and laboratory collaboration to build a larger multi-scale snow model using MOOSE. The main feature of the code is that it is implemented using the MOOSE framework, thus making features such as multiphysics coupling, adaptive mesh refinement, and parallel scalability native to the application. PIKA implements three equations: the phase-field equation for tracking the evolution of the ice-air interface within seasonal snow at the grain-scale; the heat equation for computing the temperature of both the ice and air within the snow; and the mass transport equation for monitoring the diffusion of water vapor in the pore space of the snow.

  20. Snow Micro-Structure Model

    SciTech Connect

    Micah Johnson, Andrew Slaughter

    2014-06-25

    PIKA is a MOOSE-based application for modeling micro-structure evolution of seasonal snow. The model will be useful for environmental, atmospheric, and climate scientists. Possible applications include application to energy balance models, ice sheet modeling, and avalanche forecasting. The model implements physics from published, peer-reviewed articles. The main purpose is to foster university and laboratory collaboration to build a larger multi-scale snow model using MOOSE. The main feature of the code is that it is implemented using the MOOSE framework, thus making features such as multiphysics coupling, adaptive mesh refinement, and parallel scalability native to the application. PIKA implements three equations: the phase-field equation for tracking the evolution of the ice-air interface within seasonal snow at the grain-scale; the heat equation for computing the temperature of both the ice and air within the snow; and the mass transport equation for monitoring the diffusion of water vapor in the pore space of the snow.

  1. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    PubMed Central

    Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  2. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation

    NASA Astrophysics Data System (ADS)

    Jahanshahi, Mohammad R.; Masri, Sami F.

    2013-03-01

    In mechanical, aerospace and civil structures, cracks are important defects that can cause catastrophes if neglected. Visual inspection is currently the predominant method for crack assessment. This approach is tedious, labor-intensive, subjective and highly qualitative. An inexpensive alternative to current monitoring methods is to use a robotic system that could perform autonomous crack detection and quantification. To reach this goal, several image-based crack detection approaches have been developed; however, the crack thickness quantification, which is an essential element for a reliable structural condition assessment, has not been sufficiently investigated. In this paper, a new contact-less crack quantification methodology, based on computer vision and image processing concepts, is introduced and evaluated against a crack quantification approach which was previously developed by the authors. The proposed approach in this study utilizes depth perception to quantify crack thickness and, as opposed to most previous studies, needs no scale attachment to the region under inspection, which makes this approach ideal for incorporation with autonomous or semi-autonomous mobile inspection systems. Validation tests are performed to evaluate the performance of the proposed approach, and the results show that the new proposed approach outperforms the previously developed one.

  3. One dimensional P wave velocity structure of the crust beneath west Java and accurate hypocentre locations from local earthquake inversion

    SciTech Connect

    Supardiyono; Santosa, Bagus Jaya

    2012-06-20

    A one-dimensional (1-D) velocity model and station corrections for the West Java zone were computed by inverting P-wave arrival times recorded on a local seismic network of 14 stations. A total of 61 local events with a minimum of 6 P-phases, rms 0.56 s and a maximum gap of 299° were selected. Comparison with previous earthquake locations shows an improvement for the relocated earthquakes. Tests were carried out to verify the robustness of the inversion results in order to corroborate the conclusions drawn from our research. The obtained minimum 1-D velocity model can be used to improve routine earthquake locations and represents a further step toward more detailed seismotectonic studies in this area of West Java.

  4. Reduced Order Models for Fluid-Structure Interaction Phenomena

    NASA Astrophysics Data System (ADS)

    Gallardo, Daniele

    With the advent of active flow control devices for regulating the structural responses of systems involving fluid-structure interaction phenomena, there is a growing need for efficient models that can be used to control the system. The first step is then to be able to model the system in an efficient way based on reduced-order models. This is needed so that accurate predictions of the system evolution can be performed in a fast manner, ideally in real time. However, existing reduced-order models of fluid-structure interaction phenomena that provide closed-form solutions are applicable to only a limited set of scenarios, while for real applications high-fidelity experiments or numerical simulations are required, which are unsuitable as efficient or reduced-order models. This thesis proposes a novel reduced-order and efficient model for fluid-structure interaction phenomena. The model structure employed is generic across different fluid-structure interaction problems. Based on this structure, the model is first built for a given fluid-structure interaction problem from a database generated through high-fidelity numerical simulations, after which it can be used to predict the structural response over a wide set of flow conditions for the fluid-structure interaction problem at hand. The model is tested on two cases: a cylinder suspended in a low Reynolds number flow that includes the lock-in region, and an airfoil subjected to plunge oscillations in a high Reynolds number regime. For each case, in addition to the training profiles we also present validation profiles that are used to determine the performance of the reduced-order model. The reduced-order model devised in this study proved to be an effective and efficient modeling method for fluid-structure interaction phenomena and showed its applicability in very different kinds of scenarios.

  5. Improved Structure Factors for Modeling XRTS Experiments

    NASA Astrophysics Data System (ADS)

    Stanton, Liam; Murillo, Michael; Benage, John; Graziani, Frank

    2012-10-01

    Characterizing warm dense matter (WDM) has gained renewed interest due to advances in powerful lasers and next generation light sources. Because WDM is strongly coupled and moderately degenerate, we must often rely on simulations, which are necessarily based on ions interacting through a screened potential that must be determined. Given such a potential, ionic radial distribution functions (RDFs) and structure factors (SFs) can be calculated and related to XRTS data and EOS quantities. While many screening models are available, such as the Debye (Yukawa) potential, they are known to over-screen and are unable to capture accurate bound-state effects, which have been shown to contribute to both scattering data from XRTS as well as the short-range repulsion in the RDF. Here, we present a model which incorporates an improvement to the screening length in addition to a consistent treatment of the core electrons. This new potential improves the accuracy of both bound-state and screening effects without adding to the computational complexity of Debye-like models. Calculations of ionic RDFs and SFs are compared to experimental data and quantum molecular dynamics simulations for Be, Na, Mg and Al in the WDM and liquid metal regime.
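
    The baseline Debye (Yukawa) screened ion-ion potential referred to above is, in Gaussian units,

      V(r) = \frac{Z_1 Z_2 e^{2}}{r}\, e^{-r/\lambda_s}

    where \lambda_s is the screening length; the improved model adjusts the screening length and adds a consistent treatment of the core (bound) electrons.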

  6. Accurate extraction of Lagrangian coherent structures over finite domains with application to flight data analysis over Hong Kong International Airport.

    PubMed

    Tang, Wenbo; Chan, Pak Wai; Haller, George

    2010-03-01

    Locating Lagrangian coherent structures (LCS) for dynamical systems defined on a spatially limited domain presents a challenge because trajectory integration must be stopped at the boundary for lack of further velocity data. This effectively turns the domain boundary into an attractor, introduces edge effects resulting in spurious ridges in the associated finite-time Lyapunov exponent (FTLE) field, and causes some of the real ridges of the FTLE field to be suppressed by strong spurious ridges. To address these issues, we develop a finite-domain FTLE method that renders LCS with an accuracy and fidelity that is suitable for automated feature detection. We show the application of this technique to the analysis of velocity data from aircraft landing at the Hong Kong International Airport.
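
    A minimal numpy sketch of the FTLE field from a precomputed flow map on a regular grid is given below; it illustrates the quantity whose ridges are being extracted, but ignores the finite-domain edge corrections that are the actual subject of the paper.

      import numpy as np

      def ftle(flow_x, flow_y, dx, dy, T):
          """flow_x, flow_y: final positions of particles started on a regular grid
          with spacings dx (axis 0) and dy (axis 1); T: integration time."""
          dFxdx, dFxdy = np.gradient(flow_x, dx, dy)
          dFydx, dFydy = np.gradient(flow_y, dx, dy)
          out = np.zeros_like(flow_x)
          for idx in np.ndindex(flow_x.shape):
              F = np.array([[dFxdx[idx], dFxdy[idx]],
                            [dFydx[idx], dFydy[idx]]])
              C = F.T @ F                              # Cauchy-Green strain tensor
              lam_max = np.linalg.eigvalsh(C)[-1]      # largest eigenvalue
              out[idx] = np.log(lam_max) / (2.0 * abs(T))
          return out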

  7. A mathematical recursive model for accurate description of the phase behavior in the near-critical region by Generalized van der Waals Equation

    NASA Astrophysics Data System (ADS)

    Kim, Jibeom; Jeon, Joonhyeon

    2015-01-01

    Recently, related studies on Equations Of State (EOS) have reported that the generalized van der Waals (GvdW) equation gives poor representations in the near-critical region for non-polar and non-spherical molecules. Hence, there still remains the problem of choosing GvdW parameters that minimize the loss in describing saturated vapor densities and vice versa. This paper describes a recursive GvdW model (rGvdW) for an accurate representation of pure fluid materials in the near-critical region. For the performance evaluation of rGvdW in the near-critical region, other EOS models are also applied to two groups of pure molecules: alkanes and amines. The comparison results show that rGvdW provides much more accurate and reliable predictions of pressure than the others. The calculation model of the EOS through this approach gives additional insight into the physical significance of accurate prediction of pressure in the near-critical region.
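
    For context, GvdW-type equations build on the classical van der Waals form

      \left( P + \frac{a}{V_m^{2}} \right) \left( V_m - b \right) = RT

    with a and b the attraction and covolume parameters; the recursive correction introduced by rGvdW in the near-critical region is not reproduced here.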

  8. An accurately preorganized IRES RNA structure enables eIF4G capture for initiation of viral translation.

    PubMed

    Imai, Shunsuke; Kumar, Parimal; Hellen, Christopher U T; D'Souza, Victoria M; Wagner, Gerhard

    2016-09-01

    Many viruses bypass canonical cap-dependent translation in host cells by using internal ribosomal entry sites (IRESs) in their transcripts; IRESs hijack initiation factors for the assembly of initiation complexes. However, it is currently unknown how IRES RNAs recognize initiation factors that have no endogenous RNA binding partners; in a prominent example, the IRES of encephalomyocarditis virus (EMCV) interacts with the HEAT-1 domain of eukaryotic initiation factor 4G (eIF4G). Here we report the solution structure of the J-K region of this IRES and show that its stems are precisely organized to position protein-recognition bulges. This multisite interaction mechanism operates on an all-or-nothing principle in which all domains are required. This preorganization is accomplished by an 'adjuster module': a pentaloop motif that acts as a dual-sided docking station for base-pair receptors. Because subtle changes in the orientation abrogate protein capture, our study highlights how a viral RNA acquires affinity for a target protein. PMID:27525590

  9. Towards accurate solvation dynamics of divalent cations in water using the polarizable amoeba force field: From energetics to structure.

    PubMed

    Piquemal, Jean-Philip; Perera, Lalith; Cisneros, G Andrés; Ren, Pengyu; Pedersen, Lee G; Darden, Thomas A

    2006-08-01

    Molecular dynamics simulations were performed using a modified amoeba force field to determine hydration and dynamical properties of the divalent cations Ca2+ and Mg2+. The extension of amoeba to divalent cations required the introduction of a cation specific parametrization. To accomplish this, the Thole polarization damping model parametrization was modified based on the ab initio polarization energy computed by a constrained space orbital variation energy decomposition scheme. Excellent agreement has been found with condensed phase experimental results using parameters derived from gas phase ab initio calculations. Additionally, we have observed that the coordination of the calcium cation is influenced by the size of the periodic water box, a recurrent issue in first principles molecular dynamics studies.

  10. Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum

    2011-01-01

    Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…

  11. Accurate small and wide angle x-ray scattering profiles from atomic models of proteins and nucleic acids

    SciTech Connect

    Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.

    2014-12-14

    A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb+ and Sr2+) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein–Zernike equations, with results from the Kovalenko–Hirata closure being closest to experiment for the cases studied here.

  12. Accurate small and wide angle x-ray scattering profiles from atomic models of proteins and nucleic acids.

    PubMed

    Nguyen, Hung T; Pabit, Suzette A; Meisburger, Steve P; Pollack, Lois; Case, David A

    2014-12-14

    A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb(+) and Sr(2+)) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein-Zernike equations, with results from the Kovalenko-Hirata closure being closest to experiment for the cases studied here.

  13. Accurate small and wide angle x-ray scattering profiles from atomic models of proteins and nucleic acids

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.

    2014-12-01

    A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb+ and Sr2+) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein-Zernike equations, with results from the Kovalenko-Hirata closure being closest to experiment for the cases studied here.
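
    For orientation, the sketch below computes a vacuum scattering profile from atomic coordinates with the textbook Debye formula; it is not the RISM-based treatment described in the records above, which additionally models the water and ion distribution around the solute, and the q-independent form factors are a simplification.

      import numpy as np
      from scipy.spatial.distance import pdist

      def debye_profile(coords, form_factors, q_values):
          """coords: (N, 3) atomic positions; form_factors: (N,) scattering factors
          (approximated as q-independent); q_values: momentum transfer values."""
          q_values = np.asarray(q_values, dtype=float)
          f = np.asarray(form_factors, dtype=float)
          r_ij = pdist(coords)                       # unique pairwise distances
          pair_ff = np.outer(f, f)[np.triu_indices(len(f), k=1)]
          I = np.empty_like(q_values)
          for k, q in enumerate(q_values):
              sinc = np.sinc(q * r_ij / np.pi)       # sin(qr)/(qr), safe at qr = 0
              I[k] = np.sum(f**2) + 2.0 * np.sum(pair_ff * sinc)
          return I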

  14. CISNET lung models: Comparison of model assumptions and model structures

    PubMed Central

    McMahon, Pamela M.; Hazelton, William; Kimmel, Marek; Clarke, Lauren

    2012-01-01

    Sophisticated modeling techniques can be powerful tools to help us understand the effects of cancer control interventions on population trends in cancer incidence and mortality. Readers of journal articles are however rarely supplied with modeling details. Six modeling groups collaborated as part of the National Cancer Institute’s Cancer Intervention and Surveillance Modeling Network (CISNET) to investigate the contribution of US tobacco control efforts towards reducing lung cancer deaths over the period 1975 to 2000. The models included in this monograph were developed independently and use distinct, complementary approaches towards modeling the natural history of lung cancer. The models used the same data for inputs and agreed on the design of the analysis and the outcome measures. This article highlights aspects of the models that are most relevant to similarities of or differences between the results. Structured comparisons can increase the transparency of these complex models. PMID:22882887

  15. Adapting Data Processing To Compare Model and Experiment Accurately: A Discrete Element Model and Magnetic Resonance Measurements of a 3D Cylindrical Fluidized Bed.

    PubMed

    Boyce, Christopher M; Holland, Daniel J; Scott, Stuart A; Dennis, John S

    2013-12-18

    Discrete element modeling is being used increasingly to simulate flow in fluidized beds. These models require complex measurement techniques to provide validation for the approximations inherent in the model. This paper introduces the idea of modeling the experiment to ensure that the validation is accurate. Specifically, a 3D, cylindrical gas-fluidized bed was simulated using a discrete element model (DEM) for particle motion coupled with computational fluid dynamics (CFD) to describe the flow of gas. The results for time-averaged, axial velocity during bubbling fluidization were compared with those from magnetic resonance (MR) experiments made on the bed. The DEM-CFD data were postprocessed with various methods to produce time-averaged velocity maps for comparison with the MR results, including a method which closely matched the pulse sequence and data processing procedure used in the MR experiments. The DEM-CFD results processed with the MR-type time-averaging closely matched experimental MR results, validating the DEM-CFD model. Analysis of different averaging procedures confirmed that MR time-averages of dynamic systems correspond to particle-weighted averaging, rather than frame-weighted averaging, and also demonstrated that the use of Gaussian slices in MR imaging of dynamic systems is valid. PMID:24478537

  16. Adapting Data Processing To Compare Model and Experiment Accurately: A Discrete Element Model and Magnetic Resonance Measurements of a 3D Cylindrical Fluidized Bed

    PubMed Central

    2013-01-01

    Discrete element modeling is being used increasingly to simulate flow in fluidized beds. These models require complex measurement techniques to provide validation for the approximations inherent in the model. This paper introduces the idea of modeling the experiment to ensure that the validation is accurate. Specifically, a 3D, cylindrical gas-fluidized bed was simulated using a discrete element model (DEM) for particle motion coupled with computational fluid dynamics (CFD) to describe the flow of gas. The results for time-averaged, axial velocity during bubbling fluidization were compared with those from magnetic resonance (MR) experiments made on the bed. The DEM-CFD data were postprocessed with various methods to produce time-averaged velocity maps for comparison with the MR results, including a method which closely matched the pulse sequence and data processing procedure used in the MR experiments. The DEM-CFD results processed with the MR-type time-averaging closely matched experimental MR results, validating the DEM-CFD model. Analysis of different averaging procedures confirmed that MR time-averages of dynamic systems correspond to particle-weighted averaging, rather than frame-weighted averaging, and also demonstrated that the use of Gaussian slices in MR imaging of dynamic systems is valid. PMID:24478537
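
    The distinction between the two averaging conventions discussed above can be made concrete with a small numpy sketch: a frame-weighted average treats every frame equally, whereas a particle-weighted average (to which MR time-averages were found to correspond) weights each frame by how many particles contribute to the region being averaged.

      import numpy as np

      def frame_weighted(velocities_per_frame):
          """velocities_per_frame: list of 1D arrays (axial velocities in one voxel,
          one array per frame); each non-empty frame counts equally."""
          return np.mean([v.mean() for v in velocities_per_frame if v.size])

      def particle_weighted(velocities_per_frame):
          """Every particle observation counts equally, so frames with more
          particles in the voxel carry more weight."""
          all_v = np.concatenate([v for v in velocities_per_frame if v.size])
          return all_v.mean()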

  17. Charge Central Interpretation of the Full Nonlinear PB Equation: Implications for Accurate and Scalable Modeling of Solvation Interactions.

    PubMed

    Xiao, Li; Wang, Changhao; Ye, Xiang; Luo, Ray

    2016-08-25

    Continuum solvation modeling based upon the Poisson-Boltzmann equation (PBE) is widely used in structural and functional analysis of biomolecules. In this work, we propose a charge-central interpretation of the full nonlinear PBE electrostatic interactions. The validity of the charge-central view or simply charge view, as formulated as a vacuum Poisson equation with effective charges, was first demonstrated by reproducing both electrostatic potentials and energies from the original solvated full nonlinear PBE. There are at least two benefits when the charge-central framework is applied. First the convergence analyses show that the use of polarization charges allows a much faster converging numerical procedure for electrostatic energy and forces calculation for the full nonlinear PBE. Second, the formulation of the solvated electrostatic interactions as effective charges in vacuum allows scalable algorithms to be deployed for large biomolecular systems. Here, we exploited the charge-view interpretation and developed a particle-particle particle-mesh (P3M) strategy for the full nonlinear PBE systems. We also studied the accuracy and convergence of solvation forces with the charge-view and the P3M methods. It is interesting to note that the convergence of both the charge-view and the P3M methods is more rapid than the original full nonlinear PBE method. Given the developments and validations documented here, we are working to adapt the P3M treatment of the full nonlinear PBE model to molecular dynamics simulations.
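
    Schematically, and up to unit conventions, the full nonlinear PBE for a symmetric 1:1 electrolyte reads

      \nabla \cdot \left[ \varepsilon(\mathbf{r}) \nabla \phi(\mathbf{r}) \right] - \bar{\kappa}^{2}(\mathbf{r}) \sinh\phi(\mathbf{r}) = -4\pi \rho_f(\mathbf{r})

    with \phi the reduced electrostatic potential, \varepsilon(\mathbf{r}) the position-dependent dielectric, \bar{\kappa} the modified screening parameter (zero inside the solute) and \rho_f the fixed solute charge density; the charge-central view above recasts the solution of this equation as a vacuum Poisson problem with effective charges.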

  18. Accurate Spectral Fits of Jupiter's Great Red Spot: VIMS Visual Spectra Modelled with Chromophores Created by Photolyzed Ammonia Reacting with Acetylene

    NASA Astrophysics Data System (ADS)

    Baines, Kevin; Sromovsky, Lawrence A.; Fry, Patrick M.; Carlson, Robert W.; Momary, Thomas W.

    2016-10-01

    We report results incorporating the red-tinted photochemically-generated aerosols of Carlson et al (2016, Icarus 274, 106-115) in spectral models of Jupiter's Great Red Spot (GRS). Spectral models of the 0.35-1.0-micron spectrum show good agreement with Cassini/VIMS near-center-meridian and near-limb GRS spectra for model morphologies incorporating an optically-thin layer of Carlson (2016) aerosols at high altitudes, either at the top of the tropospheric GRS cloud, or in a distinct stratospheric haze layer. Specifically, a two-layer "crème brûlée" structure of the Mie-scattering Carlson et al (2016) chromophore attached to the top of a conservatively scattering (hereafter, "white") optically-thick cloud fits the spectra well. Currently, best agreement (reduced χ2 of 0.89 for the central-meridian spectrum) is found for a 0.195-0.217-bar, 0.19 ± 0.02 opacity layer of chromophores with mean particle radius of 0.14 ± 0.01 micron. As well, a structure with a detached stratospheric chromophore layer ~0.25 bar above a white tropospheric GRS cloud provides a good spectral match (reduced χ2 of 1.16). Alternatively, a cloud morphology with the chromophore coating white particles in a single optically- and physically-thick cloud (the "coated-shell model", initially explored by Carlson et al 2016) was found to give significantly inferior fits (best reduced χ2 of 2.9). Overall, we find that models accurately fit the GRS spectrum if (1) most of the optical depth of the chromophore is in a layer near the top of the main cloud or in a distinct separated layer above it, but is not uniformly distributed within the main cloud, (2) the chromophore consists of relatively small, 0.1-0.2-micron-radius particles, and (3) the chromophore layer optical depth is small, ~ 0.1-0.2. Thus, our analysis supports the exogenic origin of the red chromophore consistent with the Carlson et al (2016) photolytic production mechanism rather than an endogenic origin, such as upwelling of material

  19. Effective field model of roughness in magnetic nano-structures

    SciTech Connect

    Lepadatu, Serban

    2015-12-28

    An effective field model is introduced here within the micromagnetics formulation to study roughness in magnetic structures, treating sub-exchange-length roughness as a perturbation on a smooth structure. This allows the roughness contribution to be separated, which is found to give rise to an effective configurational anisotropy for both edge and surface roughness, and allows its effects to be modeled accurately with fine control over the roughness depth, without the explicit need to refine the computational cell size to accommodate the roughness profile. The model is validated by comparison with directly roughened structures for a series of magnetization-switching and domain-wall-velocity simulations and is found to be in excellent agreement for roughness levels up to the exchange length. The model is further applied to vortex domain wall velocity simulations with surface roughness, which is shown to significantly modify domain wall movement and result in dynamic pinning and stochastic creep effects.

  20. Interactive Modelling of Molecular Structures

    NASA Astrophysics Data System (ADS)

    Rustad, J. R.; Kreylos, O.; Hamann, B.

    2004-12-01

    The "Nanotech Construction Kit" (NCK) [1] is a new project aimed at improving the understanding of molecular structures at a nanometer-scale level by visualization and interactive manipulation. Our very first prototype is a virtual-reality program allowing the construction of silica and carbon structures from scratch by assembling them one atom at a time. In silica crystals or glasses, the basic building block is an SiO4 unit, with the four oxygen atoms arranged around the central silicon atom in the shape of a regular tetrahedron. Two silicate units can connect to each other by their silicon atoms covalently bonding to one shared oxygen atom. Geometrically, this means that two tetrahedra can link at their vertices. Our program is based on geometric representations and uses simple force fields to simulate the interaction of building blocks, such as forming/breaking of bonds and repulsion. Together with stereoscopic visualization and direct manipulation of building blocks using wands or data gloves, this enables users to create realistic and complex molecular models in short amounts of time. The NCK can either be used as a standalone tool, to analyze or experiment with molecular structures, or it can be used in combination with "traditional" molecular dynamics (MD) simulations. In a first step, the NCK can create initial configurations for subsequent MD simulation. In a more evolved setup, the NCK can serve as a visual front-end for an ongoing MD simulation, visualizing changes in simulation state in real time. Additionally, the NCK can be used to change simulation state on-the-fly, to experiment with different simulation conditions, or force certain events, e.g., the forming of a bond, and observe the simulation's reaction. [1] http://graphics.cs.ucdavis.edu/~okreylos/ResDev/NanoTech

  1. Structural model uncertainty in stochastic simulation

    SciTech Connect

    McKay, M.D.; Morrison, J.D.

    1997-09-01

    Prediction uncertainty in stochastic simulation models can be described by a hierarchy of components: stochastic variability at the lowest level, input and parameter uncertainty at a higher level, and structural model uncertainty at the top. It is argued that a usual paradigm for analysis of input uncertainty is not suitable for application to structural model uncertainty. An approach more likely to produce an acceptable methodology for analyzing structural model uncertainty is one that uses characteristics specific to the particular family of models.

  2. An accurate binding interaction model in de novo computational protein design of interactions: if you build it, they will bind.

    PubMed

    London, Nir; Ambroggio, Xavier

    2014-02-01

    Computational protein design efforts aim to create novel proteins and functions in an automated manner and, in the process, these efforts shed light on the factors shaping natural proteins. The focus of these efforts has progressed from the interior of proteins to their surface and the design of functions, such as binding or catalysis. Here we examine progress in the development of robust methods for the computational design of non-natural interactions between proteins and molecular targets such as other proteins or small molecules. This problem is referred to as the de novo computational design of interactions. Recent successful efforts in de novo enzyme design and the de novo design of protein-protein interactions open a path towards solving this problem. We examine the common themes in these efforts, and review recent studies aimed at understanding the nature of successes and failures in the de novo computational design of interactions. While several approaches culminated in success, the use of a well-defined structural model for a specific binding interaction in particular has emerged as a key strategy for a successful design, and is therefore reviewed with special consideration.

  3. The Specific Analysis of Structural Equation Models

    ERIC Educational Resources Information Center

    McDonald, Roderick P.

    2004-01-01

    Conventional structural equation modeling fits a covariance structure implied by the equations of the model. This treatment of the model often gives misleading results because overall goodness of fit tests do not focus on the specific constraints implied by the model. An alternative treatment arising from Pearl's directed acyclic graph theory…

  4. The Combination of Laser Scanning and Structure from Motion Technology for Creation of Accurate Exterior and Interior Orthophotos of ST. Nicholas Baroque Church

    NASA Astrophysics Data System (ADS)

    Koska, B.; Křemen, T.

    2013-02-01

    Terrestrial laser scanning has been used to create building documentation and 3D building models since its emergence at the turn of the millennium. Photogrammetry has an even longer tradition in this field. Both technologies have technical limitations when used to produce façade or interior orthophotos, but their combination is advantageous: laser scanning can be used to create an accurate 3D model, and photogrammetry can then apply high-quality colour information. Both technologies were used in synergy to create the building plans, the 2D drawing documentation of the facades and interior views, and the orthophotos of the St. Nicholas Baroque church in Prague. The case study is described in detail in the paper.

  5. Structural difficulty index: a reliable measure for modelability of protein tertiary structures.

    PubMed

    Kaushik, Rahul; Jayaram, B

    2016-09-01

    Success in protein tertiary-structure prediction is usually considered a function of the coverage and similarity/identity of a protein's sequence with suitable templates in the structural databases. However, this measure of the modelability of a protein sequence into its structure may be misleading. Addressing this limitation, we propose here a 'structural difficulty (SD)' index, which is derived from secondary structures, homology and physicochemical features of protein sequences. The SD index reflects the capability of predicting accurate structures and helps to assess the potential for developing proteome-level structural databases for various organisms with some of the best methodologies currently available. For instance, the plausibility of populating the structural database of the human proteome with reliable structures under 3 Å root mean square deviation from the corresponding natives is found to be ∼37% for a total of 11 084 manually curated soluble proteins, and ∼64% for all annotated and reviewed unique soluble proteins (344 661 sequences) of UniProtKB. Also, for 77 human pathogenic viruses comprising 2365 globular viral proteins, of which only 162 structures have been solved experimentally, the SD index places 1336 proteins in the modelable zone. The availability of reliable protein structures may prove a crucial aid in developing species-wise structural proteomic databases for accelerating function annotation and for drug development endeavors.

  6. Preliminary shuttle structural dynamics modeling design study

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design and development of a structural dynamics model of the space shuttle are discussed. The model provides for early study of structural dynamics problems, permits evaluation of the accuracy of the structural and hydroelastic analysis methods used on test vehicles, and provides for efficiently evaluating potential cost savings in structural dynamic testing techniques. The discussion is developed around the modes in which major input forces and responses occur and the significant structural details in these modes.

  7. LYRA, a webserver for lymphocyte receptor structural modeling.

    PubMed

    Klausen, Michael Schantz; Anderson, Mads Valdemar; Jespersen, Martin Closter; Nielsen, Morten; Marcatili, Paolo

    2015-07-01

    The accurate structural modeling of B- and T-cell receptors is fundamental to gaining detailed insight into the mechanisms underlying immunity and to developing new drugs and therapies. The LYRA (LYmphocyte Receptor Automated modeling) web server (http://www.cbs.dtu.dk/services/LYRA/) implements a complete and automated method for building B- and T-cell receptor structural models starting from their amino acid sequence alone. The webserver is freely available and easy to use for non-specialists. Upon submission, LYRA automatically generates alignments using ad hoc profiles, predicts the structural class of each hypervariable loop, selects the best templates in an automatic fashion, and provides within minutes a complete 3D model that can be downloaded or inspected online. Experienced users can manually select or exclude template structures according to case-specific information. LYRA is based on the canonical structure method, which in the last 30 years has been successfully used to generate antibody models of high accuracy, and in our benchmarks this approach proves to achieve similarly good results on TCR modeling, with a benchmarked average RMSD accuracy of 1.29 and 1.48 Å for B- and T-cell receptors, respectively. To the best of our knowledge, LYRA is the first automated server for the prediction of TCR structure.

  8. LYRA, a webserver for lymphocyte receptor structural modeling

    PubMed Central

    Klausen, Michael Schantz; Anderson, Mads Valdemar; Jespersen, Martin Closter; Nielsen, Morten; Marcatili, Paolo

    2015-01-01

    The accurate structural modeling of B- and T-cell receptors is fundamental to gaining detailed insight into the mechanisms underlying immunity and to developing new drugs and therapies. The LYRA (LYmphocyte Receptor Automated modeling) web server (http://www.cbs.dtu.dk/services/LYRA/) implements a complete and automated method for building B- and T-cell receptor structural models starting from their amino acid sequence alone. The webserver is freely available and easy to use for non-specialists. Upon submission, LYRA automatically generates alignments using ad hoc profiles, predicts the structural class of each hypervariable loop, selects the best templates in an automatic fashion, and provides within minutes a complete 3D model that can be downloaded or inspected online. Experienced users can manually select or exclude template structures according to case-specific information. LYRA is based on the canonical structure method, which in the last 30 years has been successfully used to generate antibody models of high accuracy, and in our benchmarks this approach proves to achieve similarly good results on TCR modeling, with a benchmarked average RMSD accuracy of 1.29 and 1.48 Å for B- and T-cell receptors, respectively. To the best of our knowledge, LYRA is the first automated server for the prediction of TCR structure. PMID:26007650

  9. Formal representation of 3D structural geological models

    NASA Astrophysics Data System (ADS)

    Wang, Zhangang; Qu, Honggang; Wu, Zixing; Yang, Hongjun; Du, Qunle

    2016-05-01

    The development and widespread application of geological modeling methods has increased demands for the integration and sharing services of three dimensional (3D) geological data. However, theoretical research in the field of geological information sciences is limited despite the widespread use of Geographic Information Systems (GIS) in geology. In particular, fundamental research on the formal representations and standardized spatial descriptions of 3D structural models is required. This is necessary for accurate understanding and further applications of geological data in 3D space. In this paper, we propose a formal representation method for 3D structural models using the theory of point set topology, which produces a mathematical definition for the major types of geological objects. The spatial relationships between geologic boundaries, structures, and units are explained in detail using the 9-intersection model. Reasonable conditions for describing the topological space of 3D structural models are also provided. The results from this study can be used as potential support for the standardized representation and spatial quality evaluation of 3D structural models, as well as for specific needs related to model-based management, query, and analysis.
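
    The 9-intersection model mentioned above records how the interiors, boundaries, and exteriors of two spatial objects intersect. As a small 2D stand-in for the geological case, the sketch below uses the Python shapely package, whose relate() method returns the DE-9IM matrix for two geometries; the polygons are placeholders.

```python
from shapely.geometry import Polygon

# Two simple 2D "geological units" sharing a common edge (placeholder geometry).
unit_a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
unit_b = Polygon([(2, 0), (4, 0), (4, 2), (2, 2)])

# DE-9IM matrix: intersections of interior/boundary/exterior of A with those of B.
print(unit_a.relate(unit_b))      # e.g. 'FF2F11212' for polygons touching along an edge
print(unit_a.touches(unit_b))     # True: boundaries intersect, interiors do not
print(unit_a.intersects(unit_b))  # True
```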

  10. Assessing psychopathology from a structural perspective: A psychodynamic model.

    PubMed

    Trimboli, Frank; Marshall, Rycke L; Keenan, Charles W

    2013-01-01

    This article presents a guide for conceptualizing psychological difficulties across the broad spectrum of personality and symptom disorders. A psychodynamic model is used to organize these disorders along a structural continuum of severity. The authors propose that seven key indices of personality functioning be evaluated: cognition, affect, self-object relations, interpersonal relations, defenses, superego functioning, and primary dynamics. The results are then employed to determine where the patient should be placed along a continuum of nine diagnostic categories of ego development and their associated disorders. These include "normal," neurotic trait, and neurotic symptom organization; high-, mid-, and low-level borderline organization; and affective, cognitive-affective, and cognitive psychotic organization. An accurate evaluation of the seven variables will permit a more precise formulation of the nature and severity of the patient's difficulties, which will hopefully result in more accurate and appropriate treatment planning. Examples of the application of this model to a common symptom complaint are provided. PMID:23697819

  11. Engine Structures Modeling Software System (ESMOSS)

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Engine Structures Modeling Software System (ESMOSS) is a project to develop a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components, and substructures that can be transferred to finite element analysis programs such as NASTRAN. The NASA Lewis Engine Structures Program is concerned with the development of technology for the rational structural design and analysis of advanced gas turbine engines, with emphasis on advanced structural analysis, structural dynamics, structural aspects of aeroelasticity, and life prediction. Fundamental and common to all of these developments is the need for geometric and analytical model descriptions at various engine assembly levels, which are generated using ESMOSS.

  12. Structural and functional screening in human induced-pluripotent stem cell-derived cardiomyocytes accurately identifies cardiotoxicity of multiple drug types

    SciTech Connect

    Doherty, Kimberly R.; Talbert, Dominique R.; Trusk, Patricia B.; Moran, Diarmuid M.; Shell, Scott A.; Bacus, Sarah

    2015-05-15

    Safety pharmacology studies that evaluate new drug entities for potential cardiac liability remain a critical component of drug development. Current studies have shown that in vitro tests utilizing human induced pluripotent stem cell-derived cardiomyocytes (hiPS-CM) may be beneficial for preclinical risk evaluation. We recently demonstrated that an in vitro multi-parameter test panel assessing overall cardiac health and function could accurately reflect the associated clinical cardiotoxicity of 4 FDA-approved targeted oncology agents using hiPS-CM. The present studies expand upon this initial observation to assess whether this in vitro screen could detect cardiotoxicity across multiple drug classes with known clinical cardiac risks. Thus, 24 drugs were examined for their effect on both structural (viability, reactive oxygen species generation, lipid formation, troponin secretion) and functional (beating activity) endpoints in hiPS-CM. Using this screen, the cardiac-safe drugs showed no effects on any of the tests in our panel. However, 16 of 18 compounds with known clinical cardiac risk showed drug-induced changes in hiPS-CM by at least one method. Moreover, when taking into account the Cmax values, these 16 compounds could be further classified depending on whether the effects were structural, functional, or both. Overall, the most sensitive test assessed cardiac beating using the xCELLigence platform (88.9%) while the structural endpoints provided additional insight into the mechanism of cardiotoxicity for several drugs. These studies show that a multi-parameter approach examining both cardiac cell health and function in hiPS-CM provides a comprehensive and robust assessment that can aid in the determination of potential cardiac liability. - Highlights: • 24 drugs were tested for cardiac liability using an in vitro multi-parameter screen. • Changes in beating activity were the most sensitive in predicting cardiac risk. • Structural effects add in

  13. A new set of atomic radii for accurate estimation of solvation free energy by Poisson-Boltzmann solvent model.

    PubMed

    Yamagishi, Junya; Okimoto, Noriaki; Morimoto, Gentaro; Taiji, Makoto

    2014-11-01

    The Poisson-Boltzmann implicit solvent (PB) model is widely used to estimate the solvation free energies of biomolecules in molecular simulations. An optimized set of atomic radii (PB radii) is an important parameter for PB calculations, as it determines the distribution of dielectric constants around the solute. We here present new PB radii for the AMBER protein force field that accurately reproduce the solvation free energies obtained from explicit solvent simulations. The presented PB radii were optimized using results from explicit solvent simulations of large systems. In addition, we distinguished PB radii for N- and C-terminal residues from those for nonterminal residues. Our PB radii showed high accuracy in estimating solvation free energies at the level of the molecular fragment. The obtained PB radii are effective for the detailed analysis of the solvation effects of biomolecules.

  14. A structured population modeling framework for quantifying and predicting gene expression noise in flow cytometry data.

    PubMed

    Flores, Kevin B

    2013-07-01

    We formulated a structured population model with distributed parameters to identify mechanisms that contribute to gene expression noise in time-dependent flow cytometry data. The model was validated using cell population-level gene expression data from two experiments with synthetically engineered eukaryotic cells. Our model captures the qualitative noise features of both experiments and accurately fits the data from the first experiment. Our results suggest that cellular switching between high and low expression states and transcriptional re-initiation are important factors needed to accurately describe gene expression noise with a structured population model.
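
    A minimal stochastic sketch of the switching mechanism suggested above, assuming a simple two-state (low/high expression) promoter with Poissonian protein production and first-order decay; the rate constants are placeholders and the model is far simpler than the structured population model in the paper.

```python
import random

K_ON, K_OFF = 0.05, 0.05           # switching rates between low and high states
PROD = {"low": 1.0, "high": 10.0}  # protein production rate in each state
DECAY = 0.1                        # first-order protein decay rate

def gillespie(t_end=200.0, seed=1):
    """Exact stochastic simulation of one cell; returns the final protein count."""
    random.seed(seed)
    t, state, protein = 0.0, "low", 0
    while t < t_end:
        rates = {
            "switch": K_ON if state == "low" else K_OFF,
            "produce": PROD[state],
            "decay": DECAY * protein,
        }
        total = sum(rates.values())
        t += random.expovariate(total)
        r = random.uniform(0.0, total)
        if r < rates["switch"]:
            state = "high" if state == "low" else "low"
        elif r < rates["switch"] + rates["produce"]:
            protein += 1
        else:
            protein -= 1
    return protein

print([gillespie(seed=s) for s in range(5)])  # cell-to-cell variability, i.e. expression noise
```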

  15. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
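
    A small arithmetic illustration of why exact FCI solutions are restricted to subspaces of modest size: the number of Slater determinants grows combinatorially with the orbital and electron counts. The orbital counts below are arbitrary and not tied to any calculation in the review.

```python
from math import comb

def fci_determinants(n_orbitals, n_alpha, n_beta):
    """Number of Slater determinants in a full CI expansion."""
    return comb(n_orbitals, n_alpha) * comb(n_orbitals, n_beta)

for n_orb in (10, 20, 30):
    n_elec = n_orb // 2  # half filling, equal numbers of alpha and beta electrons
    print(f"{n_orb} orbitals: {fci_determinants(n_orb, n_elec, n_elec):.3e} determinants")
```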

  16. Insidious Structural Errors in Latent Variable Models.

    ERIC Educational Resources Information Center

    Pohlmann, John T.

    1993-01-01

    Nonlinear relationships and latent variable assumptions can lead to serious specification errors in structural models. A quadratic relationship, described by a linear structural model with a latent variable, is shown to have less predictive validity than a simple manifest variable regression model. Advocates the use of simpler preliminary…

  17. Structural Equation Modeling in Special Education Research.

    ERIC Educational Resources Information Center

    Moore, Alan D.

    1995-01-01

    This article suggests the use of structural equation modeling in special education research, to analyze multivariate data from both nonexperimental and experimental research. It combines a structural model linking latent variables and a measurement model linking observed variables with latent variables. (Author/DB)

  18. Finite Element Model Development and Validation for Aircraft Fuselage Structures

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.

    2000-01-01

    The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results. The increased frequency range results in a corresponding increase in the number of modes, modal density and spatial resolution requirements. In this study, conventional modal tests using accelerometers are complemented with Scanning Laser Doppler Velocimetry and Electro-Optic Holography measurements to further resolve the spatial response characteristics. Whenever possible, component and subassembly modal tests are used to validate the finite element models at lower levels of assembly. Normal mode predictions for different finite element representations of components and assemblies are compared with experimental results to assess the most accurate techniques for modeling aircraft fuselage type structures.
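
    One common metric for comparing predicted and measured mode shapes in this kind of validation work is the modal assurance criterion (MAC); the abstract does not name the specific correlation measure used, so the sketch below is generic and uses placeholder mode-shape vectors sampled at a few accelerometer locations.

```python
import numpy as np

def mac(phi_fe, phi_test):
    """Modal assurance criterion between an FE mode shape and a measured mode shape."""
    return np.abs(phi_fe @ phi_test) ** 2 / ((phi_fe @ phi_fe) * (phi_test @ phi_test))

# Placeholder mode shapes sampled at six sensor locations.
phi_fe = np.array([0.00, 0.31, 0.59, 0.81, 0.95, 1.00])
phi_test = phi_fe + np.random.normal(0.0, 0.05, phi_fe.size)  # "measurement" with noise

print(f"MAC = {mac(phi_fe, phi_test):.3f}")  # values near 1 indicate consistent shapes
```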

  19. Robust simulation of buckled structures using reduced order modeling

    NASA Astrophysics Data System (ADS)

    Wiebe, R.; Perez, R. A.; Spottswood, S. M.

    2016-09-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which runs counter to the ever-present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use a (computationally expensive) truth model but require no time stepping of it are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.
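
    The abstract does not spell out how the reduced order models are constructed; one widely used construction for problems of this type is proper orthogonal decomposition (POD) of response snapshots, sketched below with synthetic data standing in for the truth-model output.

```python
import numpy as np

# Synthetic snapshot matrix: each column is the beam displacement field at one time step.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 10.0, 400)
snapshots = (np.outer(np.sin(np.pi * x), np.sin(2 * t))
             + 0.1 * np.outer(np.sin(3 * np.pi * x), np.cos(5 * t)))

# POD basis from the singular value decomposition of the snapshots.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # retain modes capturing 99.9% of the energy
basis = U[:, :r]

# Project onto the reduced basis and measure the reconstruction error.
reconstruction = basis @ (basis.T @ snapshots)
err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"{r} POD modes, relative reconstruction error = {err:.2e}")
```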

  20. Outer planet probe engineering model structural tests

    NASA Technical Reports Server (NTRS)

    Smittkamp, J. A.; Gustin, W. H.; Griffin, M. W.

    1977-01-01

    A series of proof of concept structural tests was performed on an engineering model of the Outer Planets Atmospheric Entry Probe. The tests consisted of pyrotechnic shock, dynamic and static loadings. The tests partially verified the structural concept.

  1. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  2. Model reduction for flexible space structures

    NASA Technical Reports Server (NTRS)

    Gawronski, Wodek; Williams, Trevor

    1989-01-01

    This paper presents the conditions under which modal truncation yields a near-optimal reduced-order model for a flexible structure. Next, a robust model reduction technique is developed to cope with the damping uncertainties typical of flexible space structures. Finally, a flexible truss and the COFS-1 structure are used to give realistic applications of the model reduction techniques studied in the paper.
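
    A minimal sketch of modal truncation, with a lumped mass-spring chain standing in for a flexible structure: solve the generalized eigenproblem, keep the lowest-frequency modes, and project the system matrices onto them. The chain parameters and the number of retained modes are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import eigh

n = 20                      # degrees of freedom of the full model
m, k = 1.0, 100.0           # uniform mass and spring stiffness

M = m * np.eye(n)
K = np.zeros((n, n))
for i in range(n):          # assemble a fixed-free spring chain
    K[i, i] = 2 * k if i < n - 1 else k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

# Generalized eigenproblem K phi = omega^2 M phi (eigenvalues in ascending order).
omega2, Phi = eigh(K, M)

r = 4                       # retain the four lowest modes
Phi_r = Phi[:, :r]
M_r = Phi_r.T @ M @ Phi_r   # reduced (diagonal) mass matrix
K_r = Phi_r.T @ K @ Phi_r   # reduced (diagonal) stiffness matrix

print("retained natural frequencies (rad/s):", np.sqrt(omega2[:r]))
```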

  3. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    NASA Astrophysics Data System (ADS)

    Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.

    2015-07-01

    Routine measurements of the beam irradiance at normal incidence (DNI) include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7%, and that from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and a collocated Sun and Aureole Measurement (SAM) instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 5%, a relative bias of +1% and a coefficient of determination greater than 0.97. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is represented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE of 22% and a bias of -19%, with a coefficient of determination of 0.89. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard DNI measurements.
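
    For reference, statistics of the kind quoted above can be computed as in the sketch below; the exact definitions used in the study are not reproduced here, and the modeled and measured irradiance values are placeholders.

```python
import numpy as np

def validation_stats(measured, modeled):
    """Relative RMSE and bias (percent of the mean measurement) and R^2."""
    diff = modeled - measured
    mean_meas = np.mean(measured)
    rel_rmse = 100.0 * np.sqrt(np.mean(diff**2)) / mean_meas
    rel_bias = 100.0 * np.mean(diff) / mean_meas
    r2 = 1.0 - np.sum(diff**2) / np.sum((measured - mean_meas) ** 2)
    return rel_rmse, rel_bias, r2

measured = np.array([820.0, 760.0, 905.0, 640.0, 700.0])   # placeholder irradiances (W/m^2)
modeled = np.array([830.0, 748.0, 915.0, 655.0, 690.0])

rmse, bias, r2 = validation_stats(measured, modeled)
print(f"relative RMSE = {rmse:.1f}%, relative bias = {bias:+.1f}%, R^2 = {r2:.3f}")
```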

  4. Towards accurate structural characterization of metal centres in protein crystals: the structures of Ni and Cu T{sub 6} bovine insulin derivatives

    SciTech Connect

    Frankaer, Christian Grundahl; Mossin, Susanne; Ståhl, Kenny; Harris, Pernille

    2014-01-01

    The level of structural detail around the metal sites in Ni{sup 2+} and Cu{sup 2+} T{sub 6} insulin derivatives was significantly improved by using a combination of single-crystal X-ray crystallography and X-ray absorption spectroscopy. Photoreduction and subsequent radiation damage of the Cu{sup 2+} sites in Cu insulin were followed by XANES spectroscopy. Using synchrotron radiation (SR), the crystal structures of T{sub 6} bovine insulin complexed with Ni{sup 2+} and Cu{sup 2+} were solved to 1.50 and 1.45 Å resolution, respectively. The level of detail around the metal centres in these structures was highly limited, and the coordination of water in Cu site II of the copper insulin derivative deteriorated as a consequence of radiation damage. To provide more detail, X-ray absorption spectroscopy (XAS) was used to improve the information level about metal coordination in each derivative. The nickel derivative contains hexacoordinated Ni{sup 2+} with trigonal symmetry, whereas the copper derivative contains tetragonally distorted hexacoordinated Cu{sup 2+} as a result of the Jahn–Teller effect, with a significantly longer coordination distance for one of the three water molecules in the coordination sphere. That the copper centre is of type II was further confirmed by electron paramagnetic resonance (EPR). The coordination distances were refined from EXAFS with standard deviations within 0.01 Å. The insulin derivative containing Cu{sup 2+} is sensitive to photoreduction when exposed to SR. During the reduction of Cu{sup 2+} to Cu{sup +}, the coordination geometry of copper changes towards lower coordination numbers. Primary damage, i.e. photoreduction, was followed directly by XANES as a function of radiation dose, while secondary damage in the form of structural changes around the Cu atoms after exposure to different radiation doses was studied by crystallography using a laboratory diffractometer. Protection against photoreduction and subsequent

  5. Challenges in structural approaches to cell modeling.

    PubMed

    Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A

    2016-07-31

    Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. PMID:27255863

  6. Challenges in structural approaches to cell modeling.

    PubMed

    Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A

    2016-07-31

    Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field.

  7. Accurate Quantification of Ionospheric State Based on Comprehensive Radiative Transfer Modeling and Optimal Inversion of the OI 135.6-nm Emission

    NASA Astrophysics Data System (ADS)

    Qin, J.; Kamalabadi, F.; Makela, J. J.; Meier, R. J.

    2015-12-01

    Remote sensing of the nighttime OI 135.6-nm emission represents the primary means of quantifying the F-region ionospheric state from optical measurements. Despite its pervasive use for studying aeronomical processes, the interpretation of these emissions as a proxy for ionospheric state remains ambiguous, in that the relative contributions of radiative recombination and mutual neutralization to the production and, especially, the effects of scattering and absorption on the transport of the 135.6-nm emissions have not been fully quantified. Moreover, an inversion algorithm that is robust to varying ionospheric structures under different geophysical conditions is yet to be developed for statistically optimal characterization of the ionospheric state. In this work, as part of the NASA ICON mission, we develop a comprehensive radiative transfer model from first principles to investigate the production and transport of the nighttime 135.6-nm emissions. The forward modeling investigation indicates that under certain conditions mutual neutralization can contribute up to ~38% to the 135.6-nm emissions. Moreover, resonant scattering and pure absorption can reduce the brightness observed in the limb direction by ~40% while enhancing the brightness in the nadir direction by ~25%. Further analysis shows that without properly addressing these effects in the inversion process, the peak electron density in the F-region ionosphere (NmF2) can be overestimated by up to ~24%. To address these issues, an inversion algorithm that properly accounts for the above-mentioned effects is proposed for accurate quantification of the ionospheric state using satellite measurements. The ill-posedness due to the intrinsic presence of noise in real data is addressed by incorporating regularization that enforces either global smoothness or piecewise smoothness of the solution. Application to model-generated data with different signal-to-noise ratios shows that the algorithm has achieved
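
    A minimal sketch of the kind of regularized inversion described above, assuming a linear forward operator and a second-difference (global smoothness) penalty; the operator, noise level, and regularization weight are placeholders rather than quantities from the ICON analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder linear forward model: brightness = A @ density_profile + noise.
n_alt, n_obs = 60, 40
A = np.exp(-0.1 * np.abs(1.5 * np.arange(n_obs)[:, None] - np.arange(n_alt)[None, :]))
true_profile = np.exp(-((np.arange(n_alt) - 30.0) ** 2) / 100.0)   # smooth F-region-like peak
b = A @ true_profile + rng.normal(0.0, 0.05, n_obs)

# Second-difference operator enforcing global smoothness of the solution.
D = np.diff(np.eye(n_alt), n=2, axis=0)

lam = 1.0  # regularization weight; in practice it would be tuned (e.g. by an L-curve)
profile_est = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)

print("peak altitude index (true, estimated):", true_profile.argmax(), profile_est.argmax())
```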

  8. Modeling, Analysis, and Optimization Issues for Large Space Structures

    NASA Technical Reports Server (NTRS)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  9. SSME structural dynamic model development

    NASA Technical Reports Server (NTRS)

    Foley, M. J.; Tilley, D. M.; Welch, C. T.

    1983-01-01

    A mathematical model of the Space Shuttle Main Engine (SSME) as a complete assembly, with detailed emphasis on LOX and High Fuel Turbopumps is developed. The advantages of both complete engine dynamics, and high fidelity modeling are incorporated. Development of this model, some results, and projected applications are discussed.

  10. Modeling of Protein Binary Complexes Using Structural Mass Spectrometry Data

    SciTech Connect

    Amisha Kamal,J.; Chance, M.

    2008-01-01

    In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints--positive and/or negative--in the docking step and are also used to decide the type of energy filter--electrostatics or desolvation--in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure.

  11. Prognostic models and risk scores: can we accurately predict postoperative nausea and vomiting in children after craniotomy?

    PubMed

    Neufeld, Susan M; Newburn-Cook, Christine V; Drummond, Jane E

    2008-10-01

    Postoperative nausea and vomiting (PONV) is a problem for many children after craniotomy. Prognostic models and risk scores help identify who is at risk for an adverse event such as PONV to help guide clinical care. The purpose of this article is to assess whether an existing prognostic model or risk score can predict PONV in children after craniotomy. The concepts of transportability, calibration, and discrimination are presented to identify what is required to have a valid tool for clinical use. Although previous work may inform clinical practice and guide future research, existing prognostic models and risk scores do not appear to be options for predicting PONV in children undergoing craniotomy. However, until risk factors are further delineated, followed by the development and validation of prognostic models and risk scores that include children after craniotomy, clinical judgment in the context of current research may serve as a guide for clinical care in this population. PMID:18939320
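
    Discrimination and calibration, two of the properties discussed above, are routinely quantified with the area under the ROC curve and a calibration curve. The sketch below uses scikit-learn with synthetic predicted risks and simulated outcomes, not data from any PONV study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(42)

# Synthetic predicted PONV risks and outcomes drawn from those risks.
predicted_risk = rng.uniform(0.05, 0.8, size=500)
outcome = rng.binomial(1, predicted_risk)          # well calibrated by construction

auc = roc_auc_score(outcome, predicted_risk)       # discrimination
obs_freq, mean_pred = calibration_curve(outcome, predicted_risk, n_bins=5)  # calibration

print(f"AUC = {auc:.2f}")
for obs, pred in zip(obs_freq, mean_pred):
    print(f"mean predicted risk {pred:.2f} -> observed frequency {obs:.2f}")
```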

  12. Coupling 1D Navier Stokes equation with autoregulation lumped parameter networks for accurate cerebral blood flow modeling

    NASA Astrophysics Data System (ADS)

    Ryu, Jaiyoung; Hu, Xiao; Shadden, Shawn C.

    2014-11-01

    The cerebral circulation is unique in its ability to maintain blood flow to the brain under widely varying physiologic conditions. Incorporating this autoregulatory response is critical to cerebral blood flow modeling, as well as to investigations of pathological conditions. We discuss a one-dimensional nonlinear model of blood flow in the cerebral arteries coupled to autoregulatory lumped parameter networks. The model is tested by reproducing a common clinical test of autoregulatory function, the carotid artery compression test. The change in flow velocity at the middle cerebral artery (MCA) during carotid compression and release demonstrated strong agreement with published measurements. The model is then used to investigate vasospasm of the MCA, a common clinical concern following subarachnoid hemorrhage. Vasospasm was modeled by prescribing a vessel area reduction in the middle portion of the MCA. Our model showed similar increases in velocity for moderate vasospasm; however, for severe vasospasm (~90% area reduction), the blood flow velocity decreased due to flow rerouting. This is a potentially important phenomenon, which, if not properly anticipated, could lead to false-negative decisions on clinical vasospasm.
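
    The autoregulatory lumped-parameter networks used in the study are not specified in the abstract; as a minimal stand-in, the sketch below integrates a three-element Windkessel outflow model, the basic building block that such networks typically extend, with placeholder parameter values and a simple pulsatile inflow.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-element Windkessel: proximal resistance R1, compliance C, distal resistance R2.
R1, R2, C = 0.05, 1.0, 1.2     # placeholder values (mmHg*s/mL, mmHg*s/mL, mL/mmHg)
PERIOD = 1.0                   # heart period (s)

def inflow(t):
    """Simple pulsatile inflow waveform (mL/s)."""
    phase = t % PERIOD
    return 400.0 * np.sin(np.pi * phase / 0.3) if phase < 0.3 else 0.0

def windkessel(t, y):
    p_c = y[0]                           # pressure across the compliance
    return [(inflow(t) - p_c / R2) / C]  # charge balance at the compliance node

sol = solve_ivp(windkessel, (0.0, 10.0), [80.0], max_step=0.01)
p_in = sol.y[0] + R1 * np.array([inflow(t) for t in sol.t])  # pressure at the inlet
last_beat = sol.t > 9.0
print(f"inlet pressure over the last beat: {p_in[last_beat].min():.0f}-{p_in[last_beat].max():.0f} mmHg")
```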

  13. A hybrid stochastic-deterministic computational model accurately describes spatial dynamics and virus diffusion in HIV-1 growth competition assay.

    PubMed

    Immonen, Taina; Gibson, Richard; Leitner, Thomas; Miller, Melanie A; Arts, Eric J; Somersalo, Erkki; Calvetti, Daniela

    2012-11-01

    We present a new hybrid stochastic-deterministic, spatially distributed computational model to simulate growth competition assays on a relatively immobile monolayer of peripheral blood mononuclear cells (PBMCs), commonly used for determining ex vivo fitness of human immunodeficiency virus type-1 (HIV-1). The novel features of our approach include incorporation of viral diffusion through a deterministic diffusion model while simulating cellular dynamics via a stochastic Markov chain model. The model accounts for multiple infections of target cells, CD4-downregulation, and the delay between the infection of a cell and the production of new virus particles. The minimum threshold level of infection induced by a virus inoculum is determined via a series of dilution experiments, and is used to determine the probability of infection of a susceptible cell as a function of local virus density. We illustrate how this model can be used for estimating the distribution of cells infected by either a single virus type or two competing viruses. Our model captures experimentally observed variation in the fitness difference between two virus strains, and suggests a way to minimize variation and dual infection in experiments.

  14. Structural and functional screening in human induced-pluripotent stem cell-derived cardiomyocytes accurately identifies cardiotoxicity of multiple drug types.

    PubMed

    Doherty, Kimberly R; Talbert, Dominique R; Trusk, Patricia B; Moran, Diarmuid M; Shell, Scott A; Bacus, Sarah

    2015-05-15

    Safety pharmacology studies that evaluate new drug entities for potential cardiac liability remain a critical component of drug development. Current studies have shown that in vitro tests utilizing human induced pluripotent stem cell-derived cardiomyocytes (hiPS-CM) may be beneficial for preclinical risk evaluation. We recently demonstrated that an in vitro multi-parameter test panel assessing overall cardiac health and function could accurately reflect the associated clinical cardiotoxicity of 4 FDA-approved targeted oncology agents using hiPS-CM. The present studies expand upon this initial observation to assess whether this in vitro screen could detect cardiotoxicity across multiple drug classes with known clinical cardiac risks. Thus, 24 drugs were examined for their effect on both structural (viability, reactive oxygen species generation, lipid formation, troponin secretion) and functional (beating activity) endpoints in hiPS-CM. Using this screen, the cardiac-safe drugs showed no effects on any of the tests in our panel. However, 16 of 18 compounds with known clinical cardiac risk showed drug-induced changes in hiPS-CM by at least one method. Moreover, when taking into account the Cmax values, these 16 compounds could be further classified depending on whether the effects were structural, functional, or both. Overall, the most sensitive test assessed cardiac beating using the xCELLigence platform (88.9%) while the structural endpoints provided additional insight into the mechanism of cardiotoxicity for several drugs. These studies show that a multi-parameter approach examining both cardiac cell health and function in hiPS-CM provides a comprehensive and robust assessment that can aid in the determination of potential cardiac liability.

  15. Detailed and Highly Accurate 3d Models of High Mountain Areas by the Macs-Himalaya Aerial Camera Platform

    NASA Astrophysics Data System (ADS)

    Brauchle, J.; Hein, D.; Berger, R.

    2015-04-01

    Remote sensing in areas with extreme altitude differences is particularly challenging. In high mountain areas specifically, steep slopes result in reduced ground pixel resolution and degraded quality in the DEM. Exceptionally high brightness differences can, in part, exceed the dynamic range of the sensors. Nevertheless, detailed information about mountainous regions is highly relevant: time and again glacier lake outburst floods (GLOFs) and debris avalanches claim dozens of victims. Glaciers are sensitive to climate change and must be carefully monitored. Very detailed and accurate 3D maps provide a basic tool for the analysis of natural hazards and the monitoring of glacier surfaces in high mountain areas. There is a gap here, because the desired accuracies are often not achieved. It is for this reason that the DLR Institute of Optical Sensor Systems has developed a new aerial camera, the MACS-Himalaya. The measuring unit comprises four camera modules with an overall aperture angle of 116° perpendicular to the direction of flight. A High Dynamic Range (HDR) mode was introduced so that within a scene, bright areas such as sun-flooded snow and dark areas such as shaded stone can be imaged. In 2014, a measuring survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at heights up to 9,200 m. Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced. Special challenges and gaps in the investigation of high mountain areas, approaches for resolving these problems, the camera system and the state of evaluation are presented with examples.

  16. Structure formation in the quasispherical Szekeres model

    SciTech Connect

    Bolejko, Krzysztof

    2006-06-15

    Structure formation in the Szekeres model is investigated. Since the Szekeres model is an inhomogeneous model with no symmetries, it is possible to examine the interaction of neighboring structures and its impact on the growth of a density contrast. It has been found that the mass flow from voids to clusters enhances the growth of the density contrast. In the model presented here, the growth of the density contrast is almost 8 times faster than in the linear approach.

  17. Modeling and identification of flexible joints in vehicle structures

    NASA Astrophysics Data System (ADS)

    Lee, Kwangju

    A simple, design-oriented model of joints in vehicle structures is developed. The model accounts for the flexibility of the joints, the offsets of the rotation centers of the joint branches, and the coupling between rotations of a joint branch in different planes. The model parameters consist of torsional spring rates, the coordinates of the flexible hinges, and the orientations of the planes in which the torsional springs are located, and are chosen to be physically meaningful. In some cases, the behavior of joints can be accurately represented by simpler models; the conditions under which the joint model can be simplified are discussed, and a family of joint models with different levels of complexity is defined. A probabilistic system identification approach is used to estimate the joint parameters from measured displacements, by minimizing the discrepancies between the measured and predicted displacements. Statistical tests that identify important parameters are also presented; these tests can be used to simplify the joint models without significantly reducing the accuracy of predicted structural responses. The identification methodology is applied to automotive structures with joints and also to isolated subassemblies consisting of joints and attached branches.
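
    A minimal sketch of the identification step described above, reduced to a single torsional spring whose rate is estimated by minimizing the discrepancy between measured and predicted rotations. The "measurements" are synthetic, and a real vehicle joint model would involve many more parameters.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "measured" joint rotations under a set of applied moments.
moments = np.linspace(10.0, 100.0, 10)   # N*m
k_true = 2.5e3                           # true torsional spring rate (N*m/rad)
measured_rot = moments / k_true + np.random.default_rng(0).normal(0.0, 2e-4, moments.size)

def residuals(params):
    k = params[0]
    predicted_rot = moments / k          # predicted rotation of the joint branch
    return predicted_rot - measured_rot

fit = least_squares(residuals, x0=[1.0e3])
print(f"identified spring rate: {fit.x[0]:.0f} N*m/rad (true value {k_true:.0f})")
```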

  18. Accurate relativistic adapted Gaussian basis sets for francium through Ununoctium without variational prolapse and to be used with both uniform sphere and Gaussian nucleus models.

    PubMed

    Teodoro, Tiago Quevedo; Haiduke, Roberto Luiz Andrade

    2013-10-15

    Accurate relativistic adapted Gaussian basis sets (RAGBSs) for the atoms from francium (Z = 87) up to ununoctium (Z = 118), free of variational prolapse, were developed here with the use of a polynomial version of the Generator Coordinate Dirac-Fock method. Two finite nuclear models were used, the Gaussian and uniform-sphere models. The largest RAGBS error, with respect to numerical Dirac-Fock results, is 15.4 millihartree for ununoctium with a basis set of 33s30p19d14f functions. PMID:23913741

  19. A mutate-and-map strategy accurately infers the base pairs of a 35-nucleotide model RNA

    PubMed Central

    Kladwang, Wipapat; Cordero, Pablo; Das, Rhiju

    2011-01-01

    We present a rapid experimental strategy for inferring base pairs in structured RNAs via an information-rich extension of classic chemical mapping approaches. The mutate-and-map method, previously applied to a DNA/RNA helix, systematically searches for single mutations that enhance the chemical accessibility of base-pairing partners distant in sequence. To test this strategy for structured RNAs, we have carried out mutate-and-map measurements for a 35-nt hairpin, called the MedLoop RNA, embedded within an 80-nt sequence. We demonstrate the synthesis of all 105 single mutants of the MedLoop RNA sequence and present high-throughput DMS, CMCT, and SHAPE modification measurements for this library at single-nucleotide resolution. The resulting two-dimensional data reveal visually clear, punctate features corresponding to RNA base pair interactions as well as more complex features; these signals can be qualitatively rationalized by comparison to secondary structure predictions. Finally, we present an automated, sequence-blind analysis that permits the confident identification of nine of the 10 MedLoop RNA base pairs at single-nucleotide resolution, while discriminating against all 1460 false-positive base pairs. These results establish the accuracy and information content of the mutate-and-map strategy and support its feasibility for rapidly characterizing the base-pairing patterns of larger and more complex RNA systems. PMID:21239468

  20. Efficient and physically accurate modeling and simulation of anisoplanatic imaging through the atmosphere: a space-variant volumetric image blur method

    NASA Astrophysics Data System (ADS)

    Reinhardt, Colin N.; Ritcey, James A.

    2015-09-01

    We present a novel method for efficient and physically accurate modeling and simulation of anisoplanatic imaging through the atmosphere; in particular, we present a new space-variant volumetric image blur algorithm. The method is based on the use of physical atmospheric meteorology models, such as vertical turbulence profiles and aerosol/molecular profiles, which can in general be fully spatially varying in three dimensions and also evolving in time. The space-variant modeling method relies on the metadata provided by 3D computer graphics modeling and rendering systems to decompose the image into a set of slices which can be treated in an independent but physically consistent manner, achieving simulated image blur effects that are more accurate and realistic than the homogeneous, stationary blurring methods commonly used today. We also present a simple illustrative example of the application of our algorithm and show that its results and performance agree with the expected relative trends and behavior of the prescribed turbulence-profile physical model used to define the initial spatially varying environmental scenario conditions. We present the details of an efficient Fourier-transform-domain formulation of the space-variant volumetric blur algorithm, together with detailed pseudocode for the implementation and clarification of some non-obvious technical details.
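
    A compact sketch of the slice-based, Fourier-domain idea described above: the scene is split into depth slices, each slice is blurred with a Gaussian transfer function whose width grows with (placeholder) path length, and the blurred slices are recombined. This is a simplification under assumed Gaussian blur kernels and synthetic data, not the paper's full space-variant algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
H = W = 128
image = rng.random((H, W))                          # synthetic scene radiance
depth = np.tile(np.linspace(1.0, 5.0, W), (H, 1))   # path length (km), increasing left to right

fy = np.fft.fftfreq(H)[:, None]
fx = np.fft.fftfreq(W)[None, :]
freq2 = fx**2 + fy**2

def blur_sigma(path_km):
    """Placeholder mapping from path length to blur width (pixels)."""
    return 0.5 * path_km

edges = np.linspace(depth.min(), depth.max(), 6)     # five depth slices
slice_ids = np.digitize(depth, edges[1:-1])          # assign each pixel to a slice

result = np.zeros_like(image)
for s in range(len(edges) - 1):
    mask = slice_ids == s
    sigma = blur_sigma(0.5 * (edges[s] + edges[s + 1]))
    # Gaussian transfer function applied in the Fourier domain to this slice only.
    otf = np.exp(-2.0 * (np.pi * sigma) ** 2 * freq2)
    result += np.real(np.fft.ifft2(np.fft.fft2(image * mask) * otf))

print("input / output mean radiance:", image.mean(), result.mean())
```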

  1. CLASH-VLT: INSIGHTS ON THE MASS SUBSTRUCTURES IN THE FRONTIER FIELDS CLUSTER MACS J0416.1–2403 THROUGH ACCURATE STRONG LENS MODELING

    SciTech Connect

    Grillo, C.; Suyu, S. H.; Umetsu, K.; Rosati, P.; Caminha, G. B.; Mercurio, A.; Balestra, I.; Munari, E.; Nonino, M.; De Lucia, G.; Borgani, S.; Biviano, A.; Girardi, M.; Lombardi, M.; Gobat, R.; Zitrin, A.; Halkola, A. and others

    2015-02-10

    We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the Cluster Lensing And Supernova survey with Hubble (CLASH) and Frontier Fields galaxy cluster MACS J0416.1–2403. We show and employ our extensive spectroscopic data set taken with the VIsible Multi-Object Spectrograph instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log (M {sub *}/M {sub ☉}) ≅ 8.6. We reproduce the measured positions of a set of 30 multiple images with a remarkable median offset of only 0.''3 by means of a comprehensive strong lensing model comprised of two cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components, parameterized with dual pseudo-isothermal total mass profiles. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ∼5%, including the systematic uncertainties estimated from six distinct mass models. We emphasize that the use of multiple-image systems with spectroscopic redshifts and knowledge of cluster membership based on extensive spectroscopic information is key to constructing robust high-resolution mass maps. We also produce magnification maps over the central area that is covered with HST observations. We investigate the galaxy contribution, both in terms of total and stellar mass, to the total mass budget of the cluster. When compared with the outcomes of cosmological N-body simulations, our results point to a lack of massive subhalos in the inner regions of simulated clusters with total masses similar to that of MACS J0416.1–2403. Our findings of the location and shape of the cluster dark-matter halo density profiles and on the cluster substructures provide intriguing

  2. Can Impacts of Climate Change and Agricultural Adaptation Strategies Be Accurately Quantified if Crop Models Are Annually Re-Initialized?

    PubMed Central

    Basso, Bruno; Hyndman, David W.; Kendall, Anthony D.; Grace, Peter R.; Robertson, G. Philip

    2015-01-01

    Estimates of climate change impacts on global food production are generally based on statistical or process-based models. Process-based models can provide robust predictions of agricultural yield responses to changing climate and management. However, applications of these models often suffer from bias due to the common practice of re-initializing soil conditions to the same state for each year of the forecast period. If simulations neglect to include year-to-year changes in initial soil conditions and water content related to agronomic management, adaptation and mitigation strategies designed to maintain stable yields under climate change cannot be properly evaluated. We apply a process-based crop system model that avoids re-initialization bias to demonstrate the importance of simulating both year-to-year and cumulative changes in pre-season soil carbon, nutrient, and water availability. Results are contrasted with simulations using annual re-initialization, and differences are striking. We then demonstrate the potential for the most likely adaptation strategy to offset climate change impacts on yields using continuous simulations through the end of the 21st century. Simulations that annually re-initialize pre-season soil carbon and water contents introduce an inappropriate yield bias that obscures the potential for agricultural management to ameliorate the deleterious effects of rising temperatures and greater rainfall variability. PMID:26043188
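
    The re-initialization bias described above can be illustrated with a toy one-bucket soil water balance (not the authors' crop system model): one run carries end-of-season water into the next year, the other resets it annually, and the drift between the two runs is the kind of bias the abstract describes.

        # Toy contrast between continuous simulation and annual re-initialization.
        import numpy as np

        def simulate(rain_by_year, et_per_day=4.0, capacity=200.0,
                     reinitialize=False, w0=120.0):
            w, yearly_end_water = w0, []
            for rain_days in rain_by_year:
                if reinitialize:
                    w = w0                           # reset soil water every season
                for rain in rain_days:
                    w = min(capacity, w + rain)      # infiltration, capped at capacity
                    w = max(0.0, w - et_per_day)     # evapotranspiration
                yearly_end_water.append(w)
            return np.array(yearly_end_water)

        rng = np.random.default_rng(1)
        rain_by_year = [rng.gamma(0.4, 10.0, size=120) for _ in range(30)]  # 30 seasons
        continuous = simulate(rain_by_year, reinitialize=False)
        reset_each_year = simulate(rain_by_year, reinitialize=True)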

  4. Are skinfold-based models accurate and suitable for assessing changes in body composition in highly trained athletes?

    PubMed

    Silva, Analiza M; Fields, David A; Quitério, Ana L; Sardinha, Luís B

    2009-09-01

    This study was designed to assess the usefulness of skinfold (SKF) equations developed by Jackson and Pollock (JP) and by Evans (Ev) in tracking body composition changes (relative fat mass [%FM], absolute fat mass [FM], and fat-free mass [FFM]) of elite male judo athletes before a competition, using a 4-compartment (4C) model as the reference method. A total of 18 male, top-level (age: 22.6 +/- 2.9 yr) athletes were evaluated at baseline (weight: 73.4 +/- 7.9 kg; %FM4C: 7.0 +/- 3.3%; FM4C: 5.1 +/- 2.6 kg; and FFM4C: 68.3 +/- 7.3 kg) and before a competition (weight: 72.7 +/- 7.5 kg; %FM4C: 6.5 +/- 3.4%; FM4C: 4.8 +/- 2.6 kg; and FFM4C: 67.9 +/- 7.1 kg). Measures of body density assessed by air displacement plethysmography, bone mineral content by dual-energy X-ray absorptiometry, and total-body water by bioelectrical impedance spectroscopy were used to estimate 4C model %FM, FM, and FFM. Seven-site SKF models from both JP and Ev were used to estimate %FM, FM, and FFM, along with the simplified three-site Ev (Ev3SKF) equation. Changes in %FM, FM, and FFM were not significantly different from the 4C model. The regression model for the SKF in question and the reference method did not differ from the line of identity in estimating changes in %FM, FM, and FFM. The limits of agreement were similar, ranging from -3.4 to 3.6 for %FM, -2.7 to 2.5 kg for FM, and -2.5 to 2.7 kg for FFM. Considering the similar performance of the 7SKF- and 3SKF-based equations compared with the criterion method, these data indicate that neither the 7-site nor the 3-site SKF models are valid for detecting %FM, FM, and FFM changes in highly trained athletes. These results highlight the inaccuracy of anthropometric models in tracking desired changes in body composition of elite male judo athletes before a competition.
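
    The agreement statistics reported above are Bland-Altman-style limits of agreement on change scores; the sketch below shows that calculation on made-up data (the variable names and values are hypothetical).

        # Bland-Altman limits of agreement between change scores from two methods.
        import numpy as np

        def limits_of_agreement(method, reference):
            diff = np.asarray(method) - np.asarray(reference)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        rng = np.random.default_rng(2)
        delta_fm_4c  = rng.normal(-0.3, 1.0, size=18)           # kg change, 4C reference
        delta_fm_skf = delta_fm_4c + rng.normal(0.1, 1.3, 18)   # kg change, skinfold eq.
        bias, (lo, hi) = limits_of_agreement(delta_fm_skf, delta_fm_4c)
        print(f"bias = {bias:+.2f} kg, 95% limits of agreement = [{lo:.2f}, {hi:.2f}] kg")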

  5. Mathematical Modeling: A Structured Process

    ERIC Educational Resources Information Center

    Anhalt, Cynthia Oropesa; Cortez, Ricardo

    2015-01-01

    Mathematical modeling, in which students use mathematics to explain or interpret physical, social, or scientific phenomena, is an essential component of the high school curriculum. The Common Core State Standards for Mathematics (CCSSM) classify modeling as a K-12 standard for mathematical practice and as a conceptual category for high school…

  6. Towards diagnosing structural model error

    NASA Astrophysics Data System (ADS)

    Khade, V.; Hansen, J.

    2003-04-01

    One of the contributions to "model error" emanates from imperfect values of (constant) parameters in the model. One way to recover the correct parameter value is to treat it as a variable and then use assimilation to find the best estimate of the parameter. Another contribution comes from an imperfect functional dependence of one (or more) of the terms in the model. The term exists both in the model and in the system, but the model's functional form is incorrect. In this case, searching for the 'best' constant parameter value is the wrong thing to do: the 'best' parameter value will be state dependent. Parametric data assimilation can still be utilized, but with parameter values that are constrained to vary in a sensible manner. Further, relationships between the state and the fitted parameter values can be exploited to help recover the correct functional dependence.
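
    A toy example of the point above: when the model's functional form is wrong, the 'best-fit' constant parameter becomes state dependent. Here the hypothetical 'system' uses quadratic drag while the 'model' assumes linear drag, and fitting the linear coefficient within bins of the state recovers a coefficient that grows with the state.

        # If the model form is wrong, the best-fit parameter varies with the state.
        import numpy as np

        c_true = 0.7
        x = np.linspace(-5, 5, 2001)
        drag_system = c_true * x * np.abs(x)      # the system's (unknown) quadratic term

        edges = np.linspace(-5, 5, 11)
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (x >= lo) & (x < hi) & (np.abs(x) > 1e-6)
            # least-squares fit of drag ~ a * x within this state bin
            a_hat = np.sum(drag_system[mask] * x[mask]) / np.sum(x[mask] ** 2)
            print(f"x in [{lo:+.1f}, {hi:+.1f}):  best-fit a = {a_hat:.2f}")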

  7. A Teaching Model for Truss Structures

    ERIC Educational Resources Information Center

    Bigoni, Davide; Dal Corso, Francesco; Misseroni, Diego; Tommasini, Mirko

    2012-01-01

    A classroom demonstration model has been designed, machined and successfully tested in different learning environments to facilitate understanding of the mechanics of truss structures, in which struts are subject to purely axial load and deformation. Gaining confidence with these structures is crucial for the development of lattice models, which…

  8. A teaching model for truss structures

    NASA Astrophysics Data System (ADS)

    Bigoni, Davide; Dal Corso, Francesco; Misseroni, Diego; Tommasini, Mirko

    2012-09-01

    A classroom demonstration model has been designed, machined and successfully tested in different learning environments to facilitate understanding of the mechanics of truss structures, in which struts are subject to purely axial load and deformation. Gaining confidence with these structures is crucial for the development of lattice models, which occur in many fields of physics and engineering.

  9. Structural Equation Modeling of Multivariate Time Series

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Browne, Michael W.

    2007-01-01

    The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…
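
    As a simpler stand-in for the VARMA expressions discussed above (the sketch below uses a VAR(1) process rather than the paper's full VARMA case), the lag-h autocovariances follow from a discrete Lyapunov equation; matrices here are illustrative.

        # Autocovariance of a stationary VAR(1): x_t = A x_{t-1} + e_t, Cov(e) = Q.
        # Gamma(0) solves Gamma0 = A Gamma0 A' + Q, and Gamma(h) = A Gamma(h-1).
        import numpy as np
        from scipy.linalg import solve_discrete_lyapunov

        A = np.array([[0.5, 0.1],
                      [0.0, 0.3]])
        Q = np.array([[1.0, 0.2],
                      [0.2, 0.5]])

        gamma0 = solve_discrete_lyapunov(A, Q)   # solves X = A X A' + Q
        gamma = [gamma0]
        for h in range(1, 4):
            gamma.append(A @ gamma[-1])          # Gamma(h) = A Gamma(h-1)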

  10. A two-parameter kinetic model based on a time-dependent activity coefficient accurately describes enzymatic cellulose digestion

    PubMed Central

    Kostylev, Maxim; Wilson, David

    2014-01-01

    Lignocellulosic biomass is a potential source of renewable, low-carbon-footprint liquid fuels. Biomass recalcitrance and enzyme cost are key challenges associated with the large-scale production of cellulosic fuel. Kinetic modeling of enzymatic cellulose digestion has been complicated by the heterogeneous nature of the substrate and by the fact that a true steady state cannot be attained. We present a two-parameter kinetic model based on the Michaelis-Menten scheme (Michaelis L and Menten ML. (1913) Biochem Z 49:333–369), but with a time-dependent activity coefficient analogous to fractal-like kinetics formulated by Kopelman (Kopelman R. (1988) Science 241:1620–1626). We provide a mathematical derivation and experimental support to show that one of the parameters is a total activity coefficient and the other is an intrinsic constant that reflects the ability of the cellulases to overcome substrate recalcitrance. The model is applicable to individual cellulases and their mixtures at low-to-medium enzyme loads. Using biomass degrading enzymes from a cellulolytic bacterium Thermobifida fusca we show that the model can be used for mechanistic studies of enzymatic cellulose digestion. We also demonstrate that it applies to the crude supernatant of the widely studied cellulolytic fungus Trichoderma reesei and can thus be used to compare cellulases from different organisms. The two parameters may serve a similar role to Vmax, KM, and kcat in classical kinetics. A similar approach may be applicable to other enzymes with heterogeneous substrates and where a steady state is not achievable. PMID:23837567
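
    A minimal sketch of fractal-like kinetics in the spirit of Kopelman, with a time-dependent rate coefficient k(t) = k1*t^(-h) applied to simple substrate depletion; the paper's actual two-parameter, Michaelis-Menten-based formulation is not reproduced, and the parameter values are illustrative.

        # Substrate depletion with a Kopelman-style time-dependent rate coefficient.
        import numpy as np
        from scipy.integrate import solve_ivp

        k1, h = 0.15, 0.4                         # illustrative values

        def rhs(t, y):
            s = y[0]
            k_t = k1 * max(t, 1.0) ** (-h)        # time-dependent "activity coefficient"
            return [-k_t * s]

        sol = solve_ivp(rhs, (0.0, 48.0), [100.0], t_eval=np.linspace(0, 48, 25))
        digested = 100.0 - sol.y[0]               # product formed versus time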

  11. Rotating Arc Jet Test Model: Time-Accurate Trajectory Heat Flux Replication in a Ground Test Environment

    NASA Technical Reports Server (NTRS)

    Laub, Bernard; Grinstead, Jay; Dyakonov, Artem; Venkatapathy, Ethiraj

    2011-01-01

    Though arc jet testing has been the proven method employed for development testing and certification of TPS and TPS instrumentation, the operational aspects of arc jets limit testing to selected, but constant, conditions. Flight, on the other hand, produces time-varying entry conditions in which the heat flux increases, peaks, and recedes as a vehicle descends through an atmosphere. As a result, we are unable to "test as we fly." Attempts to replicate the time-dependent aerothermal environment of atmospheric entry by varying the arc jet facility operating conditions during a test have proven to be difficult, expensive, and only partially successful. A promising alternative is to rotate a test model exposed to a constant-condition arc jet flow to yield a time-varying test condition at a point on the test article (Fig. 1). The model shape and rotation rate can be engineered so that the heat flux at a point on the model replicates the predicted profile for a particular point on a flight vehicle. This simple concept will enable, for example, calibration of the TPS sensors on the Mars Science Laboratory (MSL) aeroshell for anticipated flight environments.
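
    A toy sketch of the rotation concept (not NASA's design tool): a fixed sensor location on a rotating model sees a heat flux q(theta(t)) with theta(t) = omega*t, where the angular distribution q(theta) used below is a made-up cosine-law shape; engineering the shape and rotation rate to match a target flight profile is the design problem, which is not solved here.

        # Time-varying heat flux at a point on a rotating model in a constant flow.
        import numpy as np

        def q_of_theta(theta, q_stag=150.0):
            """Heat flux [W/cm^2] versus angle from the stagnation line (illustrative)."""
            return q_stag * np.clip(np.cos(theta), 0.0, None) ** 1.5

        omega = 2.0 * np.pi / 120.0          # one revolution every 120 s
        t = np.linspace(0.0, 120.0, 241)
        q_point = q_of_theta(omega * t)      # time-varying flux at the sensor location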

  12. Wide-range and accurate modeling of linear alkylbenzene sulfonate (LAS) adsorption/desorption on agricultural soil.

    PubMed

    Oliver-Rodríguez, B; Zafra-Gómez, A; Reis, M S; Duarte, B P M; Verge, C; de Ferrer, J A; Pérez-Pascual, M; Vílchez, J L

    2015-11-01

    In this paper, rigorous data and adequate models for linear alkylbenzene sulfonate (LAS) adsorption/desorption on agricultural soil are presented, representing a substantial improvement over previously available adsorption studies. The kinetics of the adsorption/desorption phenomenon and the adsorption/desorption equilibrium isotherms were determined through batch studies for the total LAS amount and also for each homologue series: C10, C11, C12 and C13. The proposed multiple pseudo-first-order kinetic model provides the best fit to the kinetic data, indicating the presence of two adsorption/desorption processes in the overall phenomenon. Equilibrium adsorption and desorption data have been properly fitted by a model consisting of a Langmuir term plus a quadratic term, which provides a good integrated description of the experimental data over a wide range of concentrations. At low concentrations, the Langmuir term explains the adsorption of LAS on soil sites which are highly selective for the n-alkyl groups and cover a very small fraction of the soil surface area, whereas the quadratic term describes adsorption on the much larger part of the soil surface and on LAS retained at moderate to high concentrations. Since the adsorption/desorption phenomenon plays a major role in the behavior of LAS in soils, relevant conclusions can be drawn from the obtained results. PMID:26070080
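
    One plausible reading of the "Langmuir plus quadratic term" isotherm is q(C) = Qmax*KL*C/(1 + KL*C) + k2*C^2; the sketch below fits that hypothetical form to synthetic data (the exact functional form and parameters used in the paper may differ).

        # Fit a Langmuir-plus-quadratic isotherm to synthetic batch data.
        import numpy as np
        from scipy.optimize import curve_fit

        def isotherm(c, qmax, kl, k2):
            return qmax * kl * c / (1.0 + kl * c) + k2 * c ** 2

        c = np.array([0.5, 1, 2, 5, 10, 20, 50, 100.0])       # mg/L in solution
        rng = np.random.default_rng(3)
        q_obs = isotherm(c, 8.0, 0.6, 0.002) * (1 + 0.03 * rng.standard_normal(c.size))

        popt, pcov = curve_fit(isotherm, c, q_obs, p0=[5.0, 0.1, 0.001])
        qmax_hat, kl_hat, k2_hat = popt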

  14. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    NASA Astrophysics Data System (ADS)

    Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.

    2015-12-01

    Routine measurements of the beam irradiance at normal incidence include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties from AERONET stations are sufficient for accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and the collocated Sun and Aureole Measurement instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 6 % and a coefficient of determination greater than 0.96. The relative bias obtained with libRadtran is +2 %, while that obtained with SMARTS is -1 %. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is represented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE of 27 %, a bias of -24 %, and a coefficient of determination of 0.882. Therefore, AERONET data may very well be used to model the monochromatic DNIS and the monochromatic CSNI. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard measurements of the beam irradiance.
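
    The two-term Henyey-Greenstein phase function mentioned above has a standard closed form; the sketch below evaluates it in one common normalization with illustrative parameters (not values retrieved from AERONET).

        # Two-term Henyey-Greenstein phase function: forward lobe plus backward lobe.
        import numpy as np
        from scipy.integrate import trapezoid

        def hg(cos_theta, g):
            return (1.0 - g ** 2) / (1.0 + g ** 2 - 2.0 * g * cos_theta) ** 1.5

        def two_term_hg(cos_theta, f, g1, g2):
            """Forward lobe with weight f (asymmetry g1) plus backward lobe (g2 < 0)."""
            return f * hg(cos_theta, g1) + (1.0 - f) * hg(cos_theta, g2)

        theta = np.radians(np.linspace(0.0, 180.0, 721))
        p = two_term_hg(np.cos(theta), f=0.96, g1=0.75, g2=-0.35)

        # normalization check: 0.5 * integral of p(theta) sin(theta) dtheta should be ~1
        norm = 0.5 * trapezoid(p * np.sin(theta), theta)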

  15. Genomic Models of Short-Term Exposure Accurately Predict Long-Term Chemical Carcinogenicity and Identify Putative Mechanisms of Action

    PubMed Central

    Gusenleitner, Daniel; Auerbach, Scott S.; Melia, Tisha; Gómez, Harold F.; Sherr, David H.; Monti, Stefano

    2014-01-01

    Background: Despite an overall decrease in incidence of and mortality from cancer, about 40% of Americans will be diagnosed with the disease in their lifetime, and around 20% will die of it. Current approaches to test carcinogenic chemicals adopt the 2-year rodent bioassay, which is costly and time-consuming. As a result, fewer than 2% of the chemicals on the market have actually been tested. However, evidence accumulated to date suggests that gene expression profiles from model organisms exposed to chemical compounds reflect underlying mechanisms of action, and that these toxicogenomic models could be used in the prediction of chemical carcinogenicity. Results: In this study, we used a rat-based microarray dataset from the NTP DrugMatrix Database to test the ability of toxicogenomics to model carcinogenicity. We analyzed 1,221 gene-expression profiles obtained from rats treated with 127 well-characterized compounds, including genotoxic and non-genotoxic carcinogens. We built a classifier that predicts a chemical's carcinogenic potential with an AUC of 0.78, and validated it on an independent dataset from the Japanese Toxicogenomics Project consisting of 2,065 profiles from 72 compounds. Finally, we identified differentially expressed genes associated with chemical carcinogenesis, and developed novel data-driven approaches for the molecular characterization of the response to chemical stressors. Conclusion: Here, we validate a toxicogenomic approach to predict carcinogenicity and provide strong evidence that, with a larger set of compounds, we should be able to improve the sensitivity and specificity of the predictions. We found that the prediction of carcinogenicity is tissue-dependent and that the results also confirm and expand upon previous studies implicating DNA damage, the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and regenerative pathology in the response to carcinogen exposure. PMID:25058030
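
    A generic sketch of the workflow described above (not the authors' exact feature selection or model): train a classifier on labelled expression profiles and score an independent set with ROC AUC, here on synthetic data with a planted "signature".

        # Train/evaluate a toxicogenomic-style classifier on synthetic profiles.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        n_genes = 500
        X_train = rng.standard_normal((300, n_genes))
        y_train = rng.integers(0, 2, 300)
        X_train[y_train == 1, :20] += 0.8          # weak "carcinogen signature"

        X_test = rng.standard_normal((120, n_genes))
        y_test = rng.integers(0, 2, 120)
        X_test[y_test == 1, :20] += 0.8

        clf = LogisticRegression(penalty="l2", C=0.1, max_iter=1000).fit(X_train, y_train)
        auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])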

  16. Toward Accurate Modelling of Enzymatic Reactions: All Electron Quantum Chemical Analysis combined with QM/MM Calculation of Chorismate Mutase

    SciTech Connect

    Ishida, Toyokazu

    2008-09-17

    To further understand the catalytic role of the protein environment in the enzymatic process, the author has analyzed the reaction mechanism of the Claisen rearrangement catalyzed by Bacillus subtilis chorismate mutase (BsCM). By introducing a new computational strategy that combines all-electron QM calculations with ab initio QM/MM modeling, it was possible to simulate the molecular interactions between the substrate and the protein environment. The electrostatic nature of the transition-state stabilization was characterized by performing all-electron QM calculations based on the fragment molecular orbital technique for the entire enzyme.

  17. Micromechanical models for textile structural composites

    NASA Technical Reports Server (NTRS)

    Marrey, Ramesh V.; Sankar, Bhavani V.

    1995-01-01

    The objective is to develop micromechanical models for predicting the stiffness and strength properties of textile composite materials. Two models are presented to predict the homogeneous elastic constants and coefficients of thermal expansion of a textile composite. The first model is based on rigorous finite element analysis of the textile composite unit-cell. Periodic boundary conditions are enforced between opposite faces of the unit-cell to simulate deformations accurately. The second model implements the selective averaging method (SAM), which is based on a judicious combination of stiffness and compliance averaging. For thin textile composites, both models can predict the plate stiffness coefficients and plate thermal coefficients. The finite element procedure is extended to compute the thermal residual microstresses, and to estimate the initial failure envelope for textile composites.
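
    As a minimal stand-in for the idea of combining stiffness and compliance averaging (the selective averaging method itself is more elaborate), the sketch below computes the classical Voigt and Reuss volume averages, which bound the homogenized modulus of a two-phase composite; the material values are illustrative.

        # Voigt (stiffness) and Reuss (compliance) averages for a two-phase composite.
        import numpy as np

        def voigt_reuss(moduli, volume_fractions):
            m = np.asarray(moduli, dtype=float)
            v = np.asarray(volume_fractions, dtype=float)
            e_voigt = np.sum(v * m)                # average the stiffnesses
            e_reuss = 1.0 / np.sum(v / m)          # average the compliances, then invert
            return e_voigt, e_reuss

        # e.g. carbon fibre tow (~230 GPa) in epoxy matrix (~3.5 GPa), 55% fibre volume
        upper, lower = voigt_reuss([230.0, 3.5], [0.55, 0.45])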

  18. Model verification of large structural systems. [space shuttle model response

    NASA Technical Reports Server (NTRS)

    Lee, L. T.; Hasselman, T. K.

    1978-01-01

    A computer program for the application of parameter identification to the structural dynamic models of the space shuttle and other large models with hundreds of degrees of freedom is described. Finite element, dynamic, analytic, and modal models are used to represent the structural system. The interface with math models is such that output from any structural analysis program applied to any structural configuration can be used directly. Processed data from either sine-sweep tests or resonant dwell tests are directly usable. The program uses measured modal data to condition the prior analytic model so as to improve the frequency match between model and test. A Bayesian estimator generates an improved analytical model, and a linear estimator is used in an iterative fashion on highly nonlinear equations. Mass and stiffness scaling parameters are generated for an improved finite element model, and the optimum set of parameters is obtained in one step.
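
    A heavily simplified sketch of the Bayesian updating idea (the program described above handles full finite-element models with hundreds of degrees of freedom): a single stiffness scaling parameter of a 2-DOF spring-mass model is updated toward hypothetical "measured" frequencies with a linearized Gaussian estimator; all numbers are made up.

        # One-step Bayesian update of a stiffness scaling parameter from modal data.
        import numpy as np
        from scipy.linalg import eigh

        M = np.diag([1.0, 1.5])                                  # kg
        K0 = np.array([[3.0e4, -1.0e4], [-1.0e4, 1.0e4]])        # N/m, prior stiffness

        def frequencies(theta):
            """Natural frequencies [Hz] when the stiffness matrix is theta * K0."""
            w2 = eigh(theta * K0, M, eigvals_only=True)
            return np.sqrt(w2) / (2.0 * np.pi)

        theta_prior, p_prior = 1.0, 0.05 ** 2                    # prior mean / variance
        f_meas = np.array([10.1, 30.6])                          # "test" frequencies, Hz
        r = np.diag([0.1 ** 2, 0.3 ** 2])                        # measurement variance

        eps = 1e-4                                               # finite-difference sensitivity
        h_sens = (frequencies(theta_prior + eps) - frequencies(theta_prior)) / eps
        gain = p_prior * h_sens @ np.linalg.inv(np.outer(h_sens, h_sens) * p_prior + r)
        theta_post = theta_prior + gain @ (f_meas - frequencies(theta_prior))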

  19. Tree-Structured Digital Organisms Model

    NASA Astrophysics Data System (ADS)

    Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo

    Tierra and Avida are well-known models of digital organisms. They describe a life process as a sequence of computation codes. A linear sequence model may not be the only way to describe a digital organism, though it is very simple for a computer-based model. Thus we propose a new digital organism model based on a tree structure, which is rather similar to genetic programming. With our model, a life process is a combination of various functions, just as life in the real world is. This implies that our model can easily describe the hierarchical structure of life, and it can simulate evolutionary computation through the mutual interaction of functions. We verified through simulations that our model can be regarded as a digital organism model according to its definitions. Our model even succeeded in creating species such as viruses and parasites.
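
    A toy sketch in the spirit of the tree-structured organisms described above (not the authors' system): a genome is a tree of composable functions that can be evaluated and point-mutated, much as in genetic programming.

        # Minimal tree-structured "genome" with evaluation and point mutation.
        import random
        import operator

        FUNCS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

        def random_tree(depth):
            if depth == 0:
                return random.choice(["x", random.randint(-3, 3)])
            return [random.choice(list(FUNCS)), random_tree(depth - 1), random_tree(depth - 1)]

        def evaluate(node, x):
            if node == "x":
                return x
            if isinstance(node, int):
                return node
            name, left, right = node
            return FUNCS[name](evaluate(left, x), evaluate(right, x))

        def mutate(node, rate=0.2):
            if not isinstance(node, list):
                return random_tree(1) if random.random() < rate else node
            name, left, right = node
            return [name, mutate(left, rate), mutate(right, rate)]

        random.seed(0)
        genome = random_tree(3)
        offspring = mutate(genome)
        value = evaluate(offspring, x=2)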

  20. Development of an accurate molecular mechanics model for buckling behavior of multi-walled carbon nanotubes under axial compression.

    PubMed

    Safaei, B; Naseradinmousavi, P; Rahmani, A

    2016-04-01

    In the present paper, an analytical solution based on a molecular mechanics model is developed to evaluate the elastic critical axial buckling strain of chiral multi-walled carbon nanotubes (MWCNTs). To this end, the total potential energy of the system is calculated with consideration of both bond stretching and bond angle variations. Density functional theory (DFT) in the form of the generalized gradient approximation (GGA) is implemented to evaluate the force constants used in the molecular mechanics model. After that, based on the principles of molecular mechanics, explicit expressions are proposed to obtain the elastic surface Young's modulus and Poisson's ratio of single-walled carbon nanotubes corresponding to different types of chirality. Selected numerical results are presented to indicate the influence of the type of chirality, tube diameter, and number of tube walls in detail. Excellent agreement is found between the present numerical results and those found in the literature, which confirms the validity as well as the accuracy of the present closed-form solution. It is found that the critical axial buckling strain exhibits a significant dependence on the type of chirality and the number of tube walls.

  1. Structural sensitivity of biological models revisited.

    PubMed

    Cordoleani, Flora; Nerini, David; Gauduchon, Mathias; Morozov, Andrew; Poggiale, Jean-Christophe

    2011-08-21

    Enhancing the predictive power of models in biology is a challenging issue. Among the major difficulties impeding model development and implementation are the sensitivity of outcomes to variations in model parameters, the problem of choosing particular expressions for the parametrization of functional relations, and difficulties in validating models using laboratory data and/or field observations. In this paper, we revisit the phenomenon referred to as structural sensitivity of a model. Structural sensitivity arises from the interplay between sensitivity of model outcomes to variations in parameters and sensitivity to the choice of model functions, and it can be a bottleneck in improving a model's predictive power. We provide a rigorous definition of structural sensitivity and show how the degree of sensitivity of a model can be quantified based on the Hausdorff distance concept. We propose a simple semi-analytical test of structural sensitivity in an ODE modeling framework. Furthermore, we emphasize the importance of directly linking the variability of field/experimental data and model predictions, and we demonstrate a way of assessing the robustness of modeling predictions with respect to data sampling variability. As an insightful illustrative example, we test our sensitivity analysis methods on a chemostat predator-prey model, where we use laboratory data on the feeding of protozoa to parameterize the predator functional response.
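
    The Hausdorff-distance idea can be illustrated by comparing long-run trajectories of two model variants that differ only in the functional form of the predator response; the sketch below uses Holling type II versus Ivlev forms with illustrative parameters, not the paper's chemostat model.

        # Hausdorff distance between trajectories of two predator-prey model variants.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.spatial.distance import directed_hausdorff

        def make_rhs(response):
            def rhs(t, y):
                prey, pred = y
                f = response(prey)
                return [prey * (1.0 - prey) - f * pred, 0.5 * f * pred - 0.2 * pred]
            return rhs

        holling = lambda x: x / (0.3 + x)                # Holling type II response
        ivlev   = lambda x: 1.0 - np.exp(-x / 0.3)       # Ivlev response

        trajs = []
        for resp in (holling, ivlev):
            sol = solve_ivp(make_rhs(resp), (0, 400), [0.5, 0.5],
                            t_eval=np.linspace(200, 400, 2000))   # discard the transient
            trajs.append(sol.y.T)

        d = max(directed_hausdorff(trajs[0], trajs[1])[0],
                directed_hausdorff(trajs[1], trajs[0])[0])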

  2. SEMICONDUCTOR INTEGRATED CIRCUITS: Accurate metamodels of device parameters and their applications in performance modeling and optimization of analog integrated circuits

    NASA Astrophysics Data System (ADS)

    Tao, Liang; Xinzhang, Jia; Junfeng, Chen

    2009-11-01

    Techniques for constructing metamodels of device parameters at BSIM3v3-level accuracy are presented to improve knowledge-based circuit sizing optimization. Based on an analysis of the prediction error of analytical performance expressions, operating-point-driven (OPD) metamodels of MOSFETs are introduced to capture the circuit's characteristics precisely. In the metamodel construction algorithm, radial basis functions are adopted to interpolate the scattered multivariate data obtained from a data sampling scheme tailored to MOSFETs. The OPD metamodels can be used to automatically bias the circuit at a specific DC operating point. Analytical performance expressions composed of the OPD metamodels show clear improvement for most small-signal performance metrics compared with simulation-based models. Both operating-point variables and transistor dimensions can be optimized in our nested-loop optimization formulation to maximize design flexibility. The method is successfully applied to a low-voltage, low-power amplifier.
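
    A hedged sketch of the metamodel construction: a sampled device characteristic (a made-up transconductance response over two hypothetical operating-point variables) is interpolated with radial basis functions and then queried as a fast surrogate; the paper's BSIM3v3-accurate sampling scheme and OPD formulation are not reproduced.

        # Radial-basis-function metamodel of a sampled device characteristic.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(5)
        # samples over (overdrive voltage [V], drain current [mA]) -- hypothetical space
        x = rng.uniform([0.05, 0.1], [0.4, 2.0], size=(200, 2))
        gm = 2.0 * x[:, 1] / x[:, 0] * (1.0 + 0.02 * rng.standard_normal(200))  # mS

        metamodel = RBFInterpolator(x, gm, kernel="thin_plate_spline", smoothing=1e-3)

        query = np.array([[0.2, 1.0], [0.1, 0.5]])
        gm_pred = metamodel(query)    # fast surrogate evaluations for the optimizer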

  3. Modelling organic crystal structures using distributed multipole and polarizability-based model intermolecular potentials.

    PubMed

    Price, Sarah L; Leslie, Maurice; Welch, Gareth W A; Habgood, Matthew; Price, Louise S; Karamertzanis, Panagiotis G; Day, Graeme M

    2010-08-14

    Crystal structure prediction for organic molecules requires both the fast assessment of thousands to millions of crystal structures and the greatest possible accuracy in their relative energies. We describe a crystal lattice simulation program, DMACRYS, emphasizing the features that make it suitable for use in crystal structure prediction for pharmaceutical molecules using accurate anisotropic atom-atom model intermolecular potentials based on the theory of intermolecular forces. DMACRYS can optimize the lattice energy of a crystal, calculate the second-derivative properties, and reduce the symmetry of the space group to move away from a transition state. The calculated terahertz-frequency k = 0 rigid-body lattice modes and elastic tensor can be used to estimate free energies. The program uses a distributed multipole electrostatic model (Q, t = 00,...,44s) for the electrostatic fields, and can use anisotropic atom-atom repulsion models, damped isotropic dispersion up to R^-10, as well as a range of empirically fitted isotropic exp-6 atom-atom models with different definitions of atomic types. A new feature is that an accurate model for the induction energy contribution to the lattice energy has been implemented that uses atomic anisotropic dipole polarizability models (alpha, t = (10,10)...(11c,11s)) to evaluate the changes in the molecular charge density induced by the electrostatic field within the crystal. It is demonstrated, using the four polymorphs of the pharmaceutical carbamazepine C15H12N2O, that whilst reproducing crystal structures is relatively easy, calculating the polymorphic energy differences to the accuracy of a few kJ mol^-1 required for applications is very demanding of assumptions made in the mode
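
    An illustrative exp-6 atom-atom pair energy with Tang-Toennies-damped R^-6 dispersion, the kind of isotropic repulsion-dispersion term mentioned above; the parameters are made up (roughly the order of magnitude of carbon-carbon values), and the distributed-multipole electrostatic and induction terms that DMACRYS adds are not shown.

        # Exp-6 atom-atom pair energy with Tang-Toennies damping of the R^-6 term.
        import math

        def tang_toennies_f6(b, r):
            """Damping factor for the R^-6 dispersion term."""
            s = sum((b * r) ** k / math.factorial(k) for k in range(7))
            return 1.0 - math.exp(-b * r) * s

        def exp6_pair_energy(r, A=3.7e5, B=3.6, C6=2.4e3):
            """U(r) [kJ/mol] for separation r [Angstrom]: A*exp(-B*r) - f6(B,r)*C6/r**6."""
            return A * math.exp(-B * r) - tang_toennies_f6(B, r) * C6 / r ** 6

        u = [exp6_pair_energy(r / 10.0) for r in range(30, 61)]   # 3.0 to 6.0 Angstrom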