Predictive Software Cost Model Study. Volume I. Final Technical Report.
1980-06-01
development phase to identify computer resources necessary to support computer programs after transfer of program management responsibility and system... classical model development with refinements specifically applicable to avionics systems. The refinements are the result of the Phase I literature search
PDB_REDO: automated re-refinement of X-ray structure models in the PDB.
Joosten, Robbie P; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J; Vriend, Gert
2009-06-01
Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour.
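The improvement in fit to experimental X-ray data that PDB_REDO reports is conventionally quantified by the crystallographic R-factor (together with R-free). A minimal sketch of that measure follows; the structure-factor amplitudes are random stand-ins, and the before/after values are purely illustrative, not benchmark results.

```python
# Crystallographic R-factor: R = sum(| |Fobs| - |Fcalc| |) / sum(|Fobs|).
# Lower is better; re-refinement aims to reduce it (and R-free) without
# degrading geometry. All amplitudes below are synthetic stand-ins.
import numpy as np

def r_factor(f_obs, f_calc):
    return np.sum(np.abs(f_obs - np.abs(f_calc))) / np.sum(f_obs)

rng = np.random.default_rng(7)
f_obs = rng.uniform(1.0, 100.0, 5000)                # observed amplitudes
f_deposited = f_obs * rng.normal(1.0, 0.25, 5000)    # deposited model's fit
f_rerefined = f_obs * rng.normal(1.0, 0.18, 5000)    # fit after re-refinement
print(f"R before: {r_factor(f_obs, f_deposited):.3f}, "
      f"R after: {r_factor(f_obs, f_rerefined):.3f}")
```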
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchical elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations become smooth and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of the spectrum as well as active device simulations that model charge transport and Maxwell's equations will be presented.
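As a concrete illustration of the h-refinement loop described above, the sketch below flags 1D cells whose local error indicator exceeds a tolerance and splits only those cells. The stand-in field, midpoint-deviation indicator, and tolerance are illustrative assumptions, not the poster's a posteriori estimator.

```python
# A minimal sketch of h-refinement driven by a local error indicator,
# assuming a 1D mesh and a known stand-in field with a sharp feature.
import numpy as np

def solution(x):
    # Stand-in for a computed field with a sharp local feature at x = 0.5.
    return np.tanh(50.0 * (x - 0.5))

def refine(nodes, tol, max_passes=10):
    """Split every interval whose error indicator exceeds tol."""
    for _ in range(max_passes):
        mid = 0.5 * (nodes[:-1] + nodes[1:])
        # Indicator: deviation of the midpoint value from linear interpolation.
        err = np.abs(solution(mid)
                     - 0.5 * (solution(nodes[:-1]) + solution(nodes[1:])))
        flagged = err > tol
        if not flagged.any():
            break
        nodes = np.sort(np.concatenate([nodes, mid[flagged]]))
    return nodes

mesh = refine(np.linspace(0.0, 1.0, 11), tol=1e-3)
print(f"{mesh.size} nodes after adaptive refinement")  # clustered near x = 0.5
```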
Variability of Protein Structure Models from Electron Microscopy.
Monroe, Lyman; Terashi, Genki; Kihara, Daisuke
2017-04-04
An increasing number of biomolecular structures are solved by electron microscopy (EM). However, the quality of structure models determined from EM maps varies substantially. To understand to what extent structure models are supported by information embedded in EM maps, we used two computational structure refinement methods to examine how much the structures can be refined, using a dataset of 49 maps with accompanying structure models. The extent of structure modification, as well as the disagreement between refinement models produced by the two computational methods, scaled inversely with the global and local map resolutions. A general quantitative estimation of the deviations of structures at particular map resolutions is provided. Our results indicate that the observed discrepancy between the deposited map and the refined models is due to the lack of structural information present in EM maps, and thus these annotations must be used with caution for further applications.
Computational Methodologies for Real-Space Structural Refinement of Large Macromolecular Complexes
Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus
2017-01-01
The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875
Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models
NASA Astrophysics Data System (ADS)
Zang, Tianwu
Predicting the three-dimensional structure of proteins has been a major interest in modern computational biology. While many successful methods can generate models with 3–5 Å root-mean-square deviation (RMSD) from the solution, progress in refining these models has been quite slow. It is therefore urgently necessary to develop effective methods that bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, the refinement test of two CASP10 targets using the PCST-EBM method indicates that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round refinement protocol with PCST-SBM improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results confirm the crucial role of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.
The Collaborative Seismic Earth Model: Generation 1
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner
2018-05-01
We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
Composite panel development at JPL
NASA Technical Reports Server (NTRS)
Mcelroy, Paul; Helms, Rich
1988-01-01
Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high-precision reflector panels for LDR. The materials properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance prediction models and accommodate design refinement. The iterative approach of computer design and model refinement with performance testing and materials optimization has shown good results for LDR panels.
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
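The discretization-error estimate named above follows the standard Richardson form: comparing solutions at two step sizes gives err_fine ≈ (u_h − u_2h)/(2^p − 1) for a scheme of order p. The sketch below demonstrates the idea on a toy second-order quadrature "solver"; the integrand and the flagging rule are illustrative assumptions, not the report's MacCormack scheme.

```python
# A minimal sketch of a Richardson-extrapolation error estimate used as a
# refinement indicator; the midpoint rule stands in for the actual solver.
import numpy as np

def integrate(f, a, b, n):
    # Composite midpoint rule: a second-order (p = 2) stand-in "solver".
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return np.sum(f(x)) * (b - a) / n

f = lambda x: np.exp(-x**2)
p = 2
coarse = integrate(f, 0.0, 1.0, 50)
fine = integrate(f, 0.0, 1.0, 100)          # halved step size
error_estimate = (fine - coarse) / (2**p - 1)
print(f"estimated error on fine grid: {error_estimate:.2e}")
# A cell would be flagged for refinement when |error_estimate| > tolerance.
```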
Epistemic Gameplay and Discovery in Computational Model-Based Inquiry Activities
ERIC Educational Resources Information Center
Wilkerson, Michelle Hoda; Shareff, Rebecca; Laina, Vasiliki; Gravel, Brian
2018-01-01
In computational modeling activities, learners are expected to discover the inner workings of scientific and mathematical systems: First elaborating their understandings of a given system through constructing a computer model, then "debugging" that knowledge by testing and refining the model. While such activities have been shown to…
Hashemi, Sepehr; Armand, Mehran; Gordon, Chad R
2016-10-01
To describe the development and refinement of the computer-assisted planning and execution (CAPE) system for use in face-jaw-teeth transplants (FJTTs). Although successful, some maxillofacial transplants result in suboptimal hybrid occlusion and may require subsequent surgical orthognathic revisions. Unfortunately, the use of traditional dental casts and splints poses several compromising shortcomings in the context of FJTT and hybrid occlusion. Computer-assisted surgery may overcome these challenges. Therefore, the use of computer-assisted orthognathic techniques and functional planning may prevent the need for such revisions and improve facial-skeletal outcomes. A comprehensive CAPE system for use in FJTT was developed through a multicenter collaboration and refined using plastic models, live miniature swine surgery, and human cadaver models. The system marries preoperative surgical planning and intraoperative execution by allowing on-table navigation of the donor fragment relative to the recipient cranium, and real-time reporting of the patient's cephalometric measurements relative to a desired dental-skeletal outcome. FJTTs using live-animal and cadaveric models demonstrate the CAPE system to be accurate in navigation and beneficial in improving hybrid occlusion and other craniofacial outcomes. Future refinement of the CAPE system includes integration of more commonly performed orthognathic/maxillofacial procedures.
Combining Static Analysis and Model Checking for Software Analysis
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)
2003-01-01
We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point at which time the partial order information is safe and the whole state space is explored.
Computer simulation of refining process of a high consistency disc refiner based on CFD
NASA Astrophysics Data System (ADS)
Wang, Ping; Yang, Jianwei; Wang, Jiahui
2017-08-01
In order to reduce refining energy consumption, ANSYS CFX was used to simulate the refining process of a high consistency disc refiner. First, the pulp was assumed to be a uniform Newtonian fluid in a turbulent state, described with the k-ɛ flow model; the 3-D model of the disc refiner was then meshed and its boundary conditions were set; the refining process was then simulated and analyzed; finally, the viscosity of the pulp was measured. The results show that the CFD method can be used to analyze the pressure and torque on the disc plate, so as to calculate the refining power, and streamlines and velocity vectors can also be observed. CFD simulation can optimize the parameters of the bar and groove, which is of great significance for reducing experimental cost and cycle time.
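The refining-power step mentioned in the abstract reduces to basic rotational mechanics: P = 2πnT, with T the torque (here, as in the study, integrated from CFD pressure and shear loads on the plate) and n the rotational speed. A small sketch with illustrative numbers, not values from the study:

```python
# Net refining power from the torque on the rotating disc: P = 2*pi*n*T,
# with n in revolutions per second. Inputs below are assumed examples.
import math

def refining_power(torque_nm, rpm):
    """P = 2*pi*n*T with n the rotational speed in revolutions per second."""
    return 2.0 * math.pi * (rpm / 60.0) * torque_nm

torque = 850.0   # N*m, e.g. integrated from CFD loads on the disc plate
speed = 1500.0   # rpm
print(f"refining power: {refining_power(torque, speed) / 1e3:.1f} kW")
```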
Simplified and refined structural modeling for economical flutter analysis and design
NASA Technical Reports Server (NTRS)
Ricketts, R. H.; Sobieszczanski, J.
1977-01-01
A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed a refined model (RM), represents a high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called a simplified model (SM), has a relatively much smaller number of elements and degrees-of-freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to the corresponding substructures of RM. The structural data are automatically transferred between the two models. The bulk of analysis is performed on the SM with periodical verifications carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and acceleration of the job turn-around.
Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.
NASA Astrophysics Data System (ADS)
Liu, Y.; Li, Y.
2016-12-01
We present a 2D inverse algorithm for frequency domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multi-sources and parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out by using the Gauss-Newton algorithm and model perturbations at each iteration step are obtained by using the Inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.
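At the core of such a regularized Gauss-Newton inversion is the repeated solution of (JᵀJ + λI)Δm = Jᵀr for a model update from the Jacobian J and data residual r. The sketch below runs that loop on a toy exponential forward model; the dense linear solve stands in for the paper's finite-element forward modeling and Inexact Conjugate Gradient step, and all parameters are illustrative.

```python
# A minimal sketch of regularized Gauss-Newton iterations:
# solve (J^T J + lam*I) dm = J^T r, then update the model.
import numpy as np

def forward(m, x):
    return m[0] * np.exp(-m[1] * x)            # toy nonlinear "response"

def jacobian(m, x):
    return np.column_stack([np.exp(-m[1] * x),
                            -m[0] * x * np.exp(-m[1] * x)])

x = np.linspace(0.0, 2.0, 20)
d_obs = forward(np.array([2.0, 1.5]), x)       # synthetic data, true model
m = np.array([1.0, 1.0])                       # starting model
lam = 1e-3                                     # regularization weight
for _ in range(10):
    r = d_obs - forward(m, x)                  # data residual
    J = jacobian(m, x)
    dm = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    m = m + dm
print("recovered model:", m)                   # ~ [2.0, 1.5]
```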
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dongsheng; Lavender, Curt
2015-05-08
Improving yield strength and asymmetry is critical to expand applications of magnesium alloys in industry for higher fuel efficiency and lower CO2 production. Grain refinement is an efficient method for strengthening low-symmetry magnesium alloys, achievable by precipitate refinement. This study provides guidance on how precipitate engineering will improve mechanical properties through grain refinement. Precipitate refinement for improving yield strengths and asymmetry is simulated quantitatively by coupling a stochastic second phase grain refinement model and a modified polycrystalline crystal viscoplasticity φ-model. Using the stochastic second phase grain refinement model, grain size is quantitatively determined from the precipitate size and volume fraction. Yield strengths, yield asymmetry, and deformation behavior are calculated from the modified φ-model. If the precipitate shape and size remain constant, grain size decreases with increasing precipitate volume fraction. If the precipitate volume fraction is kept constant, grain size decreases with decreasing precipitate size during precipitate refinement. Yield strengths increase and asymmetry approaches one with decreasing grain size, contributed by increasing precipitate volume fraction or decreasing precipitate size.
NASA Astrophysics Data System (ADS)
Stockton, Gregory R.
2011-05-01
Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are being annually spent to keep data centers cool. The cooling and air flows dynamically change away from any predicted 3-D computational fluid dynamic modeling during construction and as time goes by, and the efficiency and effectiveness of the actual cooling rapidly departs even farther from predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced which reduces costs and also improves reliability.
NASA Astrophysics Data System (ADS)
Torkelson, G. Q.; Stoll, R., II
2017-12-01
Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.
2010-02-27
investigated in more detail. The intermediate level of fidelity, though more expensive, is then used to refine the analysis, add geometric detail, and... design stage is used to further refine the analysis, narrowing the design to a handful of options. Figure 1. Integrated Hierarchical Framework. In... computational structural and computational fluid modeling. For the structural analysis tool we used McIntosh Structural Dynamics' finite element code CNEVAL
Bellows flow-induced vibrations
NASA Technical Reports Server (NTRS)
Tygielski, P. J.; Smyly, H. M.; Gerlach, C. R.
1983-01-01
The bellows flow excitation mechanism and the results of a comprehensive test program are summarized. The analytical model for predicting bellows flow-induced stress is refined. The model includes the effects of an upstream elbow, arbitrary geometry, and multiple plies. A refined computer code for predicting flow-induced stress is described which allows life prediction if a material S-N diagram is available.
NASA Astrophysics Data System (ADS)
Papoutsakis, Andreas; Sazhin, Sergei S.; Begg, Steven; Danaila, Ionut; Luddens, Francky
2018-06-01
We present an Adaptive Mesh Refinement (AMR) method suitable for hybrid unstructured meshes that allows for local refinement and de-refinement of the computational grid during the evolution of the flow. The adaptive implementation of the Discontinuous Galerkin (DG) method introduced in this work (ForestDG) is based on a topological representation of the computational mesh by a hierarchical structure consisting of oct-, quad- and binary trees. Adaptive mesh refinement (h-refinement) enables us to increase the spatial resolution of the computational mesh in the vicinity of the points of interest such as interfaces, geometrical features, or flow discontinuities. The local increase in the expansion order (p-refinement) at areas of high strain rates or vorticity magnitude results in an increase of the order of accuracy in the region of shear layers and vortices. A graph of unitarian trees, representing hexahedral, prismatic and tetrahedral elements, is used for the representation of the initial domain. The ancestral elements of the mesh can be split into self-similar elements, allowing each tree to grow branches to an arbitrary level of refinement. The connectivity of the elements, their genealogy and their partitioning are described by linked lists of pointers. An explicit calculation of these relations, presented in this paper, facilitates the on-the-fly splitting, merging and repartitioning of the computational mesh by rearranging the links of each node of the tree with a minimal computational overhead. The modal basis used in the DG implementation facilitates the mapping of the fluxes across the non-conformal faces. The AMR methodology is presented and assessed using a series of inviscid and viscous test cases. Also, the AMR methodology is used for the modelling of the interaction between droplets and the carrier phase in a two-phase flow. This approach is applied to the analysis of a spray injected into a chamber of quiescent air, using the Eulerian-Lagrangian approach. This enables us to refine the computational mesh in the vicinity of the droplet parcels and accurately resolve the coupling between the two phases.
MODFLOW-LGR: Practical application to a large regional dataset
NASA Astrophysics Data System (ADS)
Barnes, D.; Coulibaly, K. M.
2011-12-01
In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
ERIC Educational Resources Information Center
1971
Computers have effected a comprehensive transformation of chemistry. Computers have greatly enhanced the chemist's ability to do model building, simulations, data refinement and reduction, analysis of data in terms of models, on-line data logging, automated control of experiments, quantum chemistry and statistical and mechanical calculations, and…
Refinement Of Hexahedral Cells In Euler Flow Computations
NASA Technical Reports Server (NTRS)
Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.
1996-01-01
Topologically Independent Grid, Euler Refinement (TIGER) computer program solves Euler equations of three-dimensional, unsteady flow of inviscid, compressible fluid by numerical integration on unstructured hexahedral coordinate grid refined where necessary to resolve shocks and other details. Hexahedral cells subdivided, each into eight smaller cells, as needed to refine computational grid in regions of high flow gradients. Grid Interactive Refinement and Flow-Field Examination (GIRAFFE) computer program written in conjunction with TIGER program to display computed flow-field data and to assist researcher in verifying specified boundary conditions and refining grid.
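The core refinement operation TIGER performs, splitting a hexahedral cell into eight children, can be sketched as an octree subdivision. The corner-plus-size cell representation below is an illustrative assumption, not the program's actual data layout.

```python
# A minimal sketch of subdividing an axis-aligned hexahedral cell into
# eight children, as in octree-style h-refinement.
from itertools import product

def subdivide(origin, size):
    """Split a hex cell given by its min-corner and edge lengths into 8."""
    half = tuple(s / 2.0 for s in size)
    children = []
    for offset in product((0, 1), repeat=3):
        corner = tuple(o + dx * h for o, dx, h in zip(origin, offset, half))
        children.append((corner, half))
    return children

cells = [((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))]
# Refine any cell flagged by a flow-gradient sensor; here we flag the first.
cells = subdivide(*cells[0])
print(len(cells), "child cells")   # 8
```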
Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization
2012-01-01
Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104
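The selection loop the abstract describes alternates confident-activity picks with novelty picks and retrains after each round. A minimal sketch under stated assumptions follows: a random-forest regressor and a nearest-neighbor distance stand in for the paper's binding-pocket model and 3D structural-novelty measure (requires scikit-learn; all data are synthetic).

```python
# Iterative "synthesis" rounds: 5 picks by predicted activity plus 5 picks
# by novelty (distance to the nearest already-selected compound), then refit.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                                  # descriptors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)   # "activity"

selected = list(range(10))                     # small starting set
for round_ in range(5):
    pool = [i for i in range(len(X)) if i not in selected]
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[selected], y[selected])
    pred = model.predict(X[pool])
    by_activity = [pool[i] for i in np.argsort(pred)[::-1][:5]]
    d = np.linalg.norm(X[pool][:, None] - X[selected][None], axis=2).min(axis=1)
    by_novelty = [pool[i] for i in np.argsort(d)[::-1]
                  if pool[i] not in by_activity][:5]
    selected += by_activity + by_novelty
print(f"{len(selected)} compounds 'synthesized' after 5 rounds")
```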
Specifying and Refining a Measurement Model for a Computer-Based Interactive Assessment
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…
ERIC Educational Resources Information Center
Reyes-Palomares, Armando; Sanchez-Jimenez, Francisca; Medina, Miguel Angel
2009-01-01
A comprehensive understanding of biological functions requires new systemic perspectives, such as those provided by systems biology. Systems biology approaches are hypothesis-driven and involve iterative rounds of model building, prediction, experimentation, model refinement, and development. Developments in computer science are allowing for ever…
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-08-18
Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.
Determination of the optimal level for combining area and yield estimates
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.
1981-01-01
Several levels of obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district, and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state level and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in the evaluation of production estimates due to the lack of county area variances.
Stepwise construction of a metabolic network in Event-B: The heat shock response.
Sanwal, Usman; Petre, Luigia; Petre, Ion
2017-12-01
There is a high interest in constructing large, detailed computational models for biological processes. This is often done by putting together existing submodels and adding extra details/knowledge to them. The result of such approaches is usually a model that can only answer questions on a very specific level of detail and thus, ultimately, is of limited use. We focus instead on an approach to systematically add details to a model, with formal verification of its consistency at each step. In this way, one obtains a set of reusable models, at different levels of abstraction, to be used for different purposes depending on the question to address. We demonstrate this approach using Event-B, a computational framework introduced to develop formal specifications of distributed software systems. We first describe how to model generic metabolic networks in Event-B. Then, we apply this method to modeling the biological heat shock response in eukaryotic cells, using Event-B refinement techniques. The advantage of using Event-B consists in having refinement as an intrinsic feature; this provides as a final result not only a correct model, but a chain of models automatically linked by refinement, each of which is provably correct and reusable. This is a proof-of-concept that refinement in Event-B is suitable for biomodeling, serving to master biological complexity.
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
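The dual-weighted residual estimator at the heart of this method weights the primal residual by an adjoint solution tied to the receiver functional, so refinement concentrates where error actually corrupts the receiver response. The sketch below illustrates the idea for a 1D Poisson problem with a point "receiver"; the setting, functional, and flagging threshold are illustrative assumptions, not the paper's EM discretization.

```python
# Dual-weighted residual (DWR) sketch: the strong residual of a coarse
# solution, evaluated on a finer grid, is weighted by the adjoint solution
# of a point "receiver" functional to form element indicators.
import numpy as np

def poisson_matrix(n):
    h = 1.0 / n
    main, off = np.full(n - 1, 2.0), -np.ones(n - 2)
    return (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2, h

f = lambda x: np.exp(x)

# Coarse primal solve of -u'' = f with homogeneous Dirichlet conditions.
nc = 20
Ac, hc = poisson_matrix(nc)
xc = np.linspace(0.0, 1.0, nc + 1)
uc = np.zeros(nc + 1)
uc[1:-1] = np.linalg.solve(Ac, f(xc[1:-1]))

# Residual of the interpolated coarse solution on a finer grid.
nf = 40
Af, hf = poisson_matrix(nf)
xf = np.linspace(0.0, 1.0, nf + 1)
uf = np.interp(xf, xc, uc)
residual = f(xf[1:-1]) - Af @ uf[1:-1]

# Adjoint solve: goal functional is the solution value at the "receiver".
g = np.zeros(nf - 1)
g[np.argmin(np.abs(xf[1:-1] - 0.8))] = 1.0
z = np.linalg.solve(Af.T, g)

eta = np.abs(residual * z)               # element-wise DWR indicator
print("cells flagged for refinement:", int(np.sum(eta > 2.0 * eta.mean())))
```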
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
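The control flow of the proposed abstraction-refinement loop resembles counterexample-guided refinement: check a conservative abstraction, return success or a real violation, otherwise refine on the spurious counterexample and repeat. The toy below models a "component" as a finite trace set purely to show that loop; it is illustrative of the control flow only, not of the automata-based machinery in the paper.

```python
# Toy counterexample-guided refinement over finite trace sets. The
# abstraction starts as the coarsest over-approximation (all traces) and
# is refined by discarding spurious counterexamples one at a time.
def cegar(component_traces, bad_traces, universe):
    abstraction = set(universe)                  # conservative abstraction
    while True:
        cex = next(iter(sorted(abstraction & bad_traces)), None)
        if cex is None:
            return "property holds", abstraction
        if cex in component_traces:
            return "real violation", cex
        abstraction.discard(cex)                 # refine: drop spurious trace

universe = {"ab", "ba", "aa", "bb"}
component = {"ab", "aa"}
bad = {"ba", "bb"}                               # property-violating traces
print(cegar(component, bad, universe))           # property holds
```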
Michael Levitt and Computational Biology
molecular structures, compute structural changes, refine experimental structure, model enzyme catalysis and... (May 2004)
The Collaborative Seismic Earth Model Project
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Herwaarden, D. P.; Afanasiev, M.
2017-12-01
We present the first generation of the Collaborative Seismic Earth Model (CSEM). This effort is intended to address grand challenges in tomography that currently inhibit imaging the Earth's interior across the seismically accessible scales: [1] For decades to come, computational resources will remain insufficient for the exploitation of the full observable seismic bandwidth. [2] With the manpower of individual research groups, only small fractions of available waveform data can be incorporated into seismic tomographies. [3] The limited incorporation of prior knowledge on 3D structure leads to slow progress and inefficient use of resources. The CSEM is a multi-scale model of global 3D Earth structure that evolves continuously through successive regional refinements. Taking the current state of the CSEM as the initial model, these refinements are contributed by external collaborators and used to advance the CSEM to the next state. This mode of operation allows the CSEM to [1] harness the distributed human and computing power of the community, [2] make consistent use of prior knowledge, and [3] combine different tomographic techniques, needed to cover the seismic data bandwidth. Furthermore, the CSEM has the potential to serve as a unified and accessible representation of tomographic Earth models. Generation 1 comprises around 15 regional tomographic refinements, computed with full-waveform inversion. These include continental-scale mantle models of North America, Australasia, Europe and the South Atlantic, as well as detailed regional models of the crust beneath the Iberian Peninsula and western Turkey. A global-scale full-waveform inversion ensures that regional refinements are consistent with whole-Earth structure. This first generation will serve as the basis for further automation and methodological improvements concerning validation and uncertainty quantification.
A demonstrative model of a lunar base simulation on a personal computer
NASA Technical Reports Server (NTRS)
1985-01-01
The initial demonstration model of a lunar base simulation is described. This initial model was developed at the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to base the demonstration model on a personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed in a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model are planned in the near future.
Modeling flow at the nozzle of a solid rocket motor
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1991-01-01
The mechanical behavior of a rocket motor internal flow field is governed by a system of nonlinear partial differential equations which can be solved numerically. The accuracy and convergence of the solution of the system of equations depend largely on how precisely the sharp gradients can be resolved. An adaptive grid generation scheme is incorporated into the computer algorithm to enhance the capability of numerical modeling. With this scheme, the grid is refined as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by putting the refinement part of grid generation into the computer algorithm.
Platania, Chiara Bianca Maria; Salomone, Salvatore; Leggio, Gian Marco; Drago, Filippo; Bucolo, Claudio
2012-01-01
Dopamine (DA) receptors, a class of G-protein coupled receptors (GPCRs), have been targeted for drug development for the treatment of neurological, psychiatric and ocular disorders. The lack of structural information about GPCRs and their ligand complexes has prompted the development of homology models of these proteins aimed at structure-based drug design. Crystal structure of human dopamine D3 (hD3) receptor has been recently solved. Based on the hD3 receptor crystal structure we generated dopamine D2 and D3 receptor models and refined them with molecular dynamics (MD) protocol. Refined structures, obtained from the MD simulations in membrane environment, were subsequently used in molecular docking studies in order to investigate potential sites of interaction. The structure of hD3 and hD2L receptors was differentiated by means of MD simulations and D3 selective ligands were discriminated, in terms of binding energy, by docking calculation. Robust correlation of computed and experimental Ki was obtained for hD3 and hD2L receptor ligands. In conclusion, the present computational approach seems suitable to build and refine structure models of homologous dopamine receptors that may be of value for structure-based drug discovery of selective dopaminergic ligands. PMID:22970199
Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua
2018-02-01
A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality when compared with the uniform refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform refinement control vector parameterization method adopted as the comparative base. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved.
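One way to picture the HHT-guided time-grid refinement is: extract an instantaneous frequency from the signal via the Hilbert transform, then place denser control-grid nodes where the frequency is high. The sketch below does this for a chirp; the signal, the median threshold, and the grid densities are illustrative guesses at the approach, not the paper's algorithm.

```python
# Non-uniform time grid from Hilbert-transform frequency analysis:
# coarse nodes everywhere, extra nodes where instantaneous frequency is high.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 10.0, 2001)
sig = np.sin(2 * np.pi * (0.2 + 0.08 * t) * t)       # chirp: frequency grows

phase = np.unwrap(np.angle(hilbert(sig)))
inst_freq = np.gradient(phase, t) / (2 * np.pi)      # instantaneous frequency

coarse = t[::200]
hot = inst_freq > np.median(inst_freq)               # "high activity" regions
fine_extra = t[::100][hot[::100]]
grid = np.union1d(coarse, fine_extra)
print(f"{grid.size} control-grid nodes, denser toward the high-frequency end")
```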
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid --FAC-- algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablonowski, Christiane
The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to model these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The foci of the investigations have lain on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
Reynolds-Averaged Navier-Stokes Solutions to Flat Plate Film Cooling Scenarios
NASA Technical Reports Server (NTRS)
Johnson, Perry L.; Shyam, Vikram; Hah, Chunill
2011-01-01
The predictions of several Reynolds-Averaged Navier-Stokes solutions for a baseline film cooling geometry are analyzed and compared with experimental data. The Fluent finite volume code was used to perform the computations with the realizable k-epsilon turbulence model. The film hole was angled at 35° to the crossflow with a Reynolds number of 17,400. Multiple length-to-diameter ratios (1.75 and 3.5) as well as momentum flux ratios (0.125 and 0.5) were simulated with various domains, boundary conditions, and grid refinements. The coolant to mainstream density ratio was maintained at 2.0 for all scenarios. Computational domain and boundary condition variations show the ability to reduce the computational cost as compared to previous studies. A number of grid refinement and coarsening variations are compared for further insights into the reduction of computational cost. Liberal refinement in the near hole region is valuable, especially for higher momentum jets that tend to lift off and create a recirculating flow. A lack of proper refinement in the near hole region can severely diminish the accuracy of the solution, even in the far region. The effects of momentum ratio and hole length-to-diameter ratio are also discussed.
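For reference, the similarity parameters quoted above are linked by simple algebra: with density ratio DR = ρc/ρ∞ fixed at 2.0, the blowing ratio follows from the momentum flux ratio as M = √(I·DR). A short worked check for the two simulated momentum flux ratios:

```python
# Film-cooling parameter algebra: I = M^2 / DR, so M = sqrt(I * DR) and the
# velocity ratio is M / DR. DR and I match the study; the rest is arithmetic.
import math

DR = 2.0
for I in (0.125, 0.5):
    M = math.sqrt(I * DR)              # M = rho_c*u_c / (rho_inf*u_inf)
    VR = M / DR                        # velocity ratio u_c / u_inf
    print(f"I = {I:5.3f} -> blowing ratio M = {M:.2f}, velocity ratio {VR:.3f}")
```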
Global/local methods for probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.
1993-01-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations with a local more refined model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
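One step of the sequential strategy, ranking the epistemic variables by a variance-based first-order sensitivity index and refining only the top one, can be sketched with a standard pick-freeze Sobol' estimator. The toy response function below is an illustrative assumption, not the GTM model.

```python
# First-order Sobol' indices via the pick-freeze (Saltelli-form) estimator,
# used to choose the single most influential variable for refinement.
import numpy as np

rng = np.random.default_rng(1)
model = lambda x: x[:, 0] + 0.2 * x[:, 1] ** 2 + 3.0 * x[:, 2]   # toy response

n, d = 20000, 3
A = rng.uniform(0.0, 1.0, (n, d))        # sample matrix A
B = rng.uniform(0.0, 1.0, (n, d))        # independent sample matrix B
yA, yB = model(A), model(B)
var_total = yA.var()

indices = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "pick-freeze": resample variable i only
    Si = np.mean(yB * (model(ABi) - yA)) / var_total   # first-order estimator
    indices.append(Si)

print("first-order Sobol' indices:", np.round(indices, 3))
print("variable to refine first:", int(np.argmax(indices)))   # -> 2
```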
Statistical Methodologies to Integrate Experimental and Computational Research
NASA Technical Reports Server (NTRS)
Parker, P. A.; Johnson, R. T.; Montgomery, D. C.
2008-01-01
Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomenon and code development performance; supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.
Model of Silicon Refining During Tapping: Removal of Ca, Al, and Other Selected Element Groups
NASA Astrophysics Data System (ADS)
Olsen, Jan Erik; Kero, Ida T.; Engh, Thorvald A.; Tranell, Gabriella
2017-04-01
A mathematical model of industrial refining of silicon alloys has been developed for the so-called oxidative ladle refining process. It is a lumped (zero-dimensional) model, based on the mass balances of metal, slag, and gas in the ladle, developed to run with relatively short computational times for the sake of industrial relevance. The model accounts for a semi-continuous process that includes both the tapping and post-tapping refining stages. It predicts the concentrations of Ca, Al, and trace elements, most notably the alkali metals, alkaline earth metals, and rare earth metals. The predictive power of the model depends on the quality of the model coefficients: the kinetic coefficient, τ, and the equilibrium partition coefficient, L, for a given element. A sensitivity analysis indicates that the model results are most sensitive to L. The model has been compared with industrial measurement data and found able to predict the data qualitatively and, to some extent, quantitatively. The model is very well suited to alkali and alkaline earth metals, which respond relatively quickly to the refining process. It is less well suited to elements such as the lanthanides and Al, which are refined more slowly. A major challenge for predicting the behavior of the rare earth metals is that reliable thermodynamic data for true equilibrium conditions relevant to the industrial process are not typically available in the literature.
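A minimal sketch of the lumped-kinetics idea, assuming a simple first-order relaxation of the metal-phase concentration toward slag/metal equilibrium (the actual model's mass balances are richer; the function name and all numbers below are invented for illustration): τ sets how fast an element responds and L sets where it ends up.

```python
import numpy as np

def refine_element(c0, c_slag, L, tau, t_end=600.0, dt=1.0):
    """Hypothetical first-order relaxation of a metal-phase concentration
    toward slag/metal equilibrium, c_eq = L * c_slag, with kinetic time tau."""
    c = c0
    for _ in np.arange(0.0, t_end, dt):
        c_eq = L * c_slag
        c += dt * (c_eq - c) / tau      # explicit Euler step of dc/dt
    return c

# A fast-responding element (alkaline earth, small tau) versus a slow one
# (lanthanide, large tau) over the same tapping period.
print(refine_element(c0=0.10, c_slag=0.01, L=0.2, tau=60.0))
print(refine_element(c0=0.10, c_slag=0.01, L=0.2, tau=3000.0))
```

This also shows why the abstract's sensitivity claim is plausible: for fast elements the final concentration is pinned at the equilibrium value L·c_slag, so any error in L carries straight through to the prediction.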
Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions
Mehl, S.; Hill, M.C.
2010-01-01
This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
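For context, a minimal sketch of the traditional conductance calculation the study interrogates, using the common C = K·L·W/b form (the variable names are generic, not from the paper): note that naively rescaling the reach length with the cell size preserves the summed leakage only if the simulated heads do not change, which is exactly the assumption the refinement experiments call into question.

```python
def streambed_conductance(k_bed, reach_length, width, bed_thickness):
    """Traditional Cauchy-condition conductance: C = K * L * W / b."""
    return k_bed * reach_length * width / bed_thickness

def leakage(conductance, stream_stage, aquifer_head):
    """Flux across the streambed for a Cauchy boundary condition."""
    return conductance * (stream_stage - aquifer_head)

# Halving the cell size halves the reach length per cell.  With identical
# heads the two leakages below agree, but refinement changes the simulated
# heads near the stream, so this naive scaling can misstate the true flux.
c_coarse = streambed_conductance(0.5, 100.0, 5.0, 1.0)
c_fine = streambed_conductance(0.5, 50.0, 5.0, 1.0)
print(leakage(c_coarse, 10.0, 9.2), 2 * leakage(c_fine, 10.0, 9.2))
```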
NASA Astrophysics Data System (ADS)
Apel, M.; Eiken, J.; Hecht, U.
2014-02-01
This paper briefly reviews phase field models applied to the simulation of heterogeneous nucleation and subsequent growth, with special emphasis on grain refinement by inoculation. The spherical cap and free growth model (e.g. A.L. Greer et al., Acta Mater. 48, 2823 (2000)) has proven its applicability for different metallic systems, e.g. Al- or Mg-based alloys, by computing the grain refinement effect achieved by inoculation of the melt with inert seeding particles. However, recent experiments with peritectic Ti-Al-B alloys revealed that the grain refinement by TiB2 is less effective than predicted by the model. Phase field simulations can be applied to validate the approximations of the spherical cap and free growth model, e.g. by explicitly computing the latent heat release associated with different nucleation and growth scenarios. Here, simulation results for point-shaped nucleation, as well as for partially and completely wetted plate-like seed particles, are discussed with respect to recalescence and impact on grain refinement. It is shown that, particularly for large seeding particles (up to 30 μm), the free growth morphology clearly deviates from the assumed spherical cap, and the initial growth, until the free growth barrier is reached, contributes significantly to the latent heat release and determines the recalescence temperature.
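The free-growth criterion at the heart of the cited Greer et al. model is compact enough to sketch: a seed particle of diameter d can initiate a grain once the melt undercooling exceeds ΔT_fg = 4σ/(ΔS_v·d), so the largest particles fire first. The material values below are illustrative, not measured.

```python
def free_growth_undercooling(sigma, dSv, d):
    """Free-growth barrier of Greer et al.: dT_fg = 4*sigma / (dSv * d).
    sigma: solid-liquid interfacial energy [J/m^2]
    dSv:   entropy of fusion per unit volume [J/(K m^3)]
    d:     inoculant particle diameter [m]"""
    return 4.0 * sigma / (dSv * d)

# Illustrative values for an Al-like melt: larger particles initiate grains
# at smaller undercooling, so the biggest seeds fire first on cooling.
sigma, dSv = 0.158, 1.0e6
for d_um in (0.5, 1.0, 5.0, 30.0):
    print(d_um, "um ->", round(free_growth_undercooling(sigma, dSv, d_um * 1e-6), 3), "K")
```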
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on an adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gains, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, a low-rank approximation of the linearized model resolution matrix is used. To fill the gap between initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool for deriving initial meshes that account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low; such meshes do, however, depend on the initial model guess. Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
Early differentiation of the Moon: Experimental and modeling studies
NASA Technical Reports Server (NTRS)
Longhi, J.
1986-01-01
Major accomplishments include the mapping out of liquidus boundaries of lunar and meteoritic basalts at low pressure; the refinement of computer models that simulate low pressure fractional crystallization; the development of a computer model to calculate high pressure partial melting of the lunar and Martian interiors; and the proposal of a hypothesis of early lunar differentiation based upon terrestrial analogs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marinov, N.M.; Westbrook, C.K.; Cloutman, L.D.
Work being carried out at LLNL has concentrated on studies of the role of chemical kinetics in a variety of problems related to hydrogen combustion in practical combustion systems, with an emphasis on vehicle propulsion. Use of hydrogen offers significant advantages over fossil fuels, and computer modeling provides advantages when used in concert with experimental studies. Many numerical "experiments" can be carried out quickly and efficiently, reducing the cost and time of system development, and many new and speculative concepts can be screened to identify those with sufficient promise to pursue experimentally. This project uses chemical kinetic and fluid dynamic computational modeling to examine the combustion characteristics of systems burning hydrogen, either as the only fuel or mixed with natural gas. Oxidation kinetics are combined with pollutant formation kinetics, including formation of oxides of nitrogen but also including air toxics in natural gas combustion. We have refined many of the elementary kinetic reaction steps in the detailed reaction mechanism for hydrogen oxidation. To extend the model to pressures characteristic of internal combustion engines, it was necessary to apply theoretical pressure falloff formalisms for several key steps in the reaction mechanism. We have continued development of simplified reaction mechanisms for hydrogen oxidation, we have implemented those mechanisms into multidimensional computational fluid dynamics models, and we have used models of chemistry and fluid dynamics to address selected application problems. At the present time, we are using computed high-pressure flame and auto-ignition data to further refine the simplified kinetics models that are then used in multidimensional fluid mechanics models. Detailed kinetics studies have investigated hydrogen flames and ignition of hydrogen behind shock waves, intended to refine the detailed reaction mechanisms.
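The pressure-falloff formalism mentioned above blends low- and high-pressure-limit rate constants. A minimal sketch of the simplest (Lindemann) form follows; the Arrhenius parameters are invented for illustration, not taken from any specific mechanism, and the Troe broadening factor used in production mechanisms is omitted.

```python
import numpy as np

def arrhenius(A, n, Ea, T, R=1.987):
    """Modified Arrhenius form k = A * T^n * exp(-Ea / (R*T)), Ea in cal/mol."""
    return A * T ** n * np.exp(-Ea / (R * T))

def lindemann_rate(k0, kinf, M):
    """Lindemann pressure-falloff blend: k = kinf * Pr / (1 + Pr), with
    reduced pressure Pr = k0*[M]/kinf.  (The Troe broadening factor F that
    detailed mechanisms apply on top of this is omitted here.)"""
    pr = k0 * M / kinf
    return kinf * pr / (1.0 + pr)

# Invented parameters, chosen only so the falloff transition is visible as
# the third-body concentration [M] increases toward engine-like densities.
T = 1500.0
k0 = arrhenius(1.0e16, 0.0, 2000.0, T)      # low-pressure limit
kinf = arrhenius(1.0e13, 0.0, 1000.0, T)    # high-pressure limit
for M in (1e-4, 1e-3, 1e-2):
    print(f"[M] = {M:.0e}:  k = {lindemann_rate(k0, kinf, M):.3e}")
```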
Diffraction-geometry refinement in the DIALS framework
Waterman, David G.; Winter, Graeme; Gildea, Richard J.; ...
2016-03-30
Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails.
Coarse Grained Model for Biological Simulations: Recent Refinements and Validation
Vicatos, Spyridon; Rychkova, Anna; Mukherjee, Shayantani; Warshel, Arieh
2014-01-01
Exploring the free energy landscape of proteins and modeling the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of various simplified coarse grained (CG) models offers an effective way of sampling the landscape, but most current models are not expected to give a reliable description of protein stability and functional aspects. The main problem is associated with insufficient focus on the electrostatic features of the model. In this respect our recent CG model offers a significant advantage, as it has been refined while focusing on its electrostatic free energy. Here we review the current state of our model, describing recent refinements, extensions and validation studies while focusing on demonstrating key applications. These include studies of protein stability; extensions of the model to include membranes, electrolytes and electrodes; and studies of voltage-activated proteins, protein insertion through the translocon, the action of molecular motors, and even the coupling of the stalled ribosome and the translocon. These examples illustrate the general potential of our approach in overcoming major challenges in studies of structure-function correlation in proteins and large macromolecular complexes. PMID:25050439
Progress and Challenges in Coupled Hydrodynamic-Ecological Estuarine Modeling
Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational po...
Large-scale 3D geoelectromagnetic modeling using parallel adaptive high-order finite element method
Grayver, Alexander V.; Kolev, Tzanio V.
2015-11-01
Here, we have investigated the use of the adaptive high-order finite-element method (FEM) for geoelectromagnetic modeling. Because high-order FEM is challenging from the numerical and computational points of view, most published finite-element studies in geoelectromagnetics use the lowest-order formulation. Solution of the resulting large system of linear equations poses the main practical challenge. We have developed a fully parallel and distributed, robust and scalable linear solver based on the optimal block-diagonal and auxiliary space preconditioners. The solver was found to be efficient for high finite element orders, unstructured and nonconforming locally refined meshes, a wide range of frequencies, large conductivity contrasts, and large numbers of degrees of freedom (DoFs). Furthermore, the presented linear solver is in essence algebraic; i.e., it acts on the matrix-vector level and thus requires no information about the discretization, boundary conditions, or physical source used, making it readily efficient for a wide range of electromagnetic modeling problems. To get accurate solutions at reduced computational cost, we have also implemented goal-oriented adaptive mesh refinement. The numerical tests indicated that if highly accurate modeling results were required, the high-order FEM in combination with the goal-oriented local mesh refinement required less computational time and fewer DoFs than the lowest-order adaptive FEM.
2016-01-01
Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538
Warren, Jeff; Iversen, Colleen; Brooks, Jonathan; Ricciuto, Daniel
2018-02-13
Using dogwood trees, Oak Ridge National Laboratory researchers are gaining a better understanding of the role photosynthesis and respiration play in the atmospheric carbon dioxide cycle. Their findings will aid computer modelers in improving the accuracy of climate simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement by simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree, or may leave it out completely.
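A toy sketch of the "kick" step, assuming unit point scatterers and invented coordinates (real refinement uses proper form factors and the full ML machinery; every name and number here is illustrative): the coordinates are randomly displaced, and the agreement between perturbed and unperturbed amplitudes stands in for an error estimate computed from the work set rather than from a held-out test set.

```python
import numpy as np

rng = np.random.default_rng(1)

def structure_factors(coords, hkl):
    """Toy structure-factor sum F(h) = sum_j exp(2*pi*i h.x_j), unit scatterers."""
    return np.exp(2j * np.pi * hkl @ coords.T).sum(axis=1)

# Hypothetical model: fractional coordinates of 50 atoms, 200 reflections.
coords = rng.random((50, 3))
hkl = rng.integers(-10, 11, size=(200, 3)).astype(float)
f_model = structure_factors(coords, hkl)

# "Free kick": a random displacement (~0.3 A in a 30 A cell) simulates model
# error; the amplitude correlation before/after the kick plays the role of
# the phase-error estimate, with no test-set reflections set aside.
kick = rng.normal(scale=0.3 / 30.0, size=coords.shape)
f_kicked = structure_factors(coords + kick, hkl)
corr = np.corrcoef(np.abs(f_model), np.abs(f_kicked))[0, 1]
print("amplitude correlation after kick:", round(corr, 3))
```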
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
An explicit flow solver, applicable to the hierarchy of model equations ranging from Euler to full Navier-Stokes, is combined with several techniques designed to reduce computational expense. The computational domain consists of local grid refinements embedded in a global coarse mesh, where the locations of these refinements are defined by the physics of the flow. Flow characteristics are also used to determine which set of model equations is appropriate for solution in each region, thereby reducing not only the number of grid points at which the solution must be obtained, but also the computational effort required to get that solution. Acceleration to steady-state is achieved by applying multigrid on each of the subgrids, regardless of the particular model equations being solved. Since each of these components is explicit, advantage can readily be taken of the vector- and parallel-processing capabilities of machines such as the Cray X-MP and Cray-2.
An Adaptive Mesh Algorithm: Mesh Structure and Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scannapieco, Anthony J.
2016-06-21
The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional benefit of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists, at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally sparse.
Satellite SAR geocoding with refined RPC model
NASA Astrophysics Data System (ADS)
Zhang, Lu; Balz, Timo; Liao, Mingsheng
2012-04-01
Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be remarkably improved by at least 16 times. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice should be using the proper DEM datasets of spatial resolution comparable to that of the SAR images.
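To make the RPC form concrete, here is a minimal sketch of the standard rational-polynomial projection (the coefficient values and normalization numbers are dummies, not from any real sensor): each normalized image coordinate is the ratio of two 20-term cubic polynomials in normalized latitude, longitude, and height, and a "refined" RPC is simply one re-fitted after the range delay and azimuth timing have been corrected in the underlying RD model.

```python
import numpy as np

def rpc_terms(P, L, H):
    """The 20 cubic monomials of the standard RPC model, in the conventional
    order (1, L, P, H, LP, LH, PH, LL, PP, HH, PLH, ...)."""
    return np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                     P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
                     P*H*H, L*L*H, P*P*H, H**3])

def rpc_project(num_coef, den_coef, lat, lon, h, norm):
    """Normalized image coordinate as a cubic ratio; `norm` holds the
    offset/scale pairs used to normalize the ground coordinates."""
    P = (lat - norm["lat_off"]) / norm["lat_scale"]
    L = (lon - norm["lon_off"]) / norm["lon_scale"]
    H = (h - norm["h_off"]) / norm["h_scale"]
    t = rpc_terms(P, L, H)
    return np.dot(num_coef, t) / np.dot(den_coef, t)

# Dummy normalization and coefficients, purely to exercise the form.
norm = dict(lat_off=30.0, lat_scale=0.1, lon_off=114.0, lon_scale=0.1,
            h_off=50.0, h_scale=500.0)
num = np.zeros(20); num[0:4] = [0.01, 0.98, 0.02, 0.001]
den = np.zeros(20); den[0] = 1.0
print(rpc_project(num, den, 30.05, 114.02, 120.0, norm))
```

Because evaluating such a ratio is far cheaper than iterating the Range-Doppler equations per pixel, the large geocoding speedup reported in the abstract is unsurprising.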
NASA Technical Reports Server (NTRS)
Anusonti-Inthra, Phuriwat
2010-01-01
A novel Computational Fluid Dynamics (CFD) coupling framework using a conventional Reynolds-Averaged Navier-Stokes (RANS) solver to resolve the near-body flow field and a Particle-based Vorticity Transport Method (PVTM) to predict the evolution of the far-field wake is developed, refined, and evaluated for fixed- and rotary-wing cases. For the rotary-wing case, the RANS/PVTM modules are loosely coupled to a Computational Structural Dynamics (CSD) module that provides blade motion and vehicle trim information. The PVTM module is refined by the addition of vortex diffusion, stretching, and reorientation models as well as an efficient memory model. Results from the coupled framework are compared with several experimental data sets (a fixed-wing wind tunnel test and a rotary-wing hover test).
Computational methods for global/local analysis
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.
Electron Beam Melting and Refining of Metals: Computational Modeling and Optimization
Vutova, Katia; Donchev, Veliko
2013-01-01
Computational modeling offers an opportunity for a better understanding and investigation of thermal transfer mechanisms. It can be used for the optimization of the electron beam melting process and for obtaining new materials with improved characteristics that have many applications in the power industry, medicine, instrument engineering, electronics, etc. A time-dependent 3D axisymmetric heat model for simulation of thermal transfer in metal ingots solidified in a water-cooled crucible during electron beam melting and refining (EBMR) is developed. The model predicts the change in the temperature field in the cast ingot during the interaction of the beam with the material. A modified Pismen-Rekford numerical scheme to discretize the analytical model is developed. The equation systems describing the thermal processes and the main characteristics of the developed numerical method are presented. In order to optimize the technological regimes, different criteria for better refinement and for obtaining dendritic crystal structures are proposed. Analytical problems of mathematical optimization are formulated, discretized and heuristically solved by cluster methods. Using simulation results that are important for practice, suggestions can be made for EBMR technology optimization. The proposed tool is important and useful for the study, control and optimization of EBMR process parameters and for improving the quality of the newly produced materials. PMID:28788351
Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core
NASA Astrophysics Data System (ADS)
Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.
2009-12-01
One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (the Adaptive Blocks for Locally Cartesian Topologies library, ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells, the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order along a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks, transparent to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model's dynamics (transport) and physics (sources and sinks) for different model resolutions needed for the inclusion of cloud formation.
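A minimal sketch of the scan idea, assuming a toy block layout (the dictionary fields and values are invented, not ABLCarT's actual data structures): blocks are visited bottom-up, each integrates its own column segment, and the running integral is carried across block boundaries so the result is independent of block resolution or processor ownership.

```python
import numpy as np

def vertical_scan(blocks):
    """Integrate bottom-up along a column, passing the running integral
    across vertically stacked blocks in order; in the parallel library the
    carry would be communicated between the owning processors."""
    carry = 0.0
    for block in sorted(blocks, key=lambda b: b["z0"]):   # bottom block first
        block["integral"] = carry + np.cumsum(block["dp"])
        carry = block["integral"][-1]
    return blocks

# Two stacked blocks with different vertical extents (toy increments dp).
blocks = [{"z0": 0.0, "dp": np.array([1.0, 1.0])},
          {"z0": 2.0, "dp": np.array([0.5, 0.5])}]
print([b["integral"] for b in vertical_scan(blocks)])     # [1, 2] then [2.5, 3]
```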
Polarizable atomic multipole X-ray refinement: application to peptide crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnieders, Michael J.; Fenn, Timothy D.; Howard Hughes Medical Institute
2009-09-01
A method to accelerate the computation of structure factors from an electron density described by anisotropic and aspherical atomic form factors via fast Fourier transformation is described for the first time. Recent advances in computational chemistry have produced force fields based on a polarizable atomic multipole description of biomolecular electrostatics. In this work, the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) force field is applied to restrained refinement of molecular models against X-ray diffraction data from peptide crystals. A new formalism is also developed to compute anisotropic and aspherical structure factors using fast Fourier transformation (FFT) of Cartesian Gaussian multipoles. Relative to direct summation, the FFT approach can give a speedup of more than an order of magnitude for aspherical refinement of ultrahigh-resolution data sets. Use of a sublattice formalism makes the method highly parallelizable. Application of the Cartesian Gaussian multipole scattering model to a series of four peptide crystals using multipole coefficients from the AMOEBA force field demonstrates that AMOEBA systematically underestimates electron density at bond centers. For the trigonal and tetrahedral bonding geometries common in organic chemistry, an atomic multipole expansion through hexadecapole order is required to explain the bond electron density. Alternatively, the addition of interatomic scattering (IAS) sites to the AMOEBA-based density captured bonding effects with fewer parameters. For a series of four peptide crystals, the AMOEBA-IAS model lowered Rfree by 20-40% relative to the original spherically symmetric scattering model.
Pirolli, Peter
2016-08-01
Computational models were developed in the ACT-R neurocognitive architecture to address some aspects of the dynamics of behavior change. The simulations aim to address the day-to-day goal achievement data available from mobile health systems. The models refine current psychological theories of self-efficacy, intended effort, and habit formation, and provide an account for the mechanisms by which goal personalization, implementation intentions, and remindings work.
Molecular dynamics-based refinement and validation for sub-5 Å cryo-electron microscopy maps.
Singharoy, Abhishek; Teo, Ivan; McGreevy, Ryan; Stone, John E; Zhao, Jianhua; Schulten, Klaus
2016-07-07
Two structure determination methods, based on the molecular dynamics flexible fitting (MDFF) paradigm, are presented that resolve sub-5 Å cryo-electron microscopy (EM) maps with either single structures or ensembles of such structures. The methods, denoted cascade MDFF and resolution exchange MDFF, sequentially re-refine a search model against a series of maps of progressively higher resolutions, which ends with the original experimental resolution. Application of sequential re-refinement enables MDFF to achieve a radius of convergence of ~25 Å demonstrated with the accurate modeling of β-galactosidase and TRPV1 proteins at 3.2 Å and 3.4 Å resolution, respectively. The MDFF refinements uniquely offer map-model validation and B-factor determination criteria based on the inherent dynamics of the macromolecules studied, captured by means of local root mean square fluctuations. The MDFF tools described are available to researchers through an easy-to-use and cost-effective cloud computing resource on Amazon Web Services.
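A minimal sketch of the cascade idea, assuming numpy/scipy and a stub in place of a real MDFF stage (the helper names are invented; a real run couples the map gradient into an MD force field): the experimental map is blurred to a series of progressively finer effective resolutions, and the model from each stage seeds the next.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fit_to_map(model, density):
    """Stand-in for one MDFF stage: a real stage would relax the structure
    under map-derived forces; here we only compute the gradient field that
    those forces would follow."""
    grads = np.gradient(density)        # map-derived force directions (stub)
    return model

def cascade_refine(model, exp_map, sigmas=(8.0, 4.0, 2.0, 0.0)):
    """Cascade idea: re-refine the model against progressively sharper maps,
    ending with the original experimental map (sigma = 0)."""
    for s in sigmas:
        blurred = gaussian_filter(exp_map, sigma=s) if s > 0 else exp_map
        model = fit_to_map(model, blurred)
    return model

model = np.zeros((10, 3))                            # placeholder coordinates
exp_map = np.random.default_rng(2).random((32, 32, 32))
print(cascade_refine(model, exp_map).shape)
```

Starting from heavily smoothed maps is what buys the large radius of convergence: early stages only have to satisfy low-frequency features, so a badly placed starting model is not trapped by fine detail.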
FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Vatsa, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.
2010-01-01
This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID, while the adaptation was performed using the Computational Fluid Dynamics (CFD) code FUN3D with a feature-based adaptation software tool. GridTool was developed by ViGYAN, Inc., while the last three software suites were developed by NASA Langley Research Center. The feature-based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature-based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature-based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Johnson, J.D.; Blond, R.M.
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.
2016-03-15
We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are treated kinetically whereas electrons are considered as an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and subsequently enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
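A minimal sketch of the splitting step, assuming a refinement ratio of 2 per dimension (the function and layout are illustrative, not the paper's code): each coarse macro-particle becomes ratio³ children with equal sub-weights and unchanged velocities, so the velocity distribution function is preserved without any coalescing or averaging.

```python
import numpy as np

def split_particle(pos, vel, weight, ratio, coarse_cell):
    """Split a coarse macro-particle entering a region refined by `ratio` per
    dimension into ratio**3 children of equal weight.  Velocities are copied
    unchanged, which is what conserves the velocity distribution function."""
    n = ratio ** 3
    # Children sit on a regular sub-grid inside the parent's shape function.
    idx = np.stack(np.meshgrid(*[np.arange(ratio)] * 3, indexing="ij"), -1)
    offsets = (idx.reshape(-1, 3) + 0.5) / ratio - 0.5
    return pos + offsets * coarse_cell, np.tile(vel, (n, 1)), np.full(n, weight / n)

pos, vel = np.array([1.0, 2.0, 3.0]), np.array([400.0, 0.0, -50.0])
cpos, cvel, cw = split_particle(pos, vel, weight=8.0, ratio=2, coarse_cell=1.0)
print(len(cw), cw.sum())        # 8 children, total weight preserved
```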
NASA Astrophysics Data System (ADS)
Kirstetter, G.; Popinet, S.; Fullana, J. M.; Lagrée, P. Y.; Josserand, C.
2015-12-01
The full resolution of the shallow-water equations for modeling flash floods can have a high computational cost, so most flood-forecasting software uses a simplification of this model: 1D approximations, diffusive or kinematic wave approximations, or exotic models using non-physical free parameters. These approximations save considerable computational time, but at an unquantified cost in simulation precision. To drastically reduce the cost of such 2D simulations while quantifying the loss of precision, we propose a 2D shallow-water flow solver built with the open source code Basilisk [1], which uses adaptive refinement on a quadtree grid. This solver uses a well-balanced central-upwind scheme that is second order in time and space and treats the friction and rain terms implicitly in a finite-volume approach. We demonstrate the validity of our simulation on the flood of Tewkesbury (UK) that occurred in July 2007, as shown in Fig. 1. For this case, a systematic study of the impact of the chosen criterion for adaptive refinement is performed, and the criterion with the best ratio of computational time to precision is proposed. Finally, we present the power law giving the computational time with respect to the maximum resolution, and we show that this law for our 2D simulation is close to that of a 1D simulation, thanks to the fractal dimension of the topography. [1] http://basilisk.fr/
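A minimal sketch of one candidate refinement criterion, assuming cell-level water depth and free-surface gradient fields (Basilisk itself uses wavelet-based error estimates; the thresholds and field names here are invented stand-ins): cells exceeding the thresholds move one quadtree level up, quiescent cells move one down.

```python
import numpy as np

def adapt_level(depth, grad_eta, level, max_level, h_min=0.05, g_min=0.01):
    """Refine cells where water depth or free-surface gradient exceeds a
    threshold; coarsen quiescent cells.  Returns the new quadtree levels."""
    active = (depth > h_min) | (np.abs(grad_eta) > g_min)
    up = active & (level < max_level)
    down = ~active & (level > 0)
    return level + up.astype(int) - down.astype(int)

depth = np.array([0.00, 0.02, 0.40, 1.30])
grad_eta = np.array([0.000, 0.005, 0.200, 0.050])
level = np.array([2, 2, 2, 2])
print(adapt_level(depth, grad_eta, level, max_level=5))   # -> [1 1 3 3]
```

The choice of thresholds is exactly the kind of criterion whose time/precision trade-off the abstract sets out to quantify: looser thresholds coarsen dry floodplain aggressively, tighter ones track the wetting front at full resolution.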
Rodriguez, Brian D.; Sweetkind, Donald S.
2015-01-01
The 3-D inversion was generally able to reproduce the gross resistivity structure of the “known” model, but the simulated conductive volcanic composite unit horizons were often too shallow when compared to the “known” model. Additionally, the chosen computation parameters such as station spacing appear to have resulted in computational artifacts that are difficult to interpret but could potentially be removed with further refinements of the 3-D resistivity inversion modeling technique.
3Drefine: an interactive web server for efficient protein structure refinement
Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin
2016-01-01
3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371
Dickinson, Jesse; Hanson, R.T.; Mehl, Steffen W.; Hill, Mary C.
2011-01-01
The computer program described in this report, MODPATH-LGR, is designed to allow simulation of particle tracking in locally refined grids. The locally refined grids are simulated by using MODFLOW-LGR, which is based on MODFLOW-2005, the three-dimensional groundwater-flow model published by the U.S. Geological Survey. The documentation includes brief descriptions of the methods used and detailed descriptions of the required input files and how the output files are typically used. The code for this model is available for downloading from the World Wide Web from a U.S. Geological Survey software repository. The repository is accessible from the U.S. Geological Survey Water Resources Information Web page at http://water.usgs.gov/software/ground_water.html. The performance of the MODPATH-LGR program has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program by using the email address available on the Web site. Updates might occasionally be made to this document and to the MODPATH-LGR program, and users should check the Web site periodically.
Fully implicit adaptive mesh refinement MHD algorithm
NASA Astrophysics Data System (ADS)
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and to employ AMR-aware multilevel techniques (such as fast adaptive composite, FAC, algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations will be presented on a variety of problems.
Numerical Issues for Circulation Control Calculations
NASA Technical Reports Server (NTRS)
Swanson, Roy C., Jr.; Rumsey, Christopher L.
2006-01-01
Steady-state and time-accurate two-dimensional solutions of the compressible Reynolds-averaged Navier-Stokes equations are obtained for flow over the Lockheed circulation control (CC) airfoil and the General Aviation CC (GACC) airfoil. Numerical issues in computing circulation control flows, such as the effects of grid resolution, boundary and initial conditions, and unsteadiness, are addressed. For the Lockheed CC airfoil, computed solutions are compared with detailed experimental data, which include velocity and Reynolds stress profiles. Three turbulence models, having either one or two transport equations, are considered. Solutions are obtained on a sequence of meshes, with mesh refinement primarily concentrated on the airfoil's circular trailing edge. Several effects related to mesh refinement are identified. For example, sufficient mesh resolution can sometimes exclude nonphysical solutions, which can occur in CC airfoil calculations. Sensitivities of the turbulence models to mesh refinement are also discussed. In the case of the GACC airfoil the focus is on the difference between steady-state and time-accurate solutions. A specific objective is to determine if there is self-excited vortex shedding from the jet slot lip.
Algorithm refinement for stochastic partial differential equations: II. Correlated systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.
2005-08-10
We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long-ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.
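A minimal sketch of the flux-matching coupling, with invented numbers and function names (the actual train-model coupling is specific to its momentum random walk): the mass the continuum sends across the interface in a step is re-expressed as whole particles, with the fractional remainder handled stochastically so conservation holds exactly on average.

```python
import numpy as np

rng = np.random.default_rng(4)

def continuum_flux(rho, u, dt, area):
    """Mass crossing the interface from the continuum side in one step."""
    return rho * u * dt * area

def inject_particles(mass_flux, particle_mass):
    """Convert a continuum mass flux into whole particles; the fractional
    remainder is realized with the matching probability so that the mean
    injected mass equals the flux exactly."""
    n_exact = mass_flux / particle_mass
    n = int(n_exact) + (rng.random() < (n_exact - int(n_exact)))
    return n

# One coupling step: continuum-side flux becomes particles; particle mass
# crossing the other way would be deposited back as a continuum flux.
flux = continuum_flux(rho=1.2, u=30.0, dt=1e-4, area=1e-2)
print(inject_particles(flux, particle_mass=1e-6))
```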
Theoretical studies of solar lasers and converters
NASA Technical Reports Server (NTRS)
Heinbockel, John H.
1990-01-01
The research described consisted of developing and refining the continuous flow laser model program including the creation of a working model. The mathematical development of a two pass amplifier for an iodine laser is summarized. A computer program for the amplifier's simulation is included with output from the simulation model.
Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.
2017-01-01
A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise (Registered Trademark) grid generation software are used for the numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing, creating a grid suitable for the wall-function implementation in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. These CFD solutions are used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the far-field noise levels. The agreement of the computed results with the experimental data improves as the grid is refined.
Developing Formal Object-oriented Requirements Specifications: A Model, Tool and Technique.
ERIC Educational Resources Information Center
Jackson, Robert B.; And Others
1995-01-01
Presents a formal object-oriented specification model (OSS) for computer software system development that is supported by a tool that automatically generates a prototype from an object-oriented analysis model (OSA) instance, lets the user examine the prototype, and permits the user to refine the OSA model instance to generate a requirements…
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
Mac-Neice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
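A minimal sketch of the block-tree idea PARAMESH implements, written in Python for brevity (PARAMESH itself is Fortran 90; all names here are illustrative): each block owns a small logically Cartesian mesh, and refining a leaf replaces it with four children in 2-D (eight in 3-D).

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Block:
    """One grid block: a small logically Cartesian mesh plus 0 or 4 children
    (quad-tree in 2-D; an oct-tree in 3-D would carry 8)."""
    level: int
    origin: Tuple[float, float]
    size: float
    nx: int = 8                                   # cells per side in the block
    children: List["Block"] = field(default_factory=list)

    def refine(self):
        h = self.size / 2.0
        x0, y0 = self.origin
        self.children = [Block(self.level + 1, (x0 + i * h, y0 + j * h), h, self.nx)
                         for i in (0, 1) for j in (0, 1)]

def leaves(b):
    """Leaf blocks are the ones that actually hold the solution data."""
    if not b.children:
        yield b
    else:
        for c in b.children:
            yield from leaves(c)

root = Block(level=0, origin=(0.0, 0.0), size=1.0)
root.refine()
root.children[0].refine()                 # deeper resolution where needed
print(sum(1 for _ in leaves(root)))       # 7 leaf blocks
```

Because every block carries the same small logically Cartesian mesh, existing serial kernels written for such a mesh can be reused on each block, which is what makes the advertised low-effort parallelization path plausible.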
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction is given to plasma simulation using computers and to the difficulties encountered on currently available computers. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization can be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.
An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks
Cabessa, Jérémie; Villa, Alessandro E. P.
2014-01-01
We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866
An Instructional Merger: HyperCard and the Integrative Teaching Model.
ERIC Educational Resources Information Center
Massie, Carolyn M.; Volk, Larry G.
Teaching methods have been developed and tested that encourage students to process information and refine their thinking skills. The information processing model is known as the Integrative Teaching Model. By combining the computer technology in the HyperCard application for data display and retrieval, instructional delivery of this teaching model…
Hierarchical Poly Tree Configurations for the Solution of Dynamically Refined Finite Element Models
NASA Technical Reports Server (NTRS)
Gute, G. D.; Padovan, J.
1993-01-01
This paper demonstrates how a multilevel substructuring technique, called the Hierarchical Poly Tree (HPT), can be used to integrate a localized mesh refinement into the original finite element model more efficiently. The optimal HPT configurations for solving isoparametrically square h-, p-, and hp-extensions on single- and multiprocessor computers are derived. In addition, the reduced number of stiffness matrix elements that must be stored when employing this type of solution strategy is quantified. Moreover, the HPT inherently provides localized 'error-trapping' and a logical, efficient means with which to isolate physically anomalous and analytically singular behavior.
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
Adaptive mesh refinement and front-tracking for shear bands in an antiplane shear model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garaizar, F.X.; Trangenstein, J.
1998-09-01
In this paper the authors describe a numerical algorithm for the study of shear-band formation and growth in a two-dimensional antiplane shear of granular materials. The algorithm combines front-tracking techniques and adaptive mesh refinement. Tracking provides a more careful evolution of the band when coupled with special techniques to advance the ends of the shear band in the presence of a loss of hyperbolicity. The adaptive mesh refinement allows the computational effort to be concentrated in important areas of the deformation, such as the shear band and the elastic relief wave. The main challenges are the problems related to shear bands that extend across several grid patches and the effects that a nonhyperbolic growth rate of the shear bands has on the refinement process. They give examples of the success of the algorithm for various levels of refinement.
Implementation of a Parallel Protein Structure Alignment Service on Cloud
Hung, Che-Lun; Lin, Yaw-Ling
2013-01-01
Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform. PMID:23671842
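The map/shuffle/reduce decomposition described above can be mimicked in a few lines of plain Python (a toy stand-in: the actual service runs on Hadoop, and the alignment and refinement functions here are placeholders, not the paper's algorithms):

```python
from collections import defaultdict
from itertools import combinations

structures = {'1abc': 'ACDEFG', '2xyz': 'ACDE', '3pqr': 'ACDEFGH'}

def align(a, b):
    """Placeholder alignment score: negative length difference."""
    return -abs(len(structures[a]) - len(structures[b]))

def refine(score):
    """Placeholder for the paper's refinement step."""
    return score + 1

# Map: emit (pair, raw alignment score) for every pair of structures.
mapped = [((a, b), align(a, b)) for a, b in combinations(structures, 2)]

# Shuffle: group values by key (Hadoop does this between map and reduce).
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: refine each group's result and keep the best score per pair.
results = {k: max(refine(v) for v in vs) for k, vs in groups.items()}
print(results)
```

Because each pair is scored independently in the map phase, throughput scales with the number of workers, which is the proportionality the authors report.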
Yield asymmetry design of magnesium alloys by integrated computational materials engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dongsheng; Joshi, Vineet; Lavender, Curt
2013-11-01
Deformation asymmetry of magnesium alloys is an important factor in machine design in the automobile industry. Represented by the ratio of compressive yield stress (CYS) against tensile yield stress (TYS), deformation asymmetry is strongly related to texture and grain size. A polycrystalline viscoplasticity model, the modified intermediate Φ-model, is used to predict the deformation behavior of magnesium alloys with different grain sizes. Validated with experimental results, integrated computational materials engineering is applied to find the route to achieving the desired asymmetry via thermomechanical processing. For example, CYS/TYS in rolled texture is smaller than 1 under different loading directions. In other textures, such as extruded texture, CYS/TYS is large along the normal direction. Starting from rolled texture, asymmetry will increase to close to 1 along the rolling direction after being compressed to a strain of 0.2. Our modified Φ-model also shows that grain refinement increases CYS/TYS. Along with texture control, grain refinement can also optimize the yield asymmetry. After the grain size decreases to a critical value, CYS/TYS reaches 1 because CYS increases much faster than TYS. By tailoring the microstructure using texture control and grain refinement, it is achievable to optimize yield asymmetry in wrought magnesium alloys.
On the temperature dependence of H-U{sub iso} in the riding hydrogen model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lübben, Jens; Volkmann, Christian; Grabowsky, Simon
The temperature dependence of hydrogen U{sub iso} and parent U{sub eq} in the riding hydrogen model is investigated by neutron diffraction, aspherical-atom refinements and QM/MM and MO/MO cluster calculations. Fixed multipliers of 1.2 or 1.5 appear to be underestimates, especially at temperatures below 100 K. The temperature dependence of H-U{sub iso} in N-acetyl-l-4-hydroxyproline monohydrate is investigated. Imposing a constant temperature-independent multiplier of 1.2 or 1.5 for the riding hydrogen model is found to be inaccurate, and severely underestimates H-U{sub iso} below 100 K. Neutron diffraction data at temperatures of 9, 150, 200 and 250 K provide benchmark results for this study. X-ray diffraction data to high resolution, collected at temperatures of 9, 30, 50, 75, 100, 150, 200 and 250 K (synchrotron and home source), reproduce neutron results only when evaluated by aspherical-atom refinement models, since these take into account bonding and lone-pair electron density; both invariom and Hirshfeld-atom refinement models enable a more precise determination of the magnitude of H-atom displacements than independent-atom model refinements. Experimental efforts are complemented by computing displacement parameters following the TLS+ONIOM approach. A satisfactory agreement between all approaches is found.
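For reference, the riding model ties each hydrogen displacement parameter to its parent atom through a multiplier (the conventional fixed values are shown below; the study's point is that a constant k is inadequate and should instead grow as T falls below roughly 100 K):

```latex
U_{\mathrm{iso}}(\mathrm{H}) \;=\; k \, U_{\mathrm{eq}}(\text{parent}),
\qquad
k \;=\;
\begin{cases}
1.2 & \text{most X--H},\\[2pt]
1.5 & \text{e.g.\ methyl and hydroxyl H},
\end{cases}
```

i.e. the abstract argues for replacing the constant k with a temperature-dependent k(T).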
A grid-enabled web service for low-resolution crystal structure refinement.
O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr
2012-03-01
Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.
Universal Blind Quantum Computation
NASA Astrophysics Data System (ADS)
Fitzsimons, Joseph; Kashefi, Elham
2012-02-01
Blind Quantum Computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's inputs, outputs and computation remain private. Recently we proposed a universal unconditionally secure BQC scheme, based on the conceptual framework of the measurement-based quantum computing model, where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Here we present a refinement of the scheme which vastly expands the class of quantum circuits which can be directly implemented as a blind computation, by introducing a new class of resource states which we term dotted-complete graph states and expanding the set of single qubit states the client is required to prepare. These two modifications significantly simplify the overall protocol and remove the previously present restriction that only nearest-neighbor circuits could be implemented as blind computations directly. As an added benefit, the refined protocol admits a substantially more intuitive and simplified verification mechanism, allowing the correctness of a blind computation to be verified with arbitrarily small probability of error.
Approach for extrapolating in vitro metabolism data to refine bioconcentration factor estimates.
Cowan-Ellsberry, Christina E; Dyer, Scott D; Erhardt, Susan; Bernhard, Mary Jo; Roe, Amy L; Dowty, Martin E; Weisbrod, Annie V
2008-02-01
National and international chemical management programs are assessing thousands of chemicals for their persistent, bioaccumulative and environmentally toxic properties; however, data for evaluating the bioaccumulation potential for fish are limited. Computer-based models that account for the uptake and elimination processes that contribute to bioaccumulation may help to meet the need for reliable estimates. One critical elimination process for chemicals is metabolic transformation. It has been suggested that in vitro metabolic transformation tests using fish liver hepatocytes or S9 fractions can provide rapid and cost-effective measurements of fish metabolic potential, which could be used to refine bioconcentration factor (BCF) computer model estimates. Therefore, recent activity has focused on developing in vitro methods to measure metabolic transformation in cellular and subcellular fish liver fractions. A method to extrapolate in vitro test data to whole-body metabolic transformation rates is presented that could be used to refine BCF computer model estimates. This extrapolation approach is based on concepts used to determine the fate and distribution of drugs within the human body, which have successfully supported the development of new pharmaceuticals for years. In addition, this approach has already been applied in physiologically based toxicokinetic models for fish. The validity of the in vitro to in vivo extrapolation is illustrated using the rate of loss of parent chemical measured in two independent in vitro test systems: (1) a subcellular enzymatic test using the trout liver S9 fraction, and (2) primary hepatocytes isolated from the common carp. The test chemicals evaluated have high-quality in vivo BCF values and a range of log K(ow) from 3.5 to 6.7. The results show very good agreement between the measured BCF and estimated BCF values when the extrapolated whole-body metabolism rates are included, thus suggesting that in vitro biotransformation data could effectively be used to reduce in vivo BCF testing and refine BCF model estimates. However, additional fish physiological data for parameterization and validation for a wider range of chemicals are needed.
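A minimal sketch of the extrapolation logic, assuming a generic one-compartment rate-constant BCF model (the standard form; the paper's exact scaling factors are not reproduced, and every number below is invented for illustration):

```python
# One-compartment fish bioconcentration sketch:
#   BCF = k_uptake / (k_elimination + k_growth + k_metabolism)
# The in vitro -> in vivo step scales a measured S9/hepatocyte depletion rate
# to a whole-body metabolic rate constant via liver mass and binding factors.

def kmet_from_invitro(cl_invitro, liver_frac=0.012, fu=1.0):
    """Whole-body metabolic rate constant (1/d) from in vitro clearance,
    assumed here to be pre-scaled to per-gram-of-liver-per-day units."""
    return cl_invitro * liver_frac * fu

def bcf(k1, k2, k_growth, k_met):
    """Steady-state bioconcentration factor from first-order rate constants."""
    return k1 / (k2 + k_growth + k_met)

k_met = kmet_from_invitro(cl_invitro=25.0)           # hypothetical depletion rate
print("refined BCF:      ", bcf(500.0, 0.05, 0.01, k_met))
print("no-metabolism BCF:", bcf(500.0, 0.05, 0.01, 0.0))
```

Even a modest metabolic rate constant lowers the predicted BCF severalfold, which is why ignoring biotransformation tends to overpredict bioaccumulation.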
Crowd computing: using competitive dynamics to develop and refine highly predictive models.
Bentzien, Jörg; Muegge, Ingo; Hamner, Ben; Thompson, David C
2013-05-01
A recent application of a crowd computing platform to develop highly predictive in silico models for use in the drug discovery process is described. The platform, Kaggle™, exploits a competitive dynamic that results in model optimization as the competition unfolds. Here, this dynamic is described in detail and compared with more-conventional modeling strategies. The complete and full structure of the underlying dataset is disclosed and some thoughts as to the broader utility of such 'gamification' approaches to the field of modeling are offered. Copyright © 2013 Elsevier Ltd. All rights reserved.
KoBaMIN: a knowledge-based minimization web server for protein structure refinement.
Rodrigues, João P G L M; Levitt, Michael; Chopra, Gaurav
2012-07-01
The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7) and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin.
Collection of X-ray diffraction data from macromolecular crystals
Dauter, Zbigniew
2017-01-01
Diffraction data acquisition is the final experimental stage of the crystal structure analysis. All subsequent steps involve mainly computer calculations. Optimally measured and accurate data make the structure solution and refinement easier and lead to more faithful interpretation of the final models. Here, the important factors in data collection from macromolecular crystals are discussed and strategies appropriate for various applications, such as molecular replacement, anomalous phasing, atomic-resolution refinement etc., are presented. Criteria useful for judging the diffraction data quality are also discussed. PMID:28573573
Topological quantum computation of the Dold-Thom functor
NASA Astrophysics Data System (ADS)
Ospina, Juan
2014-05-01
A possible topological quantum computation of the Dold-Thom functor is presented. The method that will be used is the following: a) certain 1+1-topological quantum field theories valued in symmetric bimonoidal categories are converted into stable homotopical data, using a machinery recently introduced by Elmendorf and Mandell; b) we exploit, in this framework, two recent results (independent of each other) on refinements of Khovanov homology: our refinement into a module over the connective k-theory spectrum and a stronger result by Lipshitz and Sarkar refining Khovanov homology into a stable homotopy type; c) starting from the Khovanov homotopy the Dold-Thom functor is constructed; d) the full construction is formulated as a topological quantum algorithm. It is conjectured that the Jones polynomial can be described as the analytical index of a certain Dirac operator defined in the context of the Khovanov homotopy using the Dold-Thom functor. As a line for future research, it is interesting to study the corresponding supersymmetric model for which the Khovanov-Dirac operator plays the role of a supercharge.
Martinez, N E; Johnson, T E; Pinder, J E
2016-01-01
This study compares three anatomical phantoms for rainbow trout (Oncorhynchus mykiss) for the purpose of estimating organ radiation dose and dose rates from molybdenum-99 ((99)Mo) uptake in the liver and GI tract. Model comparison and refinement is important to the process of determining accurate doses and dose rates to the whole body and the various organs. Accurate and consistent dosimetry is crucial to the determination of appropriate dose-effect relationships for use in environmental risk assessment. The computational phantoms considered are (1) a geometrically defined model employing anatomically relevant organ size and location, (2) voxel reconstruction of internal anatomy obtained from CT imaging, and (3) a new model utilizing NURBS surfaces to refine the model in (2). Dose Conversion Factors (DCFs) for whole body as well as selected organs of O. mykiss were computed using Monte Carlo modeling and combined with empirical models for predicting activity concentration to estimate dose rates and ultimately determine cumulative radiation dose (μGy) to selected organs after several half-lives of (99)Mo. The computational models provided similar results, especially for organs that were both the source and target of radiation (less than 30% difference between all models). Values in the empirical model as well as the 14 day cumulative organ doses determined from (99)Mo uptake are compared to similar models developed previously for (131)I. Finally, consideration is given to treating the GI tract as a solid organ compared to partitioning it into gut contents and GI wall, which resulted in an order of magnitude difference in estimated dose for most organs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Improving virtual screening of G protein-coupled receptors via ligand-directed modeling
Simms, John; Christopoulos, Arthur; Wootten, Denise
2017-01-01
G protein-coupled receptors (GPCRs) play crucial roles in cell physiology and pathophysiology. There is increasing interest in using structural information for virtual screening (VS) of libraries and for structure-based drug design to identify novel agonist or antagonist leads. However, the sparse availability of experimentally determined GPCR/ligand complex structures with diverse ligands impedes the application of structure-based drug design (SBDD) programs directed at identifying new molecules with a select pharmacology. In this study, we apply ligand-directed modeling (LDM) to available GPCR X-ray structures to improve VS performance and selectivity towards molecules of a specific pharmacological profile. The described method refines a GPCR binding pocket conformation using a single known ligand for that GPCR. The LDM method is a computationally efficient, iterative workflow consisting of protein sampling and ligand docking. We developed an extensive benchmark comparing LDM-refined binding pockets to GPCR X-ray crystal structures across seven different GPCRs bound to a range of ligands of different chemotypes and pharmacological profiles. LDM-refined models showed improvement in VS performance over the original X-ray crystal structures in 21 out of 24 cases. In all cases, the LDM-refined models had superior performance in enriching for the chemotype of the refinement ligand. This likely contributes to the LDM success in all cases of inhibitor-bound to agonist-bound binding pocket refinement, a key task for GPCR SBDD programs. Indeed, agonist ligands are required for a plethora of GPCRs for therapeutic intervention; however, GPCR X-ray structures are mostly restricted to their inactive inhibitor-bound state. PMID:29131821
Computational Psychiatry and the Challenge of Schizophrenia.
Krystal, John H; Murray, John D; Chekroud, Adam M; Corlett, Philip R; Yang, Genevieve; Wang, Xiao-Jing; Anticevic, Alan
2017-05-01
Schizophrenia research is plagued by enormous challenges in integrating and analyzing large datasets and difficulties developing formal theories related to the etiology, pathophysiology, and treatment of this disorder. Computational psychiatry provides a path to enhance analyses of these large and complex datasets and to promote the development and refinement of formal models for features of this disorder. This presentation introduces the reader to the notion of computational psychiatry and describes discovery-oriented and theory-driven applications to schizophrenia involving machine learning, reinforcement learning theory, and biophysically-informed neural circuit models. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center 2017.
Microcomputer pollution model for civilian airports and Air Force bases. Model description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segal, H.M.; Hamilton, P.L.
1988-08-01
This is one of three reports describing the Emissions and Dispersion Modeling System (EDMS). EDMS is a complex source emissions/dispersion model for use at civilian airports and Air Force bases. It operates in both a refined and a screening mode and is programmed for an IBM-XT (or compatible) computer. This report--MODEL DESCRIPTION--provides the technical description of the model. It first identifies the key design features of both the emissions (EMISSMOD) and dispersion (GIMM) portions of EDMS. It then describes the type of meteorological information the dispersion model can accept and identifies the manner in which it preprocesses National Climatic Center (NCC) data prior to a refined-model run. The report presents the results of running EDMS on a number of different microcomputers and compares EDMS results with those of comparable models. The appendices elaborate on the information noted above and list the source code.
ERIC Educational Resources Information Center
Schure, Alexander
A computer-based system model for the monitoring and management of the instructional process was conceived, developed and refined through the techniques of systems analysis. This report describes the various aspects and components of this project in a series of independent and self-contained units. The first unit provides an overview of the entire…
Computational cost of two alternative formulations of Cahn-Hilliard equations
NASA Astrophysics Data System (ADS)
Paszyński, Maciej; Gurgul, Grzegorz; Łoś, Marcin; Szeliga, Danuta
2018-05-01
In this paper we propose two formulations of the Cahn-Hilliard equations, which have several applications in cancer growth modeling and in material science phase-field simulations. The first formulation uses one C4 partial differential equation (PDE); the second uses two C2 PDEs. Finally, we compare the computational costs of direct solvers for both formulations, using the refined isogeometric analysis (rIGA) approach.
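In standard Cahn-Hilliard notation (generic double-well potential f and mobility M; this is the textbook form of the split, shown for orientation rather than as the authors' exact weak formulation), the two formulations are:

```latex
\frac{\partial u}{\partial t}
  = \nabla \cdot \Big( M \,\nabla \big( f'(u) - \gamma \,\nabla^{2} u \big) \Big)
\qquad \text{vs.} \qquad
\frac{\partial u}{\partial t} = \nabla \cdot \big( M \,\nabla \mu \big),
\quad \mu = f'(u) - \gamma \,\nabla^{2} u .
```

The single fourth-order form demands higher interelement continuity of the basis (natural in isogeometric analysis), while the split form doubles the unknowns per node but lowers the continuity requirement; the paper quantifies how these trade-offs play out in direct-solver cost under rIGA.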
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Bennett, Robert M.
1992-01-01
The Computational Aeroelasticity Program-Transonic Small Disturbance (CAP-TSD) code, developed at LaRC, is applied to the active flexible wing wind-tunnel model for prediction of transonic aeroelastic behavior. A semi-span computational model is used for evaluation of symmetric motions, and a full-span model is used for evaluation of antisymmetric motions. Static aeroelastic solutions using CAP-TSD are computed. Dynamic deformations are presented as flutter boundaries in terms of Mach number and dynamic pressure. Flutter boundaries that take into account modal refinements, vorticity and entropy corrections, antisymmetric motion, and sensitivity to the modeling of the wing tip ballast stores are also presented with experimental flutter results.
Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network
2012-01-01
Background Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3. Conclusions Yeast 5 expands and refines the computational reconstruction of yeast metabolism and improves the predictive accuracy of a stoichiometrically constrained yeast metabolic model. It differs from previous reconstructions and models by emphasizing the distinction between the yeast metabolic reconstruction and the stoichiometrically constrained model, and makes both available as Additional file 4 and Additional file 5 and at http://yeast.sf.net/ as separate systems biology markup language (SBML) files. Through this separation, we intend to make the modeling process more accessible, explicit, transparent, and reproducible. PMID:22663945
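For readers unfamiliar with flux balance analysis, the linear program at the heart of such scripts fits in a few lines. Below is a toy three-metabolite network (invented, unrelated to Yeast 5) solved with SciPy rather than the MATLAB tooling the paper ships:

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA: maximize the growth flux v3 subject to steady state S v = 0 and
# flux bounds. Rows of S: metabolites A, B, C; columns: reactions v0..v3.
S = np.array([
    [ 1, -1,  0,  0],   # A: produced by uptake v0, consumed by v1
    [ 0,  1, -1,  0],   # B: produced by v1, consumed by v2
    [ 0,  0,  1, -1],   # C: produced by v2, consumed by growth v3
])
bounds = [(0, 10), (0, 8), (0, 8), (0, None)]   # e.g. uptake capped at 10

res = linprog(c=[0, 0, 0, -1],                  # minimize -v3 == maximize v3
              A_eq=S, b_eq=np.zeros(3), bounds=bounds, method='highs')
print(res.x)   # optimal flux distribution; v3 is limited to 8 by the v1/v2 caps
```

The genome-scale case is the same linear program with thousands of reactions, and gene-reaction annotations are what let knockouts be simulated by forcing the corresponding fluxes to zero.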
NASA Astrophysics Data System (ADS)
Penner, Joyce E.; Andronova, Natalia; Oehmke, Robert C.; Brown, Jonathan; Stout, Quentin F.; Jablonowski, Christiane; van Leer, Bram; Powell, Kenneth G.; Herzog, Michael
2007-07-01
One of the most important advances needed in global climate models is the development of atmospheric General Circulation Models (GCMs) that can reliably treat convection. Such GCMs require high resolution in local convectively active regions, both in the horizontal and vertical directions. During previous research we have developed an Adaptive Mesh Refinement (AMR) dynamical core that can adapt its grid resolution horizontally. Our approach utilizes a finite volume numerical representation of the partial differential equations with floating Lagrangian vertical coordinates and requires resolving dynamical processes on small spatial scales. For the latter it uses a newly developed general-purpose library, which facilitates 3D block-structured AMR on spherical grids. The library manages neighbor information as the blocks adapt, and handles the parallel communication and load balancing, freeing the user to concentrate on the scientific modeling aspects of their code. In particular, this library defines and manages adaptive blocks on the sphere, provides user interfaces for interpolation routines and supports the communication and load-balancing aspects for parallel applications. We have successfully tested the library in a 2-D (longitude-latitude) implementation. During the past year, we have extended the library to treat adaptive mesh refinement in the vertical direction. Preliminary results are discussed. This research project is characterized by an interdisciplinary approach involving atmospheric science, computer science and mathematical/numerical aspects. The work is done in close collaboration between the Atmospheric Science, Computer Science and Aerospace Engineering Departments at the University of Michigan and NOAA GFDL.
Complex optimization for big computational and experimental neutron datasets
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...
2016-11-07
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
Four-dimensional world-wide atmospheric models (surface to 25 km altitude)
NASA Technical Reports Server (NTRS)
Spiegler, D. B.; Fowler, M. G.
1972-01-01
Four-dimensional atmospheric models previously developed for use as input to atmospheric attenuation models are evaluated to determine where refinements are warranted. The models are refined where appropriate. A computerized technique is developed that has the unique capability of extracting mean monthly and daily variance profiles of moisture, temperature, density and pressure at 1 km intervals to the height of 25 km for any location on the globe. This capability could be very useful to planners of remote sensing of earth resources missions in that the profiles may be used as input to the attenuation models that predict the expected degradation of the sensor data. Recommendations are given for procedures to use the four-dimensional models in computer mission simulations and for the approach to combining the information provided by the 4-D models with that given by the global models.
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.
1979-01-01
The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and compute thermal and electric energy consumption and cost. This article reports on the new additional algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce the processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Divito, Ben L.; Holloway, C. Michael
1994-01-01
In this paper the design and formal verification of the lower levels of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, are presented. The RCP uses NMR-style redundancy to mask faults and internal majority voting to flush the effects of transient faults. Two new layers of the RCP hierarchy are introduced: the Minimal Voting refinement (DA_minv) of the Distributed Asynchronous (DA) model and the Local Executive (LE) Model. Both the DA_minv model and the LE model are specified formally and have been verified using the Ehdm verification system. All specifications and proofs are available electronically via the Internet using anonymous FTP or World Wide Web (WWW) access.
Olson, Mark A; Lee, Michael S
2014-01-01
A central problem of computational structural biology is the refinement of modeled protein structures taken from either comparative modeling or knowledge-based methods. Simulations are commonly used to achieve higher resolution of the structures at the all-atom level, yet methodologies that consistently yield accurate results remain elusive. In this work, we provide an assessment of an adaptive temperature-based replica exchange simulation method where the temperature clients dynamically walk in temperature space to enrich their population and exchange rate near steep energetic barriers. This approach is compared to earlier work of applying the conventional method of static temperature clients to refine a dataset of conformational decoys. Our results show that, while an adaptive method has many theoretical advantages over a static distribution of client temperatures, only limited improvement was gained from this strategy in excursions of the downhill refinement regime leading to an increase in the fraction of native contacts. To illustrate the sampling differences between the two simulation methods, energy landscapes are presented along with their temperature client profiles.
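For orientation, the exchange step both variants share is the standard parallel-tempering Metropolis criterion between neighboring temperatures (textbook form; the adaptive temperature-walking logic assessed in the paper is not shown):

```python
import math
import random

def try_swap(E_i, E_j, T_i, T_j, kB=0.0019872):
    """Standard replica-exchange swap test; kB in kcal/(mol K).

    Accept with probability min(1, exp(-delta)), where
    delta = (1/(kB*T_i) - 1/(kB*T_j)) * (E_j - E_i).
    """
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_j - E_i)
    return delta <= 0.0 or random.random() < math.exp(-delta)

# Example: attempted swap between a cold and a slightly hotter replica.
print(try_swap(E_i=-120.0, E_j=-118.5, T_i=300.0, T_j=320.0))
```

Adaptive schemes keep this acceptance rule but move the client temperatures themselves so that exchanges concentrate where the energy landscape has steep barriers.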
SFESA: a web server for pairwise alignment refinement by secondary structure shifts.
Tong, Jing; Pei, Jimin; Grishin, Nick V
2015-09-03
Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)
1998-01-01
Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of the grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Grid generation strategies for modeling control surface deflections and material mapping are also addressed.
Solving the scalability issue in quantum-based refinement: Q|R#1.
Zheng, Min; Moriarty, Nigel W; Xu, Yanting; Reimers, Jeffrey R; Afonine, Pavel V; Waller, Mark P
2017-12-01
Accurately refining biomacromolecules using a quantum-chemical method is challenging because the cost of a quantum-chemical calculation scales approximately as n^m, where n is the number of atoms and m (≥3) is based on the quantum method of choice. This fundamental problem means that quantum-chemical calculations become intractable when the size of the system requires more computational resources than are available. In the development of the software package called Q|R, this issue is referred to as Q|R#1. A divide-and-conquer approach has been developed that fragments the atomic model into small manageable pieces in order to solve Q|R#1. Firstly, the atomic model of a crystal structure is analyzed to detect noncovalent interactions between residues, and the results of the analysis are represented as an interaction graph. Secondly, a graph-clustering algorithm is used to partition the interaction graph into a set of clusters in such a way as to minimize disruption to the noncovalent interaction network. Thirdly, the environment surrounding each individual cluster is analyzed and any residue that is interacting with a particular cluster is assigned to the buffer region of that particular cluster. A fragment is defined as a cluster plus its buffer region. The gradients for all atoms from each of the fragments are computed, and only the gradients from each cluster are combined to create the total gradients. A quantum-based refinement is carried out using the total gradients as chemical restraints. In order to validate this interaction graph-based fragmentation approach in Q|R, the entire atomic model of an amyloid cross-β spine crystal structure (PDB entry 2oNA) was refined.
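The bookkeeping of steps one to three can be sketched with an off-the-shelf graph library. This is an illustration of the clustering-plus-buffer idea on an invented interaction graph, not the Q|R code; modularity-based clustering stands in here for whatever graph-clustering algorithm Q|R actually uses:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Invented residue interaction graph: nodes = residues, edges = noncovalent
# contacts detected in step one.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (2, 5), (6, 7)])

# Step two: partition the graph while minimizing cut interactions.
clusters = [set(c) for c in greedy_modularity_communities(G)]

# Step three: the buffer of a cluster is every outside residue touching it.
for cluster in clusters:
    buffer = {n for c in cluster for n in G.neighbors(c)} - cluster
    print("cluster:", sorted(cluster), " buffer:", sorted(buffer))

# In the scheme described above, gradients are computed for cluster + buffer,
# but only the cluster atoms' gradients are kept in the total gradient.
```

The buffer residues exist solely to give each fragment a chemically realistic environment; discarding their gradients is what keeps the assembled total gradient consistent.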
ERIC Educational Resources Information Center
Siebler, Frank; Sabelus, Saskia; Bohner, Gerd
2008-01-01
A refined computer paradigm for assessing sexual harassment is presented, validated, and used for testing substantive hypotheses. Male participants were given an opportunity to send sexist jokes to a computer-simulated female chat partner. In Study 1 (N = 44), the harassment measure (number of sexist jokes sent) correlated positively with…
Towards feasible and effective predictive wavefront control for adaptive optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poyneer, L A; Veran, J
We have recently proposed Predictive Fourier Control, a computationally efficient and adaptive algorithm for predictive wavefront control that assumes frozen flow turbulence. We summarize refinements to the state-space model that allow operation with arbitrary computational delays and reduce the computational cost of solving for new control. We present initial atmospheric characterization using observations with Gemini North's Altair AO system. These observations, taken over 1 year, indicate that frozen flow exists, contains substantial power, and is strongly detected 94% of the time.
Progressive fracture of fiber composites
NASA Technical Reports Server (NTRS)
Irvin, T. B.; Ginty, C. A.
1983-01-01
Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized including the Real Time Ultrasonic C Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, CODSTRAN computer code, and scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.
NASA Astrophysics Data System (ADS)
Shin, H. H.; Zhao, M.; Ming, Y.; Chen, X.; Lin, S. J.
2017-12-01
Surface layer (SL) parameters in atmospheric models - such as 2-m air temperature (T2), 10-m wind speed (U10), and surface turbulent fluxes - are computed by applying the Monin-Obukhov Similarity Theory (MOST) to the lowest model level height (LMH) in the models. The underlying assumption is that the LMH lies within the surface layer height (SLH), but most AGCMs rarely meet this condition in stable boundary layers (SBLs) over land. To assess the errors in modeled SL parameters caused by this, offline computations of the MOST are performed with different LMHs from 1 to 100 m, for an idealized SBL case with prescribed surface parameters (surface temperature, roughness length and Obukhov length) and vertical profiles of temperature and winds. The results show that when the LMH is higher than the SLH, T2 and U10 are underestimated by O(1 K) and O(1 m/s), respectively, and the biases increase as the LMH increases. Based on this, a refined vertical resolution with an additional layer in the SL is applied to the GFDL AGCM, and it reduces the systematic cold biases in T2 and the systematic underestimation of U10.
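A hedged sketch of the offline experiment's mechanics (invented surface parameters; the common stable-layer form psi_m = -5 z/L is assumed, and the "true" profile above the SL is a simple stand-in rather than the paper's prescribed profiles):

```python
import numpy as np

kappa = 0.4   # von Karman constant

def u_most(z, ustar, z0, L):
    """Stable-case MOST wind profile with psi_m = -5 z/L."""
    return (ustar / kappa) * (np.log(z / z0) + 5.0 * (z - z0) / L)

ustar, z0, L, slh = 0.25, 0.1, 30.0, 20.0   # made-up SBL surface parameters

def u_true(z):
    """'True' wind: MOST inside the surface layer, weaker shear above it."""
    return u_most(min(z, slh), ustar, z0, L) + 0.01 * max(z - slh, 0.0)

for lmh in (10.0, 50.0, 100.0):
    # Bulk SL schemes infer u* from the LMH wind and evaluate U10 with MOST;
    # with fixed z0 and L this reduces to scaling by the profile ratio.
    ratio = u_most(10.0, ustar, z0, L) / u_most(lmh, ustar, z0, L)
    print(f"LMH={lmh:5.1f} m  diagnosed U10={u_true(lmh) * ratio:4.2f} m/s  "
          f"true U10={u_true(10.0):4.2f} m/s")
```

Because the real shear above the SL is weaker than MOST assumes, the diagnosed U10 falls increasingly short of the true value as the LMH rises above the SLH, reproducing the sign and growth of the bias reported above.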
NASA Astrophysics Data System (ADS)
Lewis, Bryan; Cimbala, John; Wouden, Alex
2011-11-01
Turbulence models are generally developed to study common academic geometries, such as flat plates and channels. Creating quality computational grids for such geometries is trivial, and allows stringent requirements to be met for boundary layer grid refinement. However, engineering applications, such as flow through hydroturbines, require the analysis of complex, highly curved geometries. To produce body-fitted grids for such geometries, the mesh quality requirements must be relaxed. Relaxing these requirements, along with the complexity of rotating flows, forces turbulence models to be employed beyond their developed scope. This study explores the solution sensitivity to boundary layer grid quality for various turbulence models and boundary conditions currently implemented in OpenFOAM. The following models are presented: k-omega, k-omega SST, k-epsilon, realizable k-epsilon, and RNG k-epsilon. Standard wall functions, adaptive wall functions, and sub-grid integration are compared using various grid refinements. The chosen geometry is the GAMM Francis Turbine because experimental data and comparison computational results are available for this turbine. This research was supported by a grant from the DoE and a National Defense Science and Engineering Graduate Fellowship.
A Study of Upgraded Phenolic Curing for RSRM Nozzle Rings
NASA Technical Reports Server (NTRS)
Smartt, Ziba
2000-01-01
A thermochemical cure model for predicting temperature and degree of cure profiles in curing phenolic parts was developed, validated and refined over several years. The model supports optimization of cure cycles and allows input of properties based upon the types of material and the process by which these materials are used to make nozzle components. The model has been refined to use sophisticated computer graphics to demonstrate the changes in temperature and degree of cure during the curing process. The effort discussed in this paper is the conversion from an outdated solid-modeling input program and SINDA analysis code to an integrated solid modeling and analysis package (I-DEAS solid model and TMG). Also discussed will be the incorporation of updated material properties obtained during full scale curing tests into the cure models and the results for all the Reusable Solid Rocket Motor (RSRM) nozzle rings.
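The thermochemical core of such a model is typically an Arrhenius cure-kinetics ODE driven by the temperature history. A generic sketch with invented constants (not the calibrated RSRM phenolic properties):

```python
import numpy as np
from scipy.integrate import solve_ivp

# nth-order Arrhenius cure kinetics, illustrative constants only:
#   d(alpha)/dt = A * exp(-E / (R * T(t))) * (1 - alpha)^n
A, E, R, n = 1.0e5, 6.0e4, 8.314, 1.5      # 1/s, J/mol, J/(mol K), order

def T_cycle(t):
    """Prescribed cure cycle: ramp 300 K -> 430 K over 2 h, then hold."""
    return 300.0 + 130.0 * min(t / 7200.0, 1.0)

def rate(t, alpha):
    return [A * np.exp(-E / (R * T_cycle(t))) * max(1.0 - alpha[0], 0.0) ** n]

sol = solve_ivp(rate, (0.0, 6 * 3600.0), [0.0], max_step=60.0)
print(f"degree of cure after 6 h: {sol.y[0, -1]:.3f}")
```

In the full model this kinetics equation is coupled to the heat equation through the exotherm, so temperature and degree of cure must be marched together across the part's thickness.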
The Amber Biomolecular Simulation Programs
CASE, DAVID A.; CHEATHAM, THOMAS E.; DARDEN, TOM; GOHLKE, HOLGER; LUO, RAY; MERZ, KENNETH M.; ONUFRIEV, ALEXEY; SIMMERLING, CARLOS; WANG, BING; WOODS, ROBERT J.
2006-01-01
We describe the development, current features, and some directions for future development of the Amber package of computer programs. This package evolved from a program that was constructed in the late 1970s to do Assisted Model Building with Energy Refinement, and now contains a group of programs embodying a number of powerful tools of modern computational chemistry, focused on molecular dynamics and free energy calculations of proteins, nucleic acids, and carbohydrates. PMID:16200636
Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics
NASA Astrophysics Data System (ADS)
Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.
2006-06-01
Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.
A Computational Model of Human Table Tennis for Robot Application
NASA Astrophysics Data System (ADS)
Mülling, Katharina; Peters, Jan
Table tennis is a difficult motor skill which requires all basic components of a general motor skill learning system. In order to get a step closer to such a generic approach to the automatic acquisition and refinement of table tennis, we study table tennis from a human motor control point of view. We make use of the basic models of discrete human movement phases, virtual hitting points, and the operational timing hypothesis. Using these components, we create a computational model which is aimed at reproducing human-like behavior. We verify the functionality of this model in a physically realistic simulation of a Barrett WAM.
Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Leng, W.; Zhong, S.
2008-12-01
In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement the AMR techniques into 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as the entrainment problems [Sleep, 1988].
Global/local stress analysis of composite panels
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Knight, Norman F., Jr.
1989-01-01
A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independent of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.
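A minimal sketch of the global/local hand-off, using a generic bicubic spline from SciPy in place of the paper's plate-bending spline functions (the grid, displacement field, and patch location are all invented):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Coarse global grid and a smooth transverse displacement field w(x, y).
xg = np.linspace(0.0, 1.0, 9)
yg = np.linspace(0.0, 1.0, 9)
W = np.sin(np.pi * xg)[:, None] * np.sin(np.pi * yg)[None, :]

spline = RectBivariateSpline(xg, yg, W)     # bicubic fit of the global field

# Boundary nodes of a refined local patch covering [0.4, 0.6] x [0.4, 0.6].
t = np.linspace(0.4, 0.6, 21)
edge = np.concatenate([np.c_[t, np.full_like(t, 0.4)],
                       np.c_[t, np.full_like(t, 0.6)],
                       np.c_[np.full_like(t, 0.4), t],
                       np.c_[np.full_like(t, 0.6), t]])

w_bc = spline.ev(edge[:, 0], edge[:, 1])    # displacement BCs for local model
dwdx = spline.ev(edge[:, 0], edge[:, 1], dx=1)   # slope/rotation-type BCs
print(w_bc.max(), dwdx.max())
```

The local model is then solved on its own refined mesh with these interpolated displacements (and rotations, from the spline derivatives) applied as boundary conditions, with no need for the global and local meshes to share nodes.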
Global/local stress analysis of composite structures. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
1989-01-01
A method for performing a global/local stress analysis is described and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independent of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.
Side impact test and analyses of a DOT-111 tank car : final report.
DOT National Transportation Integrated Search
2015-10-01
Transportation Technology Center, Inc. conducted a side impact test on a DOT-111 tank car to evaluate the performance of the tank car under dynamic impact conditions and to provide data for the verification and refinement of a computational model. ...
Design, processing and testing of LSI arrays, hybrid microelectronics task
NASA Technical Reports Server (NTRS)
Himmel, R. P.; Stuhlbarg, S. M.; Ravetti, R. G.; Zulueta, P. J.; Rothrock, C. W.
1979-01-01
Mathematical cost models previously developed for hybrid microelectronic subsystems were refined and expanded. Rework terms related to substrate fabrication, nonrecurring developmental and manufacturing operations, and prototype production are included. Sample computer programs were written to demonstrate hybrid microelectronic applications of these cost models. Computer programs were generated to calculate and analyze values for the total microelectronics costs. Large scale integrated (LSI) chips utilizing tape chip carrier technology were studied. The feasibility of interconnecting arrays of LSI chips utilizing tape chip carrier and semiautomatic wire bonding technology was demonstrated.
NASA Astrophysics Data System (ADS)
Croft, William
2016-03-01
Arbib's computational comparative neuroprimatology [1] is a welcome model for cognitive linguists, that is, linguists who ground their models of language in human cognition and language use in social interaction. Arbib argues that language emerged via biological and cultural coevolution [1]; linguistic knowledge is represented by constructions, and semantic representations of linguistic constructions are grounded in embodied perceptual-motor schemas (the mirror system hypothesis). My comments offer some refinements from a linguistic point of view.
Simulation of an Isolated Tiltrotor in Hover with an Unstructured Overset-Grid RANS Solver
NASA Technical Reports Server (NTRS)
Lee-Rausch, Elizabeth M.; Biedron, Robert T.
2009-01-01
An unstructured overset-grid Reynolds Averaged Navier-Stokes (RANS) solver, FUN3D, is used to simulate an isolated tiltrotor in hover. An overview of the computational method is presented as well as the details of the overset-grid systems. Steady-state computations within a noninertial reference frame define the performance trends of the rotor across a range of the experimental collective settings. Results are presented to show the effects of off-body grid refinement and blade grid refinement. The computed performance and blade loading trends show good agreement with experimental results and previously published structured overset-grid computations. Off-body flow features indicate a significant improvement in the resolution of the first perpendicular blade vortex interaction with background grid refinement across the collective range. Considering experimental data uncertainty and effects of transition, the prediction of the figure of merit on the baseline and refined grid is reasonable at the higher collective range, within 3 percent of the measured values. At the lower collective settings, the computed figure of merit is approximately 6 percent lower than the experimental data. A comparison of steady and unsteady results shows that with temporal refinement, the dynamic results closely match the steady-state noninertial results, which gives confidence in the accuracy of the dynamic overset-grid approach.
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly, traditional optimization theory is applied to this problem, with the cross-sectional area of each member optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. The technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation (genetic algorithms and differential evolution) to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Motion and Stability of Saturated Soil Systems under Dynamic Loading.
1985-04-04
...theoretical/computational models. The continuing research effort will extend and refine the theoretical models, allow for compressibility of soil as... motion of soil and water and, therefore, a correct theory of liquefaction should not include this assumption. Finite element methodologies have been...
Refinement, Reduction, and Replacement of Animal Toxicity Tests by Computational Methods.
Ford, Kevin A
2016-12-01
Widespread public and scientific interest in promoting the care and well-being of animals used for toxicity testing has given rise to improvements in animal welfare practices and views over time, as well as laws and regulations that support means to reduce, refine, and replace animal use (known as the 3Rs) in certain toxicity studies. One way these regulations continue to achieve their aim is by promoting the research, development, and application of alternative testing approaches to characterize potential toxicities either without animals or with minimal use. An important example of an alternative approach is the use of computational toxicology models. Along with the potential capacity to reduce or replace the use of animals for the assessment of particular toxicological endpoints, computational models offer several advantages compared to in vitro and in vivo approaches, including cost-effectiveness, rapid availability of results, and the ability to fully standardize procedures. Pharmaceutical research incorporating the use of computational models has increased steadily over the past 15 years, likely driven by the motivation of companies to screen out toxic compounds in the early stages of development. Models are currently available to aid in the prediction of several important toxicological endpoints, including mutagenicity, carcinogenicity, eye irritation, hepatotoxicity, and skin sensitization, albeit with varying degrees of success. This review serves to introduce the concepts of computational toxicology and evaluate their role in the safety assessment of compounds, while also highlighting the application of in silico methods in the support of the goal and vision of the 3Rs. © The Author 2016. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research.All rights reserved. For permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang
2017-08-01
Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.
Wind Farm LES Simulations Using an Overset Methodology
NASA Astrophysics Data System (ADS)
Ananthan, Shreyas; Yellapantula, Shashank
2017-11-01
Accurate simulation of wind farm wakes under realistic atmospheric inflow conditions and complex terrain requires modeling a wide range of length and time scales. The computational domain can span several kilometers while requiring mesh resolutions of O(10⁻⁶) to adequately resolve the boundary layer on the blade surface. Overset mesh methodology offers an attractive option to address the disparate range of length scales; it allows embedding body-conforming meshes around turbine geometries within nested wake-capturing meshes of varying resolutions necessary to accurately model the inflow turbulence and the resulting wake structures. Dynamic overset hole-cutting algorithms permit relative mesh motion that allows this nested mesh structure to track unsteady inflow direction changes, turbine control changes (yaw and pitch), and wake propagation. An LES model with overset mesh for localized mesh refinement is used to analyze wind farm wakes and performance and compared with local mesh refinements using non-conformal (hanging node) unstructured meshes. Turbine structures will be modeled using both actuator line approaches and fully-resolved structures to test the efficacy of overset methods for wind farm applications. Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration.
Development of a Graduate Course in Computer-Aided Geometric Design.
ERIC Educational Resources Information Center
Ault, Holly K.
1991-01-01
Described is a course that focuses on theory and techniques for ideation and refinement of geometric models used in mechanical engineering design applications. The course objectives, course outline, a description of the facilities, sample exercises, and a discussion of final projects are included. (KR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun
This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.
Physics-based method to validate and repair flaws in protein structures
Martin, Osvaldo A.; Arnautova, Yelena A.; Icazatti, Alejandro A.; Scheraga, Harold A.; Vila, Jorge A.
2013-01-01
A method that makes use of information provided by the combination of ¹³Cα and ¹³Cβ chemical shifts, computed at the density functional level of theory, enables one to (i) validate, at the residue level, conformations of proteins and detect backbone or side-chain flaws by taking into account an ensemble average of chemical shifts over all of the conformations used to represent a protein, with a sensitivity of ∼90%; and (ii) provide a set of (χ1/χ2) torsional angles that leads to optimal agreement between the observed and computed ¹³Cα and ¹³Cβ chemical shifts. The method has been incorporated into the CheShift-2 protein validation Web server. To test the reliability of the provided set of (χ1/χ2) torsional angles, the side chains of all reported conformations of five NMR-determined protein models were refined by a simple routine, without using NOE-based distance restraints. The refinement of each of these five proteins leads to optimal agreement between the observed and computed ¹³Cα and ¹³Cβ chemical shifts for ∼94% of the flaws, on average, without introducing a significantly large number of violations of the NOE-based distance restraints for a distance range ≤ 0.5 Å, in which the largest number of distance violations occurs. The results of this work suggest that use of the provided set of (χ1/χ2) torsional angles together with other observables, such as NOEs, should lead to a fast and accurate refinement of the side-chain conformations of protein models. PMID:24082119
A Refined Cauchy-Schwarz Inequality
ERIC Educational Resources Information Center
Mercer, Peter R.
2007-01-01
The author presents a refinement of the Cauchy-Schwarz inequality. He shows his computations in which refinements of the triangle inequality and its reverse inequality are obtained for nonzero x and y in a normed linear space.
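The article's particular refinement is not reproduced in this record, but a classical identity of the same flavor, valid for nonzero x and y in any inner product space, shows how such refinements arise:

\[ \langle x, y \rangle \;=\; \|x\|\,\|y\|\left(1 - \frac{1}{2}\,\Bigl\|\frac{x}{\|x\|} - \frac{y}{\|y\|}\Bigr\|^{2}\right), \]

so the Cauchy-Schwarz bound follows with an explicit nonnegative correction term.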
Navier-Stokes calculations for 3D gaseous fuel injection with data comparisons
NASA Technical Reports Server (NTRS)
Fuller, E. J.; Walters, R. W.
1991-01-01
Results from a computational study and experiments designed to further expand the knowledge of gaseous injection into supersonic cross-flows are presented. Experiments performed at Mach 6 included several cases of gaseous helium injection with low transverse angles and injection with low transverse angles coupled with a low yaw angle. Both experimental and computational data confirm that injector yaw has an adverse effect on the helium core decay rate. An array of injectors is found to give higher penetration into the freestream without loss of core injectant decay as compared to a single injector. Lateral diffusion plays a major role in lateral plume spreading, eddy viscosity, injectant plume, and injectant-freestream mixing. Grid refinement makes it possible to capture the gradients in the streamwise direction accurately and to vastly improve the data comparisons. Computational results for a refined grid are found to compare favorably with experimental data on injectant overall and core penetration provided laminar lateral diffusion was taken into account using the modified Baldwin-Lomax turbulence model.
Progress in Computational Simulation of Earthquakes
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay; Lyzenga, Gregory; Judd, Michele; Li, P. Peggy; Norton, Charles; Tisdale, Edwin; Granat, Robert
2006-01-01
GeoFEST(P) is a computer program written for use in the QuakeSim project, which is devoted to development and improvement of means of computational simulation of earthquakes. GeoFEST(P) models interacting earthquake fault systems from the fault-nucleation to the tectonic scale. The development of GeoFEST(P) has involved coupling of two programs: GeoFEST and the Pyramid Adaptive Mesh Refinement Library. GeoFEST is a message-passing-interface-parallel code that utilizes a finite-element technique to simulate evolution of stress, fault slip, and plastic/elastic deformation in realistic materials like those of faulted regions of the crust of the Earth. The products of such simulations are synthetic observable time-dependent surface deformations on time scales from days to decades. Pyramid Adaptive Mesh Refinement Library is a software library that facilitates the generation of computational meshes for solving physical problems. In an application of GeoFEST(P), a computational grid can be dynamically adapted as stress grows on a fault. Simulations on workstations using a few tens of thousands of stress and displacement finite elements can now be expanded to multiple millions of elements with greater than 98-percent scaled efficiency on many hundreds of parallel processors.
Anisotropic mesh adaptation for marine ice-sheet modelling
NASA Astrophysics Data System (ADS)
Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier
2017-04-01
Improving forecasts of ice-sheet contributions to sea-level rise requires, amongst other things, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component for obtaining reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target for the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice at controlling the numerical error, and it has some flexibility, such as its ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the inter-comparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). We show that combining these variables reduces the number of mesh nodes by more than one order of magnitude, for the same numerical accuracy, compared with uniform mesh refinement. For transient solutions where the GL is moving, we have implemented an algorithm in which the computation is reiterated, making it possible to anticipate the GL displacement and to adapt the mesh to the transient solution. We discuss the performance and robustness of this algorithm.
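For orientation, interpolation-error-driven metrics of this kind are commonly built from the Hessian of the adapted variable; a schematic form (the exact estimator used in the study may differ in detail) is

\[ \mathcal{M}(x) = R\,\tilde{\Lambda}\,R^{T}, \qquad \tilde{\lambda}_i = \min\!\left(\max\!\left(\frac{c\,|\lambda_i|}{\varepsilon},\; h_{\max}^{-2}\right),\; h_{\min}^{-2}\right), \]

where \(\lambda_i\) and \(R\) come from the eigendecomposition of the Hessian of the variable, \(\varepsilon\) is the target interpolation error, and \(h_{\min}\), \(h_{\max}\) bound the admissible edge lengths; metrics computed for several variables can then be combined by metric intersection.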
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Wang, Y.; Shu, C.
2017-12-01
This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.
Zhu, Tong; Zhang, John Z H; He, Xiao
2014-09-14
In this work, protein side chain ¹H chemical shifts are used as probes to detect and correct side-chain packing errors in protein NMR structures through structural refinement. By applying the automated fragmentation quantum mechanics/molecular mechanics (AF-QM/MM) method for ab initio calculation of chemical shifts, incorrect side chain packing was detected in the NMR structures of the Pin1 WW domain. The NMR structure is then refined by using molecular dynamics simulation and the polarized protein-specific charge (PPC) model. The computationally refined structure of the Pin1 WW domain is in excellent agreement with the corresponding X-ray structure. In particular, the use of the PPC model yields a more accurate structure than that using the standard (nonpolarizable) force field. For comparison, some of the widely used empirical models for chemical shift calculations are unable to correctly describe the relationship between a particular proton chemical shift and protein structure. The AF-QM/MM method can be used as a powerful tool for protein NMR structure validation and structural flaw detection.
Extracting falsifiable predictions from sloppy models.
Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P
2007-12-01
Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
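A toy calculation makes the phenomenon concrete. The sketch below, with an invented sum-of-exponentials model and made-up sample times (not data from the paper), computes the eigenvalues of the Fisher information matrix; a spread over many decades is the hallmark of sloppiness:

```python
# Eigenvalues of J^T J (Fisher information with unit noise) for the model
# y(t) = sum_i exp(-k_i * t); the rates k_i are illustrative.
import numpy as np

t = np.linspace(0.1, 5.0, 50)
rates = np.array([0.5, 1.0, 2.0, 4.0])

# Analytic Jacobian: d/dk_i exp(-k_i * t) = -t * exp(-k_i * t)
J = np.stack([-t * np.exp(-k * t) for k in rates], axis=1)

eig = np.linalg.eigvalsh(J.T @ J)
print("eigenvalues span ~%.1f decades" % np.log10(eig.max() / eig.min()))
```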
Hearing the shape of the Ising model with a programmable superconducting-flux annealer.
Vinci, Walter; Markström, Klas; Boixo, Sergio; Roy, Aidan; Spedalieri, Federico M; Warburton, Paul A; Severini, Simone
2014-07-16
Two objects can be distinguished if they have different measurable properties. Thus, distinguishability depends on the physics of the objects. In considering graphs, we revisit the Ising model as a framework to define physically meaningful spectral invariants. In this context, we introduce a family of refinements of the classical spectrum and consider the quantum partition function. We demonstrate that the energy spectrum of the quantum Ising Hamiltonian is a stronger invariant than the classical one without refinements. For the purpose of implementing the related physical systems, we perform experiments on a programmable annealer with superconducting flux technology. Departing from the paradigm of adiabatic computation, we take advantage of a noisy evolution of the device to generate statistics of low energy states. The graphs considered in the experiments have the same classical partition functions, but different quantum spectra. The data obtained from the annealer distinguish non-isomorphic graphs via information contained in the classical refinements of the functions but not via the differences in the quantum spectra.
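To make the classical invariant concrete, the sketch below brute-forces classical Ising partition functions for two small example graphs (arbitrary choices, not the co-spectral pairs used in the experiments):

```python
# Z(beta) = sum over spin configurations s in {-1,+1}^n of exp(-beta * H(s)),
# with ferromagnetic H(s) = -sum over edges (i,j) of s_i * s_j.
import itertools
import math

def partition_function(edges, n, beta):
    z = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        energy = -sum(spins[i] * spins[j] for i, j in edges)
        z += math.exp(-beta * energy)
    return z

g1 = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle
g2 = [(0, 1), (1, 2), (2, 3), (0, 2)]   # path plus a chord
print(partition_function(g1, 4, 0.5), partition_function(g2, 4, 0.5))
```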
ERIC Educational Resources Information Center
Schaffhauser, Dian
2012-01-01
This article features a major statewide initiative in North Carolina that is showing how a consortium model can minimize risks for districts and help them exploit the advantages of cloud computing. Edgecombe County Public Schools in Tarboro, North Carolina, intends to exploit a major cloud initiative being refined in the state and involving every…
A computational cognitive model of self-efficacy and daily adherence in mHealth.
Pirolli, Peter
2016-12-01
Mobile health (mHealth) applications provide an excellent opportunity for collecting rich, fine-grained data necessary for understanding and predicting day-to-day health behavior change dynamics. A computational predictive model (ACT-R-DStress) is presented and fit to individual daily adherence in 28-day mHealth exercise programs. The ACT-R-DStress model refines the psychological construct of self-efficacy. To explain and predict the dynamics of self-efficacy and predict individual performance of targeted behaviors, the self-efficacy construct is implemented as a theory-based neurocognitive simulation of the interaction of behavioral goals, memories of past experiences, and behavioral performance.
Investigation of supersonic jet plumes using an improved two-equation turbulence model
NASA Technical Reports Server (NTRS)
Lakshmanan, B.; Abdol-Hamid, Khaled S.
1994-01-01
Supersonic jet plumes were studied using a two-equation turbulence model employing corrections for compressible dissipation and pressure-dilatation. A space-marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that two-equation models employing corrections for compressible dissipation and pressure-dilatation yield improved agreement with the experimental data. In addition, the numerical study demonstrates that the computed results are sensitive to the effect of grid refinement and insensitive to the type of velocity profiles used at the inflow boundary for the cases considered in the present study.
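Compressible-dissipation corrections of the kind referenced here are often written schematically (the paper's exact form and constants are not given in this record) as

\[ \varepsilon = \varepsilon_s\left(1 + \alpha_1 M_t^{2}\right), \qquad M_t^{2} = \frac{2k}{a^{2}}, \]

where \(\varepsilon_s\) is the solenoidal dissipation, \(M_t\) is the turbulent Mach number formed from the turbulent kinetic energy \(k\) and the sound speed \(a\), and \(\alpha_1\) is a model coefficient of order unity.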
Minimum-complexity helicopter simulation math model
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimal-complexity helicopter simulation math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a buildup of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.
Millán, Claudia; Sammito, Massimo Domenico; McCoy, Airlie J; Nascimento, Andrey F Ziem; Petrillo, Giovanna; Oeffner, Robert D; Domínguez-Gil, Teresa; Hermoso, Juan A; Read, Randy J; Usón, Isabel
2018-04-01
Macromolecular structures can be solved by molecular replacement provided that suitable search models are available. Models from distant homologues may deviate too much from the target structure to succeed, notwithstanding an overall similar fold or even their featuring areas of very close geometry. Successful methods to make the most of such templates usually rely on the degree of conservation to select and improve search models. ARCIMBOLDO_SHREDDER uses fragments derived from distant homologues in a brute-force approach driven by the experimental data, instead of by sequence similarity. The new algorithms implemented in ARCIMBOLDO_SHREDDER are described in detail, illustrating its characteristic aspects in the solution of new and test structures. In an advance from the previously published algorithm, which was based on omitting or extracting contiguous polypeptide spans, model generation now uses three-dimensional volumes respecting structural units. The optimal fragment size is estimated from the expected log-likelihood gain (LLG) values computed assuming that a substructure can be found with a level of accuracy near that required for successful extension of the structure, typically below 0.6 Å root-mean-square deviation (r.m.s.d.) from the target. Better sampling is attempted through model trimming or decomposition into rigid groups and optimization through Phaser's gyre refinement. Also, after model translation, packing filtering and refinement, models are either disassembled into predetermined rigid groups and refined (gimble refinement) or Phaser's LLG-guided pruning is used to trim the model of residues that are not contributing signal to the LLG at the target r.m.s.d. value. Phase combination among consistent partial solutions is performed in reciprocal space with ALIXE. Finally, density modification and main-chain autotracing in SHELXE serve to expand to the full structure and identify successful solutions. The performance on test data and the solution of new structures are described.
NASA Technical Reports Server (NTRS)
Thompson, D.; Mogili, P.; Chalasani, S.; Addy, H.; Choo, Y.
2004-01-01
Steady-state solutions of the Reynolds-averaged Navier-Stokes (RANS) equations were computed using the Cobalt flow solver for a constant-section, rectangular wing based on an extruded two-dimensional glaze ice shape. The one-equation Spalart-Allmaras turbulence model was used. The results were compared with data obtained from a recent wind tunnel test. Computed results indicate that the steady RANS solutions do not accurately capture the recirculating region downstream of the ice accretion, even after a mesh refinement. The resulting predicted reattachment is farther downstream than indicated by the experimental data. Additionally, the solutions computed on a relatively coarse baseline mesh had detailed flow characteristics that were different from those computed on the refined mesh or the experimental data. Steady RANS solutions were also computed to investigate the effects of spanwise variation in the ice shape. The spanwise variation was obtained via a bleeding function that merged the ice shape with the clean wing using a sinusoidal spanwise variation. For these configurations, the results predicted for the extruded shape provided conservative estimates for the performance degradation of the wing. Additionally, the spanwise variation in the ice shape and the resulting differences in the flow fields did not significantly change the location of the primary reattachment.
NASA Astrophysics Data System (ADS)
Dobravec, Tadej; Mavrič, Boštjan; Šarler, Božidar
2017-11-01
A two-dimensional model to simulate the dendritic and eutectic growth in binary alloys is developed. A cellular automaton method is adopted to track the movement of the solid-liquid interface. The diffusion equation is solved in the solid and liquid phases by using an explicit finite volume method. The computational domain is divided into square cells that can be hierarchically refined or coarsened using an adaptive mesh based on the quadtree algorithm. Such a mesh refines the regions of the domain near the solid-liquid interface, where the highest concentration gradients are observed. In the regions where the lowest concentration gradients are observed the cells are coarsened. The originality of the work is in the novel, adaptive approach to the efficient and accurate solution of the posed multiscale problem. The model is verified and assessed by comparison with the analytical results of the Lipton-Glicksman-Kurz model for the steady growth of a dendrite tip and the Jackson-Hunt model for regular eutectic growth. Several examples of typical microstructures are simulated and the features of the method as well as further developments are discussed.
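The explicit finite-volume diffusion update at the heart of such models is compact enough to sketch; the following minimal version (uniform grid, periodic boundaries, invented material values) illustrates the step and its stability constraint, not the paper's implementation:

```python
# One forward-Euler diffusion step on a uniform 2D grid.
import numpy as np

D, dx = 1e-9, 1e-6                 # diffusivity [m^2/s], cell size [m]
dt = 0.2 * dx * dx / D             # below the 2D stability limit dx^2 / (4 D)

c = np.random.rand(64, 64)         # concentration field
lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
       np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c) / dx**2
c_new = c + dt * D * lap           # periodic boundaries via np.roll
```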
GRIDGEN Version 1.0: a computer program for generating unstructured finite-volume grids
Lien, Jyh-Ming; Liu, Gaisheng; Langevin, Christian D.
2015-01-01
GRIDGEN is a computer program for creating layered quadtree grids for use with numerical models, such as the MODFLOW–USG program for simulation of groundwater flow. The program begins by reading a three-dimensional base grid, which can have variable row and column widths and spatially variable cell top and bottom elevations. From this base grid, GRIDGEN will continuously divide into four any cell intersecting user-provided refinement features (points, lines, and polygons) until the desired level of refinement is reached. GRIDGEN will then smooth, or balance, the grid so that no two adjacent cells, including overlying and underlying cells, differ by more than a user-specified level tolerance. Once these gridding processes are completed, GRIDGEN saves a tree structure file so that the layered quadtree grid can be quickly reconstructed as needed. Once a tree structure file has been created, GRIDGEN can then be used to (1) export the layered quadtree grid as a shapefile, (2) export grid connectivity and cell information as ASCII text files for use with MODFLOW–USG or other numerical models, and (3) intersect the grid with shapefiles of points, lines, or polygons, and save intersection output as ASCII text files and shapefiles. The GRIDGEN program is demonstrated by creating a layered quadtree grid for the Biscayne aquifer in Miami-Dade County, Florida, using hydrologic features to control where refinement is added.
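A stripped-down version of the refine-then-balance idea can be sketched as follows; the class, the single refinement point, and the brute-force neighbor search are hypothetical simplifications of what GRIDGEN actually does with point, line, and polygon features:

```python
# Quadtree refinement toward a feature point, then level balancing.
from dataclasses import dataclass

@dataclass
class Cell:
    x0: float
    y0: float
    size: float
    level: int

def refine(leaves, px, py, max_level):
    out = []
    for c in leaves:
        hit = c.x0 <= px < c.x0 + c.size and c.y0 <= py < c.y0 + c.size
        if hit and c.level < max_level:
            h = c.size / 2.0
            kids = [Cell(c.x0 + i * h, c.y0 + j * h, h, c.level + 1)
                    for i in (0, 1) for j in (0, 1)]
            out.extend(refine(kids, px, py, max_level))
        else:
            out.append(c)
    return out

def touches(a, b):
    # Leaves share an edge or corner (they never overlap).
    return (a.x0 <= b.x0 + b.size and b.x0 <= a.x0 + a.size and
            a.y0 <= b.y0 + b.size and b.y0 <= a.y0 + a.size)

def balance(leaves, tol=1):
    changed = True
    while changed:
        changed = False
        for a in list(leaves):
            if any(touches(a, b) and b.level - a.level > tol for b in leaves):
                leaves.remove(a)           # split the too-coarse cell
                h = a.size / 2.0
                leaves += [Cell(a.x0 + i * h, a.y0 + j * h, h, a.level + 1)
                           for i in (0, 1) for j in (0, 1)]
                changed = True
                break
    return leaves

leaves = balance(refine([Cell(0.0, 0.0, 1.0, 0)], 0.3, 0.7, max_level=5))
```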
Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model
NASA Astrophysics Data System (ADS)
Kumar, M.; Duffy, C.
2006-05-01
Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface, subsurface properties and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes imposes a smaller computational load, but this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes, (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables, and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and the time scales of hydrologic processes which are dominant in different parts of the basin are different. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.
Vanishing Point Extraction and Refinement for Robust Camera Calibration
Tsai, Fuan
2017-01-01
This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966
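One standard relation behind calibration from three orthogonal vanishing points (stated here for context; the paper's collinearity-based formulation may differ) is that, for a camera with square pixels, the vanishing points \(v_1, v_2\) of any two mutually orthogonal directions satisfy

\[ (v_1 - p)\cdot(v_2 - p) + f^{2} = 0, \]

where \(p\) is the principal point and \(f\) the focal length; with all three vanishing points available, \(p\) is the orthocenter of the triangle they form, after which \(f\) follows from the relation above.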
NASA Astrophysics Data System (ADS)
Avbelj, Janja; Iwaszczuk, Dorota; Müller, Rupert; Reinartz, Peter; Stilla, Uwe
2015-02-01
For image fusion in remote sensing applications, the georeferencing accuracy achievable using position, attitude, and camera calibration measurements can be insufficient. Thus, image processing techniques should be employed for precise coregistration of images. In this article a method for multimodal object-based image coregistration refinement between hyperspectral images (HSI) and digital surface models (DSM) is presented. The method is divided into three parts: object outline detection in HSI and DSM, matching, and determination of transformation parameters. The novelty of our proposed coregistration refinement method is the use of material properties and height information of urban objects from HSI and DSM, respectively. We refer to urban objects as objects which are typical in urban environments and focus on buildings by describing them with 2D outlines. Furthermore, the geometric accuracy of these detected building outlines is taken into account in the matching step and for the determination of transformation parameters. Hence, a stochastic model is introduced to compute optimal transformation parameters. The feasibility of the method is shown by testing it on two aerial HSI of different spatial and spectral resolution, and two DSM of different spatial resolution. The evaluation is carried out by comparing the accuracies of the transformation parameters to the reference parameters, determined by considering object outlines at much higher resolution, and also by computing the correctness and the quality rate of the extracted outlines before and after coregistration refinement. Results indicate that using outlines of objects instead of only line segments is advantageous for coregistration of HSI and DSM. The extraction of building outlines, in comparison to line cue extraction, provides a larger number of assigned lines between the images and is more robust to outliers, i.e. false matches.
Detonation product EOS studies: Using ISLS to refine CHEETAH
NASA Astrophysics Data System (ADS)
Zaug, Joseph; Fried, Larry; Hansen, Donald
2001-06-01
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a suite of non-ideal simple fluids and fluid mixtures. Impulsive Stimulated Light Scattering conducted in the diamond-anvil cell offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model CHEETAH. Computational models are systematically improved with each addition of experimental data. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.
Reference Solutions for Benchmark Turbulent Flows in Three Dimensions
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Pandya, Mohagna J.; Rumsey, Christopher L.
2016-01-01
A grid convergence study is performed to establish benchmark solutions for turbulent flows in three dimensions (3D) in support of the turbulence-model verification campaign at the Turbulence Modeling Resource (TMR) website. The three benchmark cases are subsonic flows around a 3D bump and a hemisphere-cylinder configuration and a supersonic internal flow through a square duct. Reference solutions are computed for the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model, using a linear eddy-viscosity model for the external flows and a nonlinear eddy-viscosity model based on a quadratic constitutive relation for the internal flow. The study involves three widely used practical computational fluid dynamics codes developed and supported at NASA Langley Research Center: FUN3D, USM3D, and CFL3D. Reference steady-state solutions computed with these three codes on families of consistently refined grids are presented. Grid-to-grid and code-to-code variations are described in detail.
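Grid convergence studies of this kind typically report an observed order of accuracy via Richardson extrapolation; the snippet below shows the standard computation with placeholder values (not TMR data):

```python
# Observed order of accuracy from three consistently refined grids.
import math

f_fine, f_med, f_coarse = 1.0012, 1.0050, 1.0201   # hypothetical functionals
r = 2.0                                            # grid refinement ratio

p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)  # Richardson estimate
print("observed order ~ %.2f, extrapolated value ~ %.5f" % (p, f_exact))
```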
PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles
1999-01-01
In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
Bechtel, William; Abrahamsen, Adele
2010-09-01
We consider computational modeling in two fields: chronobiology and cognitive science. In circadian rhythm models, variables generally correspond to properties of parts and operations of the responsible mechanism. A computational model of this complex mechanism is grounded in empirical discoveries and contributes a more refined understanding of the dynamics of its behavior. In cognitive science, on the other hand, computational modelers typically advance de novo proposals for mechanisms to account for behavior. They offer indirect evidence that a proposed mechanism is adequate to produce particular behavioral data, but typically there is no direct empirical evidence for the hypothesized parts and operations. Models in these two fields differ in the extent of their empirical grounding, but they share the goal of achieving dynamic mechanistic explanation. That is, they augment a proposed mechanistic explanation with a computational model that enables exploration of the mechanism's dynamics. Using exemplars from circadian rhythm research, we extract six specific contributions provided by computational models. We then examine cognitive science models to determine how well they make the same types of contributions. We suggest that the modeling approach used in circadian research may prove useful in cognitive science as researchers develop procedures for experimentally decomposing cognitive mechanisms into parts and operations and begin to understand their nonlinear interactions.
i3Drefine software for protein 3D structure refinement and its assessment in CASP10.
Bhattacharya, Debswapna; Cheng, Jianlin
2013-01-01
Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software and systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, comparing it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/.
Yildirim, Ilyas; Park, Hajeung; Disney, Matthew D.; Schatz, George C.
2013-01-01
One class of functionally important RNA is repeating transcripts that cause disease through various mechanisms. For example, expanded r(CAG) repeats can cause Huntington's and other diseases through translation of toxic proteins. Herein, the crystal structure of r[5ʹUUGGGC(CAG)3GUCC]2, a model of CAG-expanded transcripts, refined to 1.65 Å resolution, is disclosed; it shows both anti-anti and syn-anti orientations for 1×1 nucleotide AA internal loops. Molecular dynamics (MD) simulations using the Amber force field in explicit solvent were run for over 500 ns on model systems r(5ʹGCGCAGCGC)2 (MS1) and r(5ʹCCGCAGCGG)2 (MS2). In these MD simulations, both anti-anti and syn-anti AA base pairs appear to be stable. While anti-anti AA base pairs were dynamic and sampled multiple anti-anti conformations, no syn-anti↔anti-anti transformations were observed. Umbrella sampling simulations were run on MS2, and a 2D free energy surface was created to extract transformation pathways. In addition, an explicit-solvent MD simulation of over 800 ns was run on r[5ʹGGGC(CAG)3GUCC]2, which closely represents the refined crystal structure. One of the terminal AA base pairs (in the syn-anti conformation) transformed to the anti-anti conformation. The pathway followed in this transformation was the one predicted by umbrella sampling simulations. Further analysis showed a binding pocket near AA base pairs in syn-anti conformations. Computational results combined with the refined crystal structure show that the global minimum conformation of 1×1 nucleotide AA internal loops in r(CAG) repeats is anti-anti but can adopt syn-anti depending on the environment. These results are important for understanding RNA dynamics-function relationships and for developing small molecules that target RNA dynamic ensembles. PMID:23441937
Impact of Variable-Resolution Meshes on Regional Climate Simulations
NASA Astrophysics Data System (ADS)
Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.
2014-12-01
The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitations, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.
A template-based approach for parallel hexahedral two-refinement
Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.
2016-10-17
Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.
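For illustration, the uniform subdivision of a hex into its eight children, the refinement state that the four templates transition into, can be sketched as below; the function name and axis-aligned box representation are invented:

```python
# Split an axis-aligned hex [lo, hi] into eight children via midpoints.
import numpy as np

def split_hex(lo, hi):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    mid = 0.5 * (lo + hi)
    kids = []
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                sel = np.array([i, j, k])
                c_lo = np.where(sel == 0, lo, mid)
                c_hi = np.where(sel == 0, mid, hi)
                kids.append((c_lo, c_hi))
    return kids

children = split_hex([0, 0, 0], [1, 1, 1])   # eight 0.5^3 boxes
```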
NASA Astrophysics Data System (ADS)
Engwirda, Darren
2017-06-01
An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.
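A user-defined mesh-spacing function of the kind described might look like the following sketch (all names and numbers invented), blending a coarse background size into a refined region by great-circle distance:

```python
# Target cell size [km] at (lat, lon), refined around a focal point.
import numpy as np

def spacing_km(lat, lon, lat0=40.0, lon0=-60.0,
               h_coarse=120.0, h_fine=15.0, width_deg=20.0):
    lat, lon, lat0, lon0 = map(np.radians, (lat, lon, lat0, lon0))
    # Haversine great-circle distance, converted to degrees of arc.
    a = (np.sin((lat - lat0) / 2.0) ** 2 +
         np.cos(lat) * np.cos(lat0) * np.sin((lon - lon0) / 2.0) ** 2)
    d = np.degrees(2.0 * np.arcsin(np.sqrt(a)))
    w = np.exp(-(d / width_deg) ** 2)   # 1 near the centre, 0 far away
    return h_fine * w + h_coarse * (1.0 - w)

print(spacing_km(40.0, -60.0), spacing_km(-30.0, 120.0))
```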
Discontinuous Galerkin Methods for Turbulence Simulation
NASA Technical Reports Server (NTRS)
Collis, S. Scott
2002-01-01
A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.
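The idea of enforcing Dirichlet data weakly through numerical fluxes can be illustrated in miniature outside a full DG code. The sketch below, a hypothetical first-order upwind step for 1D linear advection, feeds the inflow value into the boundary flux instead of overwriting the boundary cell; the names and setup are ours, not the paper's.

```python
import numpy as np

def advect_upwind_weak_bc(u, a, dx, dt, u_inflow):
    """One first-order upwind step for u_t + a u_x = 0 (a > 0).
    The inflow Dirichlet value enters only through the boundary flux,
    the same idea, in miniature, as weak boundary enforcement in DG."""
    # Upwind flux at every cell face; the ghost state is the BC value.
    flux = a * np.concatenate(([u_inflow], u))
    return u - dt / dx * (flux[1:] - flux[:-1])
```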
Programming new geometry restraints: Parallelity of atomic groups
Sobolev, Oleg V.; Afonine, Pavel V.; Adams, Paul D.; ...
2015-08-01
Improvements in structural biology methods, in particular crystallography and cryo-electron microscopy, have created an increased demand for the refinement of atomic models against low-resolution experimental data. One way to compensate for the lack of high-resolution experimental data is to use a priori information about model geometry that can be utilized in refinement in the form of stereochemical restraints or constraints. Here, the definition and calculation of the restraints that can be imposed on planar atomic groups, in particular the angle between such groups, are described. Detailed derivations of the restraint targets and their gradients are provided so that they can be readily implemented in other contexts. Practical implementations of the restraints, and of associated data structures, in the Computational Crystallography Toolbox (cctbx) are presented.
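As an illustration of the kind of restraint described (not the actual cctbx implementation), the angle between two planar groups can be defined from best-fit plane normals and its deviation penalized harmonically; the helper names below are hypothetical.

```python
import numpy as np

def plane_normal(coords):
    """Unit normal of the best-fit plane through a set of atoms: the
    right singular vector with the smallest singular value of the
    centred coordinates (direction of least variance)."""
    centred = coords - coords.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    return vt[-1]

def parallelity_residual(group1, group2, target_deg=0.0, weight=1.0):
    """Harmonic penalty on the angle between the planes of two atomic
    groups, e.g. to keep two aromatic rings parallel during
    low-resolution refinement."""
    n1, n2 = plane_normal(group1), plane_normal(group2)
    cosang = abs(np.dot(n1, n2))  # sign-free: these are planes, not vectors
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return weight * (angle - target_deg) ** 2
```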
NASA Astrophysics Data System (ADS)
Bowles, C.
2013-12-01
Ecological engineering, or eco engineering, is an emerging field in the study of integrating ecology and engineering, concerned with the design, monitoring, and construction of ecosystems. According to Mitsch (1996) 'the design of sustainable ecosystems intends to integrate human society with its natural environment for the benefit of both'. Eco engineering emerged as a new idea in the early 1960s, and the concept has seen refinement since then. As a commonly practiced field of engineering it is relatively novel. Howard Odum (1963) and others first introduced it as 'utilizing natural energy sources as the predominant input to manipulate and control environmental systems'. Mitsch and Jorgensen (1989) were the first to define eco engineering, to provide eco engineering principles and conceptual eco engineering models. Later they refined the definition and increased the number of principles. They suggested that the goals of eco engineering are: a) the restoration of ecosystems that have been substantially disturbed by human activities such as environmental pollution or land disturbance, and b) the development of new sustainable ecosystems that have both human and ecological values. Here a more detailed overview of eco engineering is provided, particularly with regard to how engineers and ecologists are utilizing multi-dimensional computational models to link ecology and engineering, resulting in increasingly successful project implementation. Descriptions are provided pertaining to 1-, 2- and 3-dimensional hydrodynamic models and their use at small- and large-scale applications. A range of conceptual models that have been developed to aid in the creation of linkages between ecology and engineering are discussed. Finally, several case studies that link ecology and engineering via computational modeling are provided. These studies include localized stream rehabilitation, spawning gravel enhancement on a large river system, and watershed-wide floodplain modeling of the Sacramento River Valley.
Investigation of Carbohydrate Recognition via Computer Simulation
Johnson, Quentin R.; Lindsay, Richard J.; Petridis, Loukas; ...
2015-04-28
Carbohydrate recognition by proteins, such as lectins and other (bio)molecules, can be essential for many biological functions. Interest has arisen due to potential protein and drug design and future bioengineering applications. A quantitative measurement of carbohydrate-protein interaction is thus important for the full characterization of sugar recognition. In this review, we focus on the use of computer simulations and biophysical models to evaluate the strength and specificity of carbohydrate recognition. With increasing computational resources, better algorithms and refined modeling parameters, using state-of-the-art supercomputers to calculate the strength of the interaction between molecules has become increasingly mainstream. We review the current state of this technique and its successful applications for studying protein-sugar interactions in recent years.
Mehl, Steffen W.; Hill, Mary C.
2013-01-01
This report documents the addition of ghost node Local Grid Refinement (LGR2) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference groundwater flow model. LGR2 provides the capability to simulate groundwater flow using multiple block-shaped higher-resolution local grids (a child model) within a coarser-grid parent model. LGR2 accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the grid-refinement interface boundary. LGR2 can be used in two-and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems. Traditional one-way coupled telescopic mesh refinement methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled ghost-node method of LGR2 provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions and require an increase in computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR2, evaluates accuracy and performance for two-and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH2) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR2.
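The iterative parent-child coupling idea can be sketched on a toy 1D Laplace problem. The code below is a schematic alternating iteration under simplifying assumptions (Dirichlet interface data only, a closed-form stand-in for the solver); the actual ghost-node method of LGR2 also feeds fluxes back to the parent, which is omitted here.

```python
import numpy as np

def solve_laplace_1d(n, left, right):
    """Direct solve of u'' = 0 on a uniform 1-D grid with Dirichlet ends.
    The exact solution is linear, so linspace stands in for a solver."""
    return np.linspace(left, right, n)

def coupled_refinement(n_parent=11, n_child=21, child_span=(0.25, 0.5),
                       tol=1e-10, max_iter=50):
    """Schematic iterative parent/child coupling on u''=0, u(0)=0, u(1)=1.
    The parent supplies interface heads to the child; the child solution
    overwrites the parent nodes it covers; iterate to convergence."""
    x_p = np.linspace(0.0, 1.0, n_parent)
    u_p = solve_laplace_1d(n_parent, 0.0, 1.0)
    a, b = child_span
    for _ in range(max_iter):
        ua, ub = np.interp(a, x_p, u_p), np.interp(b, x_p, u_p)
        u_c = solve_laplace_1d(n_child, ua, ub)      # refined child solve
        inside = (x_p >= a) & (x_p <= b)
        old = u_p[inside].copy()
        u_p[inside] = np.interp(x_p[inside], np.linspace(a, b, n_child), u_c)
        if np.max(np.abs(u_p[inside] - old)) < tol:  # interface converged
            break
    return x_p, u_p
```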
Towards An Engineering Discipline of Computational Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mili, Ali; Sheldon, Frederick T; Jilani, Lamia Labed
2007-01-01
George Boole ushered in the era of modern logic by arguing that logical reasoning does not fall in the realm of philosophy, as it was considered up to his time, but in the realm of mathematics. As such, logical propositions and logical arguments are modeled using algebraic structures. Likewise, we submit that security attributes must be modeled as formal mathematical propositions that are subject to mathematical analysis. In this paper, we approach this problem by attempting to model security attributes in a refinement-like framework that has traditionally been used to represent reliability and safety claims. Keywords: computable security attributes, survivability, integrity, dependability, reliability, safety, security, verification, testing, fault tolerance.
Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.
2015-07-28
A method of simulating X-ray diffuse scattering from multi-model PDB files is presented. Despite similar agreement with Bragg data, different translation–libration–screw refinement strategies produce unique diffuse intensity patterns. Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier’s equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls-as-xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis.
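The core of the ensemble-based calculation is Guinier's equation, which isolates the diffuse intensity as the variance of the structure factor over the ensemble. A minimal numpy sketch for point atoms (uniform scattering factor, no symmetry, hypothetical array shapes) follows; phenix.diffuse itself is far more complete.

```python
import numpy as np

def diffuse_intensity(ensemble, q_vectors, f_atom=1.0):
    """Diffuse scattering from a structural ensemble via Guinier's
    equation: I_diffuse(q) = <|F(q)|^2> - |<F(q)>|^2, averaging over
    ensemble members (e.g. models sampled from TLS tensors).

    ensemble : (n_models, n_atoms, 3) coordinates
    q_vectors: (n_q, 3) scattering vectors
    """
    # Phase q.r for every model, q-vector, atom.
    phases = np.einsum('qk,mak->mqa', q_vectors, ensemble)
    F = f_atom * np.exp(1j * phases).sum(axis=-1)   # (n_models, n_q)
    mean_sq = (np.abs(F) ** 2).mean(axis=0)         # <|F|^2>
    sq_mean = np.abs(F.mean(axis=0)) ** 2           # |<F>|^2 (Bragg part)
    return mean_sq - sq_mean
```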
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Alpert, D.J.; Burke, R.P.
1984-03-01
The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.
NASA Astrophysics Data System (ADS)
Mössinger, Peter; Jester-Zürker, Roland; Jung, Alexander
2015-01-01
Numerical investigations of hydraulic turbomachines under steady-state conditions are state of the art in current product development processes. Nevertheless, increasing computational resources allow refined discretization methods, more sophisticated turbulence models, and therefore better prediction of results as well as quantification of the existing uncertainties. Single-stage investigations are performed using in-house tools for meshing and set-up. Besides different model domains and a mesh study to reduce mesh dependencies, several eddy-viscosity and Reynolds stress turbulence models are investigated. All obtained results are compared with available model test data. In addition to global values, measured pressures and velocities in the vaneless space and at runner blade and draft tube positions are considered. From these comparisons it is possible to estimate the influence and relevance of the various model domains for different operating points and numerical variations. Good agreement with the pressure and velocity measurements is found for all model configurations and for all turbulence models except the BSL-RSM model. At part load, deviations in hydraulic efficiency are large, whereas at the best-efficiency and high-load operating points the computed efficiencies are close to the measurement. Including the runner side-gap geometry, as well as refining the mesh, improves the results in terms of either hydraulic efficiency or velocity distribution, at the cost of less stable numerics and increased computational time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrisochoides, N.; Sukup, F.
In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, because it avoids the need for global mesh refinement. Its implementation on distributed-memory multicomputers using the traditional data-parallel model has proven very inefficient due to the excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate the synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimal overhead compared to the "best" sequential implementation of the BW algorithm.
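For readers unfamiliar with the Bowyer-Watson step the abstract parallelizes, here is a minimal sequential sketch of one point insertion: collect the 'cavity' of triangles whose circumcircles contain the new point, then re-triangulate its boundary. The super-triangle setup and orientation bookkeeping are omitted, and the in-circle test assumes counter-clockwise triangles.

```python
def circumcircle_contains(tri, p, pts):
    """In-circle determinant test: True if p lies inside the circumcircle
    of triangle tri (assumed counter-clockwise)."""
    (ax, ay), (bx, by), (cx, cy) = (pts[i] for i in tri)
    px, py = p
    d = [[ax - px, ay - py, (ax - px)**2 + (ay - py)**2],
         [bx - px, by - py, (bx - px)**2 + (by - py)**2],
         [cx - px, cy - py, (cx - px)**2 + (cy - py)**2]]
    det = (d[0][0]*(d[1][1]*d[2][2] - d[2][1]*d[1][2])
         - d[0][1]*(d[1][0]*d[2][2] - d[2][0]*d[1][2])
         + d[0][2]*(d[1][0]*d[2][1] - d[2][0]*d[1][1]))
    return det > 0

def bowyer_watson_insert(triangles, p_index, pts):
    """One point insertion: remove the 'bad' triangles forming the cavity,
    then connect the new point to the cavity boundary edges."""
    p = pts[p_index]
    bad = [t for t in triangles if circumcircle_contains(t, p, pts)]
    edges = {}
    for t in bad:
        for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            key = tuple(sorted(e))
            edges[key] = edges.get(key, 0) + 1
    # Boundary edges belong to exactly one bad triangle.
    boundary = [e for e, n in edges.items() if n == 1]
    triangles = [t for t in triangles if t not in bad]
    triangles += [(a, b, p_index) for a, b in boundary]
    return triangles
```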
Milly, Paul C.D.; Dunne, Krista A.
2011-01-01
Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model’s apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen–Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors’ findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water.
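To make the temperature sensitivity concrete, a sketch of the classic Jensen-Haise form of potential evapotranspiration follows; the coefficients are the textbook 1963 values, not the modified formula actually used in the study, so treat the numbers as purely illustrative.

```python
def jensen_haise_pet(t_mean_c, rs_mm_day, ct=0.025, tx=-3.0):
    """Textbook Jensen-Haise potential evapotranspiration (mm/day).

    t_mean_c : mean air temperature (deg C)
    rs_mm_day: solar radiation as equivalent evaporation (mm/day)
    ct, tx   : empirical coefficients (classic 1963 values; the
               'modified' form used in the study differs in detail).
    """
    return max(ct * (t_mean_c - tx) * rs_mm_day, 0.0)

# With radiation held fixed, PET responds linearly to warming, which is
# the temperature-driven amplification the authors flag.
base = jensen_haise_pet(20.0, 8.0)
warm = jensen_haise_pet(22.0, 8.0)
print((warm - base) / base)  # ~ +8.7% for this illustrative case
```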
Liu, Zhenyu; Szarecka, Agnieszka; Yonkunas, Michael; Speranskiy, Kirill; Kurnikova, Maria; Cascio, Michael
2014-01-01
The glycine receptor (GlyR), a member of the pentameric ligand-gated ion channel superfamily, is the major inhibitory neurotransmitter-gated receptor in the spinal cord and brainstem. In these receptors, the extracellular domain binds agonists, antagonists and various other modulatory ligands that act allosterically to modulate receptor function. The structures of homologous receptors and binding proteins provide templates for modeling of the ligand-binding domain of GlyR, but limitations in sequence homology and structure resolution impact on modeling studies. The determination of distance constraints via chemical crosslinking studies coupled with mass spectrometry can provide additional structural information to aid in model refinement; however, it is critical to be able to distinguish between intra- and inter-subunit constraints. In this report we model the structure of GlyBP, a structural and functional homolog of the extracellular domain of human homomeric α1 GlyR. We then show that intra- and inter-subunit Lys-Lys crosslinks in trypsinized samples of purified monomeric and oligomeric protein bands from SDS-polyacrylamide gels may be identified and differentiated by MALDI-TOF MS studies of limited resolution. Thus, broadly available MS platforms are capable of providing distance constraints that may be utilized in characterizing large complexes that may be less amenable to NMR and crystallographic studies. Systematic studies of state-dependent chemical crosslinking and mass spectrometric identification of crosslinked sites have the potential to complement computational modeling efforts by providing constraints that can validate and refine allosteric models. PMID:25025226
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, P.; Seth, D.L.; Ray, A.K.
A detailed and systematic study of the nature of the discretization error associated with the upwind finite-difference method is presented. A basic model problem has been identified and, based upon the results for this problem, a basic hypothesis regarding the accuracy of the computational solution of the Spencer-Lewis equation is formulated. The basic hypothesis is then tested under various systematic single complexifications of the basic model problem. The results of these tests provide the framework of the refined hypothesis presented in the concluding comments.
Towards Modeling False Memory With Computational Knowledge Bases.
Li, Justin; Kohanyi, Emma
2017-01-01
One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling.
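A minimal version of the spreading activation mechanism used in the paper can be written in a few lines; the graph, decay, and threshold values below are illustrative, not the authors' parameters. In a DRM-style run, the unstudied critical lure accumulates activation from its studied associates.

```python
def spread_activation(graph, sources, decay=0.5, threshold=0.01, max_steps=5):
    """Minimal spreading activation over a weighted semantic network.

    graph   : dict node -> list of (neighbour, weight)
    sources : studied list items, given full initial activation
    Returns the final activation of every node touched.
    """
    activation = {n: 1.0 for n in sources}
    frontier = dict(activation)
    for _ in range(max_steps):
        pulses = {}
        for node, act in frontier.items():
            for nb, w in graph.get(node, []):
                pulse = act * w * decay
                if pulse >= threshold:          # cut off negligible spread
                    pulses[nb] = pulses.get(nb, 0.0) + pulse
        for nb, a in pulses.items():
            activation[nb] = activation.get(nb, 0.0) + a
        frontier = pulses
        if not frontier:
            break
    return activation

# 'sleep' is never presented, yet accumulates activation from associates.
net = {'bed': [('sleep', 0.8)], 'rest': [('sleep', 0.7)], 'dream': [('sleep', 0.9)]}
print(spread_activation(net, ['bed', 'rest', 'dream'])['sleep'])  # ~1.2
```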
NASA Astrophysics Data System (ADS)
Jahandari, H.; Farquharson, C. G.
2017-11-01
Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinement compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step, and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices, which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration, which greatly reduces the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions satisfactorily recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
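The matrix-free Gauss-Newton idea (never forming J explicitly, only applying it and its transpose through pseudo-forward solves) can be sketched with SciPy's LinearOperator; the damping term and the jvp/jtvp callbacks below are placeholders standing in for the solver machinery the abstract describes.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_step(m, resid, jvp, jtvp, mu=1e-2):
    """One model-space Gauss-Newton step without forming J.

    resid : data residual r = d_obs - F(m)
    jvp   : callable v -> J v    (one pseudo-forward solve)
    jtvp  : callable w -> J^T w  (one adjoint/pseudo-forward solve)

    Solves the damped normal equations (J^T J + mu I) dm = J^T r with
    CG, so only matrix-vector products are ever needed, which is the
    memory-saving device described above.
    """
    n = m.size
    A = LinearOperator((n, n), matvec=lambda v: jtvp(jvp(v)) + mu * v)
    dm, info = cg(A, jtvp(resid), maxiter=200)
    return m + dm
```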
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives an LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional existing solvers.
Olson, Mark A; Feig, Michael; Brooks, Charles L
2008-04-15
This article examines ab initio methods for the prediction of protein loops by a computational strategy of multiscale conformational sampling and physical energy scoring functions. Our approach consists of initial sampling of loop conformations from lattice-based low-resolution models followed by refinement using all-atom simulations. To allow enhanced conformational sampling, the replica exchange method was implemented. Physical energy functions based on CHARMM19 and CHARMM22 parameterizations with generalized Born (GB) solvent models were applied in scoring loop conformations extracted from the lattice simulations and, in the case of all-atom simulations, the ensemble of conformations were generated and scored with these models. Predictions are reported for 25 loop segments, each eight residues long and taken from a diverse set of 22 protein structures. We find that the simulations generally sampled conformations with low global root-mean-square deviation (RMSD) for loop backbone coordinates from the known structures, whereas clustering conformations in RMSD space and scoring detected less favorable loop structures. Specifically, the lattice simulations sampled basins that exhibited an average global RMSD of 2.21 ± 1.42 Å, whereas clustering and scoring the loop conformations determined an RMSD of 3.72 ± 1.91 Å. Using CHARMM19/GB to refine the lattice conformations improved the sampling RMSD to 1.57 ± 0.98 Å and detection to 2.58 ± 1.48 Å. We found that further improvement could be gained from extending the upper temperature in the all-atom refinement from 400 to 800 K, where the results typically yield a reduction of approximately 1 Å or greater in the RMSD of the detected loop. Overall, CHARMM19 with a simple pairwise GB solvent model is more efficient at sampling low-RMSD loop basins than CHARMM22 with a higher-resolution modified analytical GB model; however, the latter simulation method provides a more accurate description of the all-atom energy surface, yet demands a much greater computational cost.
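The replica exchange device mentioned above rests on a single Metropolis-style swap criterion between neighbouring temperatures. A minimal sketch, with an assumed Boltzmann constant in kcal/(mol K), is:

```python
import math
import random

def try_swap(E_i, E_j, T_i, T_j, kB=0.0019872):
    """Metropolis acceptance for exchanging configurations between two
    replicas at temperatures T_i and T_j: accept with probability
    min(1, exp((beta_i - beta_j) * (E_i - E_j))).  This is the move
    that lets high-temperature replicas carry loop conformations over
    energy barriers before handing them back down."""
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
    return delta >= 0 or random.random() < math.exp(delta)
```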
NASA Astrophysics Data System (ADS)
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
A low-dissipation high-order accurate hybrid upwinding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model, and flow-feature-based adaptive mesh refinement (AMR) are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results for flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of the high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
Tian, Ye; Schwieters, Charles D.; Opella, Stanley J.; Marassi, Francesca M.
2011-01-01
AssignFit is a computer program developed within the XPLOR-NIH package for the assignment of dipolar coupling (DC) and chemical shift anisotropy (CSA) restraints derived from the solid-state NMR spectra of protein samples with uniaxial order. The method is based on minimizing the difference between experimentally observed solid-state NMR spectra and the frequencies back calculated from a structural model. Starting with a structural model and a set of DC and CSA restraints grouped only by amino acid type, as would be obtained by selective isotopic labeling, AssignFit generates all of the possible assignment permutations and calculates the corresponding atomic coordinates oriented in the alignment frame, together with the associated set of NMR frequencies, which are then compared with the experimental data for best fit. Incorporation of AssignFit in a simulated annealing refinement cycle provides an approach for simultaneous assignment and structure refinement (SASR) of proteins from solid-state NMR orientation restraints. The methods are demonstrated with data from two integral membrane proteins, one α-helical and one β-barrel, embedded in phospholipid bilayer membranes. PMID:22036904
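The assignment-by-best-fit idea at the heart of AssignFit can be caricatured in a few lines: enumerate the assignment permutations and keep the one minimizing the misfit between back-calculated and observed frequencies. This brute-force sketch (numpy arrays assumed, no structure refinement step) is ours, not the XPLOR-NIH code:

```python
import itertools
import numpy as np

def assign_best_fit(calc_freqs, obs_freqs):
    """Exhaustive assignment search: try every permutation pairing
    calculated with observed NMR frequencies and keep the one with the
    smallest RMS difference.  Feasible only for the small per-residue-type
    groups that selective labeling produces, which is exactly the setting
    described above."""
    best, best_rms = None, np.inf
    for perm in itertools.permutations(range(len(obs_freqs))):
        rms = np.sqrt(np.mean((calc_freqs - obs_freqs[list(perm)]) ** 2))
        if rms < best_rms:
            best, best_rms = perm, rms
    return best, best_rms
```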
NASA Astrophysics Data System (ADS)
Pantano, Carlos
2005-11-01
We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical-dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved-scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).
Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh
NASA Astrophysics Data System (ADS)
Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru
2017-11-01
We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed simple data structure of the building cube method (BCM) where a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution adaptive mesh refinement, which works efficiently with the help of the dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and the parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement where a non-dimensional wall distance y+ near the sphere is used for a criterion of mesh refinement. The result showed the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested a cube-wise algorithm switching where an explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
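The cube-wise algorithm switching mentioned at the end reduces to a per-cube CFL test; a minimal sketch with a hypothetical Cube record follows.

```python
from dataclasses import dataclass

@dataclass
class Cube:
    dx: float            # grid spacing inside this cube (uniform per cube)
    max_velocity: float  # fastest local signal speed

def choose_integrator(cube: Cube, dt: float, cfl_limit: float = 1.0) -> str:
    """Cube-wise scheme switching: explicit time integration where the
    local CFL number permits, implicit where refinement has made the
    cells too small for the global time step."""
    cfl = cube.max_velocity * dt / cube.dx
    return "explicit" if cfl <= cfl_limit else "implicit"

print(choose_integrator(Cube(dx=0.01, max_velocity=2.0), dt=0.004))  # explicit (CFL = 0.8)
```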
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated executions on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
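A common way to capture 'the most dominant underlying relationships' is a proper-orthogonal-decomposition (POD) basis built from snapshots of the expensive model; the sketch below is a generic illustration of that idea, not the specific reduction used in the paper.

```python
import numpy as np

def build_pod_basis(snapshots, energy=0.999):
    """POD basis from a snapshot matrix.

    snapshots: (n_dof, n_runs) outputs of the expensive coupled model.
    Returns the leading modes capturing the requested fraction of the
    snapshot variance.
    """
    centred = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(centred, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(frac, energy)) + 1
    return U[:, :k]

# Reduced model: project the full state onto the basis and back,
#   x_reduced = basis.T @ x_full ; x_approx = basis @ x_reduced
```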
Identification of cascade water tanks using a PWARX model
NASA Astrophysics Data System (ADS)
Mattsson, Per; Zachariah, Dave; Stoica, Petre
2018-06-01
In this paper we consider the identification of a discrete-time nonlinear dynamical model for a cascade water tank process. The proposed method starts with a nominal linear dynamical model of the system, and proceeds to model its prediction errors using a model that is piecewise affine in the data. As data is observed, the nominal model is refined into a piecewise ARX model which can capture a wide range of nonlinearities, such as the saturation in the cascade tanks. The proposed method uses a likelihood-based methodology which adaptively penalizes model complexity and directly leads to a computationally efficient implementation.
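A toy version of the refinement step, fitting a nominal ARX model and then modelling its residuals with regime-dependent corrections, can be sketched as follows. The fixed threshold split is a stand-in; the paper's method learns the partition and penalizes model complexity automatically.

```python
import numpy as np

def fit_arx(phi, y):
    """Least-squares ARX fit: y ~ phi @ theta."""
    theta, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return theta

def fit_pwarx(phi, y, split_col, threshold):
    """Refine a nominal ARX model into a two-regime piecewise-ARX model.

    The regressor column split_col (e.g. a tank level) is thresholded to
    pick the regime, a crude stand-in for the saturation a cascade tank
    exhibits.  Assumes both regimes are populated with data.
    """
    nominal = fit_arx(phi, y)
    resid = y - phi @ nominal                     # nominal prediction errors
    low = phi[:, split_col] <= threshold
    corrections = {True:  fit_arx(phi[low],  resid[low]),
                   False: fit_arx(phi[~low], resid[~low])}
    def predict(p):
        return p @ nominal + p @ corrections[p[split_col] <= threshold]
    return predict
```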
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oubeidillah, Abdoul A; Kao, Shih-Chieh; Ashfaq, Moetasim
2014-01-01
To extend geographical coverage, refine spatial resolution, and improve modeling efficiency, a computation- and data-intensive effort was conducted to organize a comprehensive hydrologic dataset with post-calibrated model parameters for hydro-climate impact assessment. Several key inputs for hydrologic simulation, including meteorologic forcings, soil, land class, vegetation, and elevation, were collected from multiple best-available data sources and organized for 2107 hydrologic subbasins (8-digit hydrologic units, HUC8s) in the conterminous United States at a refined 1/24° (~4 km) spatial resolution. Using high-performance computing for intensive model calibration, a high-resolution parameter dataset was prepared for the macro-scale Variable Infiltration Capacity (VIC) hydrologic model. The VIC simulation was driven by DAYMET daily meteorological forcing and was calibrated against USGS WaterWatch monthly runoff observations for each HUC8. The results showed that this new parameter dataset may help reasonably simulate runoff at most US HUC8 subbasins. Based on this exhaustive calibration effort, it is now possible to accurately estimate the resources required for further model improvement across the entire conterminous United States. We anticipate that through this hydrologic parameter dataset, the repeated effort of fundamental data processing can be lessened, so that research efforts can emphasize the more challenging task of assessing climate change impacts. The pre-organized model parameter dataset will be provided to interested parties to support further hydro-climate impact assessment.
Computational Process Modeling for Additive Manufacturing
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2014-01-01
Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.
NASA Astrophysics Data System (ADS)
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservatism of the underlying numerics. The effect on high-order numerical fluxes of fourth and sixth order is explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage the large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enables understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are light-weight enough to not require significant computational time and yield significant reductions in grid size.
Comparison of local grid refinement methods for MODFLOW
Mehl, S.; Hill, M.C.; Leake, S.A.
2006-01-01
Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions.
The Quality and Validation of Structures from Structural Genomics
Domagalski, Marcin J.; Zheng, Heping; Zimmerman, Matthew D.; Dauter, Zbigniew; Wlodawer, Alexander; Minor, Wladek
2014-01-01
Quality control of three-dimensional structures of macromolecules is a critical step to ensure the integrity of structural biology data, especially those produced by structural genomics centers. Whereas the Protein Data Bank (PDB) has proven to be a remarkable success overall, the inconsistent quality of structures reveals a lack of universal standards for structure/deposit validation. Here, we review the state-of-the-art methods used in macromolecular structure validation, focusing on validation of structures determined by X-ray crystallography. We describe some general protocols used in the rebuilding and re-refinement of problematic structural models. We also briefly discuss some frontier areas of structure validation, including refinement of protein–ligand complexes, automation of structure redetermination, and the use of NMR structures and computational models to solve X-ray crystal structures by molecular replacement. PMID:24203341
NASA Astrophysics Data System (ADS)
Guan, Xiaofei; Pal, Uday B.; Powell, Adam C.
2013-10-01
Magnesium is recovered from partially oxidized scrap alloy by combining refining and solid oxide membrane (SOM) electrolysis. In this combined process, a molten salt eutectic flux (45 wt.% MgF2-55 wt.% CaF2) containing 10 wt.% MgO and 2 wt.% YF3 was used as the medium for magnesium recovery. During refining, magnesium and its oxide are dissolved from the scrap into the molten flux. Forming gas is bubbled through the flux and the dissolved magnesium is removed via the gas phase and condensed in a separate condenser at a lower temperature. The molten flux has a finite solubility for magnesium and acts as a selective medium for magnesium dissolution, but not aluminum or iron, and therefore the magnesium recovered has high purity. After refining, SOM electrolysis is performed in the same reactor to enable electrolysis of the dissolved magnesium oxide in the molten flux producing magnesium at the cathode and oxygen at the SOM anode. During SOM electrolysis, it is necessary to decrease the concentration of the dissolved magnesium in the flux to improve the faradaic current efficiency and prevent degradation of the SOM. Thus, for both refining and SOM electrolysis, it is very important to measure and control the magnesium solubility in the molten flux. High magnesium solubility facilitates refining whereas lower solubility benefits the SOM electrolysis process. Computational fluid dynamics modeling was employed to simulate the flow behavior of the flux stirred by the forming gas. Based on the modeling results, an optimized design of the stirring tubes and its placement in the flux are determined for efficiently removing the dissolved magnesium and also increasing the efficiency of the SOM electrolysis process.
The quest for solvable multistate Landau-Zener models
Sinitsyn, Nikolai A.; Chernyak, Vladimir Y.
2017-05-24
Recently, integrability conditions (ICs) in multistate Landau-Zener (MLZ) theory were proposed. They describe common properties of all known solved systems with linearly time-dependent Hamiltonians. Here we show that ICs enable an efficient computer-assisted search for new solvable MLZ models that span a complexity range from several interacting states to mesoscopic systems with many-body dynamics and combinatorially large phase space. This diversity suggests that nontrivial solvable MLZ models are numerous. Additionally, we refine the formulation of the ICs and extend the class of solvable systems to models with points of multiple diabatic level crossing.
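For a single avoided crossing, the structure of such solvable models can be checked numerically against the classic Landau-Zener formula; the two-state sketch below (hbar = 1, illustrative parameters) integrates one sweep and compares with exp(-2*pi*g^2/v).

```python
import numpy as np
from scipy.integrate import solve_ivp

def lz_diabatic_survival(g, v, t_max=60.0):
    """Integrate one two-level Landau-Zener sweep with
    H(t) = [[v*t/2, g], [g, -v*t/2]] and return the probability of
    remaining in the initial diabatic state (real/imaginary parts are
    stacked so the ODE system stays real-valued)."""
    def rhs(t, psi):
        c1, c2 = psi[0] + 1j * psi[1], psi[2] + 1j * psi[3]
        d1 = -1j * (0.5 * v * t * c1 + g * c2)
        d2 = -1j * (g * c1 - 0.5 * v * t * c2)
        return [d1.real, d1.imag, d2.real, d2.imag]
    sol = solve_ivp(rhs, [-t_max, t_max], [1, 0, 0, 0],
                    rtol=1e-8, atol=1e-10)
    c1 = sol.y[0, -1] + 1j * sol.y[1, -1]
    return abs(c1) ** 2

g, v = 0.25, 1.0
exact = np.exp(-2 * np.pi * g**2 / v)       # single-crossing LZ result
print(lz_diabatic_survival(g, v), exact)     # the two should agree closely
```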
Multiscale Modeling of Damage Processes in fcc Aluminum: From Atoms to Grains
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Saether, E.; Yamakov, V.
2008-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, current analysis is limited to small domains and increasing the size of the MD domain quickly presents intractable computational demands. A preferred approach to surmount this computational limitation has been to combine continuum mechanics-based modeling procedures, such as the finite element method (FEM), with MD analyses thereby reducing the region of atomic scale refinement. Such multiscale modeling strategies can be divided into two broad classifications: concurrent multiscale methods that directly incorporate an atomistic domain within a continuum domain and sequential multiscale methods that extract an averaged response from the atomistic simulation for later use as a constitutive model in a continuum analysis.
An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.
Li, Simin; Li, Jie; Li, Zheng
2016-01-01
Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.
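As context for the decoders compared above, the position-velocity Kalman filter baseline amounts to one predict/update cycle per time bin; the UKF variants replace the linear observation model with a nonlinear encoding model evaluated at sigma points. A generic sketch (matrix names and shapes are ours, not the authors') follows:

```python
import numpy as np

def kf_decode_step(x, P, z, A, W, H, Q):
    """One cycle of a position-velocity Kalman filter decoder.

    x = [px, py, vx, vy] kinematic state, P its covariance,
    z = binned firing rates, A/W = state transition and process noise,
    H/Q = neural observation model (rates ~ H x + noise).
    """
    # Predict the kinematics forward one bin.
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Correct with the observed firing rates.
    S = H @ P_pred @ H.T + Q
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```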
Understanding Plant Nitrogen Metabolism through Metabolomics and Computational Approaches
Beatty, Perrin H.; Klein, Matthias S.; Fischer, Jeffrey J.; Lewis, Ian A.; Muench, Douglas G.; Good, Allen G.
2016-01-01
A comprehensive understanding of plant metabolism could provide a direct mechanism for improving nitrogen use efficiency (NUE) in crops. One of the major barriers to achieving this outcome is our poor understanding of the complex metabolic networks, physiological factors, and signaling mechanisms that affect NUE in agricultural settings. However, an exciting collection of computational and experimental approaches has begun to elucidate whole-plant nitrogen usage and provides an avenue for connecting nitrogen-related phenotypes to genes. Herein, we describe how metabolomics, computational models of metabolism, and flux balance analysis have been harnessed to advance our understanding of plant nitrogen metabolism. We introduce a model describing the complex flow of nitrogen through crops in a real-world agricultural setting and describe how experimental metabolomics data, such as isotope labeling rates and analyses of nutrient uptake, can be used to refine these models. In summary, the metabolomics/computational approach offers an exciting mechanism for understanding NUE that may ultimately lead to more effective crop management and engineered plants with higher yields. PMID:27735856
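Flux balance analysis, one of the computational approaches reviewed, is just a linear program over steady-state fluxes; a toy sketch with a hypothetical three-reaction network follows.

```python
import numpy as np
from scipy.optimize import linprog

def fba(S, lb, ub, objective_index):
    """Flux balance analysis: maximise one flux subject to steady state
    S v = 0 and bounds lb <= v <= ub (a toy version of the approach
    used to trace nitrogen flow through metabolic networks)."""
    n = S.shape[1]
    c = np.zeros(n)
    c[objective_index] = -1.0                 # linprog minimises, so negate
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    return res.x

# Toy network: uptake -> A, A -> B, B -> export; maximise the export flux.
S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
v = fba(S, lb=[0, 0, 0], ub=[10, 10, 10], objective_index=2)
print(v)  # all three fluxes pinned at the uptake limit, here 10
```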
An adaptive discontinuous Galerkin solver for aerodynamic flows
NASA Astrophysics Data System (ADS)
Burgess, Nicholas K.
This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows are presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows. Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement, and order or p-enrichment, is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency, for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement.
This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations. Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high-order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternately, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.
NASA Technical Reports Server (NTRS)
Hargrove, A.
1982-01-01
Optimal digital control of nonlinear multivariable constrained systems was studied. The optimal controller in the form of an algorithm was improved and refined by reducing running time and storage requirements. A particularly difficult system of nine nonlinear state variable equations was chosen as a test problem for analyzing and improving the controller. Lengthy analysis, modeling, computing and optimization were accomplished. A remote interactive teletype terminal was installed. Analysis requiring computer usage of short duration was accomplished using Tuskegee's VAX 11/750 system.
i3Drefine Software for Protein 3D Structure Refinement and Its Assessment in CASP10
Bhattacharya, Debswapna; Cheng, Jianlin
2013-01-01
Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling, bringing them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of the structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software and systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, comparing it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine is the only fully automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/. PMID:23894517
NASA Technical Reports Server (NTRS)
Dunbar, D. N.; Tunnah, B. G.
1979-01-01
Program predicts production volumes of petroleum refinery products, with particular emphasis on aircraft-turbine fuel blends and their key properties. It calculates capital and operating costs for refinery and its margin of profitability. Program also includes provisions for processing of synthetic crude oils from oil shale and coal liquefaction processes and contains highly-detailed blending computations for alternative jet-fuel blends of varying endpoint specifications.
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Maddalon, Dal V.
1998-01-01
Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.
Numerical Simulations For the F-16XL Aircraft Configuration
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa A.; Abdol-Hamid, Khaled; Cavallo, Peter A.; Parlette, Edward B.
2014-01-01
Numerical simulations of flow around the F-16XL are presented as a contribution to the Cranked Arrow Wing Aerodynamic Project International II (CAWAPI-II). The NASA Tetrahedral Unstructured Software System (TetrUSS) is used to perform the numerical simulations. This CFD suite, developed and maintained by NASA Langley Research Center, includes an unstructured grid generation program called VGRID, a postprocessor named POSTGRID, and the flow solver USM3D. The CRISP CFD package is utilized to provide error estimates and grid adaption for verification of USM3D results. A subsonic, high angle-of-attack case, flight condition (FC) 25, is computed and analyzed. Three turbulence models are used in the calculations: the one-equation Spalart-Allmaras (SA) model and the two-equation shear stress transport (SST) and k-ε turbulence models. Computational results and surface static pressure profiles are presented and compared with flight data. Solution verification is performed using formal grid refinement studies, the solution of Error Transport Equations, and adaptive mesh refinement. The current study shows that the USM3D solver coupled with CRISP CFD can be used in an engineering environment to predict vortex-flow physics on a complex configuration at flight Reynolds numbers.
Computer model to simulate testing at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.
1995-01-01
A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
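The organization of flagged cells into logically rectangular refinement blocks is the heart of the Berger-Colella AMR algorithm that the abstract builds on. The Python sketch below is a simplified one-dimensional illustration of that idea (not the authors' code): cells with steep gradients are flagged, and contiguous flagged cells are grouped into blocks; the production algorithm performs this clustering in multiple dimensions with additional efficiency heuristics.

```python
def flag_cells(u, thresh):
    # flag cell i where the undivided difference |u[i+1] - u[i]| exceeds thresh
    return [abs(u[i + 1] - u[i]) > thresh for i in range(len(u) - 1)]

def cluster_into_blocks(flags):
    """Group contiguous flagged cells into rectangular (here 1-D) blocks,
    mimicking the organisation of refined cells into a small number of
    logically rectangular patches."""
    blocks, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i                      # open a new block
        elif not f and start is not None:
            blocks.append((start, i - 1))  # close the current block
            start = None
    if start is not None:
        blocks.append((start, len(flags) - 1))
    return blocks

u = [0, 0, 0, 1, 5, 9, 10, 10, 10]            # a steep front in the middle
print(cluster_into_blocks(flag_cells(u, 2)))  # -> [(3, 4)]
```

Grouping refined cells into a few large blocks is what keeps the data-management overhead small and exposes long regular loops to vector processors, as the abstract notes.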
NASA Astrophysics Data System (ADS)
de Zelicourt, Diane; Ge, Liang; Sotiropoulos, Fotis; Yoganathan, Ajit
2008-11-01
Image-guided computational fluid dynamics has recently gained attention as a tool for predicting the outcome of different surgical scenarios. Cartesian Immersed-Boundary methods constitute an attractive option to tackle the complexity of real-life anatomies. However, when such methods are applied to the branching, multi-vessel configurations typically encountered in cardiovascular anatomies the majority of the grid nodes of the background Cartesian mesh end up lying outside the computational domain, increasing the memory and computational overhead without enhancing the numerical resolution in the region of interest. To remedy this situation, the method presented here superimposes local mesh refinement onto an unstructured Cartesian grid formulation. A baseline unstructured Cartesian mesh is generated by eliminating all nodes that reside in the exterior of the flow domain from the grid structure, and is locally refined in the vicinity of the immersed-boundary. The potential of the method is demonstrated by carrying out systematic mesh refinement studies for internal flow problems ranging in complexity from a 90 deg pipe bend to an actual, patient-specific anatomy reconstructed from magnetic resonance.
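The baseline unstructured Cartesian mesh described above can be pictured as follows: starting from a background lattice, every node outside the flow domain is dropped and the survivors are renumbered compactly. The Python sketch below illustrates that pruning step under assumed names (interior_nodes, inside); it is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def interior_nodes(nx, ny, h, inside):
    """Keep only the nodes of an nx-by-ny background lattice (spacing h)
    that satisfy the user-supplied inside(x, y) predicate, and record
    their new compact indices."""
    index = {}
    coords = []
    for j in range(ny):
        for i in range(nx):
            x, y = i * h, j * h
            if inside(x, y):
                index[(i, j)] = len(coords)  # compact renumbering
                coords.append((x, y))
    return index, np.array(coords)

# toy domain: a disc of radius 0.4 centred in the unit square
inside = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.4 ** 2
index, coords = interior_nodes(21, 21, 0.05, inside)
print(len(index), "of", 21 * 21, "lattice nodes kept")
```

For branching vascular anatomies, the fraction of lattice nodes discarded this way is large, which is exactly the memory saving the abstract describes; local refinement near the immersed boundary is then layered on top of the retained nodes.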
An object-oriented approach for parallel self adaptive mesh refinement on block structured grids
NASA Technical Reports Server (NTRS)
Lemke, Max; Witsch, Kristian; Quinlan, Daniel
1993-01-01
Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library that permits efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is largely reduced to specifying the serial single-grid application, with the parallel and self-adaptive mesh refinement code obtained with minimal additional effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
Steindl, Theodora M; Crump, Carolyn E; Hayden, Frederick G; Langer, Thierry
2005-10-06
The development and application of a sophisticated virtual screening and selection protocol to identify potential, novel inhibitors of the human rhinovirus coat protein employing various computer-assisted strategies are described. A large commercially available database of compounds was screened using a highly selective, structure-based pharmacophore model generated with the program Catalyst. A docking study and a principal component analysis were carried out within the software package Cerius and served to validate and further refine the obtained results. These combined efforts led to the selection of six candidate structures, for which in vitro anti-rhinoviral activity could be shown in a biological assay.
Close to real life. [solving for transonic flow about lifting airfoils using supercomputers]
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Bailey, F. Ron
1988-01-01
NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.
Lemkul, Justin A; MacKerell, Alexander D
2017-05-09
Empirical force fields seek to relate the configuration of a set of atoms to its energy, thus yielding the forces governing its dynamics, using classical physics rather than more expensive quantum mechanical calculations that are computationally intractable for large systems. Most force fields used to simulate biomolecular systems use fixed atomic partial charges, neglecting the influence of electronic polarization, instead making use of a mean-field approximation that may not be transferable across environments. Recent hardware and software developments make polarizable simulations feasible, and to this end, polarizable force fields represent the next generation of molecular dynamics simulation technology. In this work, we describe the refinement of a polarizable force field for DNA based on the classical Drude oscillator model by targeting quantum mechanical interaction energies and conformational energy profiles of model compounds necessary to build a complete DNA force field. The parametrization strategy employed in the present work seeks to correct weak base stacking in A- and B-DNA and the unwinding of Z-DNA observed in the previous version of the force field, called Drude-2013. Refinement of base nonbonded terms and reparametrization of dihedral terms in the glycosidic linkage, deoxyribofuranose rings, and important backbone torsions resulted in improved agreement with quantum mechanical potential energy surfaces. Notably, we expand on previous efforts by explicitly including Z-DNA conformational energetics in the refinement.
NASA Astrophysics Data System (ADS)
Zhang, Weidong; Liu, Jiamin; Yao, Jianhua; Summers, Ronald M.
2013-03-01
Segmentation of the musculature is very important for accurate organ segmentation, analysis of body composition, and localization of tumors in the muscle. In the research fields of computer-assisted surgery and computer-aided diagnosis (CAD), muscle segmentation in CT images is a necessary pre-processing step. This task is particularly challenging due to the large variability in muscle structure and the overlap in intensity between muscle and internal organs. The problem has not been solved completely, especially across the thoracic, abdominal and pelvic regions. We propose an automated system to segment the musculature on CT scans. The method combines an atlas-based model, an active contour model and prior segmentation of fat and bones. First, the body contour, fat and bones are segmented using existing methods. Second, atlas-based models are pre-defined using anatomic knowledge at multiple key positions in the body to handle the large variability in muscle shape. Third, the atlas model is refined using active contour models (ACM) that are constrained by the pre-segmented bone and fat. Before ACM refinement, the initial atlas model for the next slice is updated using the previous slice's result. The muscle is segmented by thresholding and smoothed in 3D volume space. Thoracic, abdominal and pelvic CT scans were used to evaluate our method, and five key-position slices for each case were selected and manually labeled as the reference. Compared with the reference ground truth, the overlap ratio of true positives is 91.1% ± 3.5%, and that of false positives is 5.5% ± 4.2%.
Biological modelling of a computational spiking neural network with neuronal avalanches.
Li, Xiumin; Chen, Qing; Xue, Fangzheng
2017-06-28
In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).
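The synaptic learning in this study uses spike-timing-dependent plasticity; a minimal pair-based STDP rule of the kind typically meant is sketched below in Python. The amplitudes and time constant are illustrative defaults, not values taken from the paper.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight update for a spike pair with dt = t_post - t_pre
    (in ms): pre-before-post (dt > 0) potentiates the synapse, post-before-pre
    (dt < 0) depresses it, with exponentially decaying magnitude."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# toy usage: tightly paired spikes change the weight more than distant ones
for dt in (5.0, 20.0, -5.0, -20.0):
    print(dt, round(stdp_dw(dt), 5))
```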
Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles
Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.; Wall, Michael E.; Jackson, Colin J.; Sauter, Nicholas K.; Adams, Paul D.; Urzhumtsev, Alexandre; Fraser, James S.
2015-01-01
Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier’s equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls_as_xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis. PMID:26249347
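For orientation, the quantity that a tool like phenix.diffuse evaluates from a structural ensemble is commonly written as the following standard relation (given here as background, not necessarily the paper's exact expression):

```latex
I_{\mathrm{diffuse}}(\mathbf{q}) \;=\; \bigl\langle \lvert F(\mathbf{q}) \rvert^{2} \bigr\rangle \;-\; \bigl\lvert \bigl\langle F(\mathbf{q}) \bigr\rangle \bigr\rvert^{2},
```

where F(q) is the structure factor of one ensemble member and the angle brackets denote an average over the ensemble (here, the multi-model PDB files produced by phenix.tls_as_xyz). The Bragg intensities report only on the averaged term, which is why the difference carries the information about correlated displacements.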
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm achieves a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
Johnson, Christie
2016-01-01
This poster presentation presents a content modeling strategy using the SNOMED CT Observable Model to represent large amounts of detailed clinical data in a consistent and computable manner that can support multiple use cases. Lightweight Expression of Granular Objects (LEGOs) represent question/answer pairs on clinical data collection forms, where a question is modeled by a (usually) post-coordinated SNOMED CT expression. LEGOs transform electronic patient data into a normalized consumable, which means that the expressions can be treated as extensions of the SNOMED CT hierarchies for the purpose of performing subsumption queries and other analytics. Utilizing the LEGO approach for modeling clinical data obtained from a nursing admission assessment provides a foundation for data exchange across disparate information systems and software applications. Clinical data exchange of computable LEGO patient information enables the development of more refined data analytics, data storage and clinical decision support.
An adaptive method for a model of two-phase reactive flow on overlapping grids
NASA Astrophysics Data System (ADS)
Schwendeman, D. W.
2008-11-01
A two-phase model of heterogeneous explosives is handled computationally by a new numerical approach that is a modification of the standard Godunov scheme. The approach generates well-resolved and accurate solutions using adaptive mesh refinement on overlapping grids, and treats rationally the nozzling terms that render the otherwise hyperbolic model incapable of a conservative representation. The evolution and structure of detonation waves for a variety of one- and two-dimensional configurations will be discussed, with a focus on problems of detonation diffraction and failure.
Unstructured Euler flow solutions using hexahedral cell refinement
NASA Technical Reports Server (NTRS)
Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.
1991-01-01
An attempt is made to extend grid refinement into three dimensions by using unstructured hexahedral grids. The flow solver is developed using the TIGER (Topologically Independent Grid, Euler Refinement) code as the starting point. The program uses an unstructured hexahedral mesh and a modified version of the Jameson four-stage, finite-volume Runge-Kutta algorithm for integration of the Euler equations. The unstructured mesh allows for local refinement appropriate to each freestream condition, thereby concentrating mesh cells in the regions of greatest interest. This increases computational efficiency because the refinement is not required to extend throughout the entire flow field.
26 CFR 1.199-1 - Income attributable to domestic production activities.
Code of Federal Regulations, 2013 CFR
2013-04-01
... under section 199 is not taken into account in computing any net operating loss or the amount of any net... from the sale of gasoline refined by X within the United States and the sale of refined gasoline that X... gasoline refined by X within the United States (that qualify as DPGR if all the other requirements of § 1...
Three-dimensional elliptic grid generation for an F-16
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1988-01-01
A case history depicting the effort to generate a computational grid for the simulation of transonic flow about an F-16 aircraft at realistic flight conditions is presented. The flow solver for which this grid is designed is a zonal one, using the Reynolds-averaged Navier-Stokes equations near the surface of the aircraft, and the Euler equations in regions removed from the aircraft. A body-conforming global grid, suitable for the Euler equations, is first generated using 3-D Poisson equations having inhomogeneous terms modeled after the 2-D GRAPE code. Regions of the global grid are then designated for zonal refinement as appropriate to accurately model the flow physics. Grid spacing suitable for solution of the Navier-Stokes equations is generated in the refinement zones by simple subdivision of the given coarse grid intervals. That grid generation project is described, with particular emphasis on the global coarse grid.
Detonation Product EOS Studies: Using ISLS to Refine Cheetah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaug, J M; Howard, W M; Fried, L E
2001-08-08
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition, the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Computational models are systematically improved with each addition of experimental data.
Tractable Experiment Design via Mathematical Surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
Numerical simulation of h-adaptive immersed boundary method for freely falling disks
NASA Astrophysics Data System (ADS)
Zhang, Pan; Xia, Zhenhua; Cai, Qingdong
2018-05-01
In this work, a freely falling disk with aspect ratio 1/10 is directly simulated by using an adaptive numerical model implemented on a parallel computation framework JASMIN. The adaptive numerical model is a combination of the h-adaptive mesh refinement technique and the implicit immersed boundary method (IBM). Our numerical results agree well with the experimental results in all of the six degrees of freedom of the disk. Furthermore, very similar vortex structures observed in the experiment were also obtained.
1986-08-01
publication by Ms. Jessica S. Ruff, Information Products Division, WES. This manual is published in loose-leaf format for convenience in periodic ... transfer computations. m. Variety of output options. Background 8. This manual is a product of a program of evaluation and refinement of mathematical water ... zooplankton and higher order herbivores. However, these groups are presently not included in the model. Macrophyte production may also have an impact upon
Snitkin, Evan S; Dudley, Aimée M; Janse, Daniel M; Wong, Kaisheen; Church, George M; Segrè, Daniel
2008-01-01
Background: Understanding the response of complex biochemical networks to genetic perturbations and environmental variability is a fundamental challenge in biology. Integration of high-throughput experimental assays and genome-scale computational methods is likely to produce insight otherwise unreachable, but specific examples of such integration have only begun to be explored. Results: In this study, we measured growth phenotypes of 465 Saccharomyces cerevisiae gene deletion mutants under 16 metabolically relevant conditions and integrated them with the corresponding flux balance model predictions. We first used discordance between experimental results and model predictions to guide a stage of experimental refinement, which resulted in a significant improvement in the quality of the experimental data. Next, we used discordance still present in the refined experimental data to assess the reliability of yeast metabolism models under different conditions. In addition to estimating predictive capacity based on growth phenotypes, we sought to explain these discordances by examining predicted flux distributions visualized through a new, freely available platform. This analysis led to insight into the glycerol utilization pathway and the potential effects of metabolic shortcuts on model results. Finally, we used model predictions and experimental data to discriminate between alternative raffinose catabolism routes. Conclusions: Our study demonstrates how a new level of integration between high-throughput measurements and flux balance model predictions can improve understanding of both experimental and computational results. The added value of a joint analysis is a more reliable platform for specific testing of biological hypotheses, such as the catabolic routes of different carbon sources. PMID:18808699
SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
2015-06-15
Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is then refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
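The two-stage idea reduces, in pseudocode terms, to "rank cheaply, keep a cushion, then rank expensively". A schematic Python sketch follows; the function and score names are hypothetical, and the paper's inference model for sizing the augmented subset is replaced here by a fixed constant n_aug.

```python
def two_stage_select(atlases, cheap_score, full_score, n_aug, n_fuse):
    """Two-stage atlas selection: a cheap preliminary metric trims the full
    collection to an augmented subset of size n_aug, and only those atlases
    receive the expensive full-fledged scoring before the final cut."""
    # stage 1: preliminary selection with the low-cost relevance metric
    augmented = sorted(atlases, key=cheap_score, reverse=True)[:n_aug]
    # stage 2: refinement with the expensive metric (full registration)
    return sorted(augmented, key=full_score, reverse=True)[:n_fuse]

# toy usage: scores are simple lookups; real metrics would come from
# simple vs. full-fledged registration similarity
cheap = {"a": 0.9, "b": 0.7, "c": 0.6, "d": 0.2, "e": 0.1}
full  = {"a": 0.6, "b": 0.9, "c": 0.8, "d": 0.95, "e": 0.1}
print(two_stage_select(list(cheap), cheap.get, full.get, n_aug=3, n_fuse=2))
```

The toy output (['b', 'c']) also shows the failure mode the paper's inference model guards against: atlas 'd' scores best under the full metric but is lost because the preliminary cut was too aggressive, which is why the augmented subset size must be derived rather than guessed.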
Verification of an analytic modeler for capillary pump loop thermal control systems
NASA Technical Reports Server (NTRS)
Schweickart, R. B.; Neiswanger, L.; Ku, J.
1987-01-01
A number of computer programs have been written to model two-phase heat transfer systems for space use. These programs support the design of thermal control systems and provide a method of predicting their performance in the wide range of thermal environments of space. Predicting the performance of one such system known as the capillary pump loop (CPL) is the intent of the CPL Modeler. By modeling two developed CPL systems and comparing the results with actual test data, the CPL Modeler has proven useful in simulating CPL operation. Results of the modeling effort are discussed, together with plans for refinements to the modeler.
Turbulence Modeling: Progress and Future Outlook
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.; Huang, George P.
1996-01-01
Progress in the development of the hierarchy of turbulence models for Reynolds-averaged Navier-Stokes codes used in aerodynamic applications is reviewed. Steady progress is demonstrated, but transfer of the modeling technology has not kept pace with the development and demands of the computational fluid dynamics (CFD) tools. An examination of the process of model development leads to recommendations for a mid-course correction involving close coordination between modelers, CFD developers, and application engineers. In instances where the old process is changed and cooperation enhanced, timely transfer is realized. A turbulence modeling information database is proposed to refine the process and open it to greater participation among modeling and CFD practitioners.
COMET-AR User's Manual: COmputational MEchanics Testbed with Adaptive Refinement
NASA Technical Reports Server (NTRS)
Moas, E. (Editor)
1997-01-01
The COMET-AR User's Manual provides a reference manual for the Computational Structural Mechanics Testbed with Adaptive Refinement (COMET-AR), a software system developed jointly by Lockheed Palo Alto Research Laboratory and NASA Langley Research Center under contract NAS1-18444. The COMET-AR system is an extended version of an earlier finite element based structural analysis system called COMET, also developed by Lockheed and NASA. The primary extensions are the adaptive mesh refinement capabilities and a new "object-like" database interface that makes COMET-AR easier to extend further. This User's Manual provides a detailed description of the user interface to COMET-AR from the viewpoint of a structural analyst.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Vatsa, Veer N.; Atkins, Harold L.
2005-01-01
We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for unstructured grids to unsteady flows on moving and stationary grids. Example problems considered are relevant to active flow control and stability and control. Computational results are presented using the Spalart-Allmaras turbulence model and are compared to experimental data. The effects of grid and time-step refinement are examined.
Mechanism test bed. Flexible body model report
NASA Technical Reports Server (NTRS)
Compton, Jimmy
1991-01-01
The Space Station Mechanism Test Bed is a six degree-of-freedom motion simulation facility used to evaluate docking and berthing hardware mechanisms. A generalized rigid body math model was developed which allowed the computation of vehicle relative motion in six DOF due to forces and moments from mechanism contact, attitude control systems, and gravity. No vehicle size limitations were imposed in the model. The equations of motion were based on Hill's equations for translational motion with respect to a nominal circular earth orbit and Newton-Euler equations for rotational motion. This rigid body model and supporting software were being refined.
On some variational acceleration techniques and related methods for local refinement
NASA Astrophysics Data System (ADS)
Teigland, Rune
1998-10-01
This paper shows that the well-known variational acceleration method described by Wachspress (E. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice-Hall, Englewood Cliffs, NJ, 1966), later generalized to multiple levels as the additive correction multigrid method (B.R. Hutchinson and G.D. Raithby, Numer. Heat Transf., 9, 511-537 (1986)), is similar to the FAC method of McCormick and Thomas (S.F. McCormick and J.W. Thomas, Math. Comput., 46, 439-456 (1986)) and related multilevel methods. The performance of the method is demonstrated for some simple model problems using local refinement, and suggestions for improving the performance of the method are given.
Polder maps: Improving OMIT maps by excluding bulk solvent
Liebschner, Dorothee; Afonine, Pavel V.; Moriarty, Nigel W.; ...
2017-02-01
The crystallographic maps that are routinely used during the structure-solution workflow are almost always model-biased because model information is used for their calculation. As these maps are also used to validate the atomic models that result from model building and refinement, this constitutes an immediate problem: anything added to the model will manifest itself in the map and thus hinder the validation. OMIT maps are a common tool to verify the presence of atoms in the model. The simplest way to compute an OMIT map is to exclude the atoms in question from the structure, update the corresponding structure factors and compute a residual map. It is then expected that if these atoms are present in the crystal structure, the electron density for the omitted atoms will be seen as positive features in this map. This, however, is complicated by the flat bulk-solvent model which is almost universally used in modern crystallographic refinement programs. This model postulates constant electron density at any voxel of the unit-cell volume that is not occupied by the atomic model. Consequently, if the density arising from the omitted atoms is weak then the bulk-solvent model may obscure it further. A possible solution to this problem is to prevent bulk solvent from entering the selected OMIT regions, which may improve the interpretative power of residual maps. This approach is called a polder (OMIT) map. Polder OMIT maps can be particularly useful for displaying weak densities of ligands, solvent molecules, side chains, alternative conformations and residues both in terminal regions and in loops. As a result, the tools described in this manuscript have been implemented and are available in PHENIX.
Zgarbová, Marie; Luque, F. Javier; Šponer, Jiří; Cheatham, Thomas E.; Otyepka, Michal; Jurečka, Petr
2013-01-01
We present a refinement of the backbone torsion parameters ε and ζ of the Cornell et al. AMBER force field for DNA simulations. The new parameters, denoted as εζOL1, were derived from quantum-mechanical calculations with inclusion of conformation-dependent solvation effects according to the recently reported methodology (J. Chem. Theory Comput. 2012, 7(9), 2886-2902). The performance of the refined parameters was analyzed by means of extended molecular dynamics (MD) simulations for several representative systems. The results showed that the εζOL1 refinement improves the backbone description of B-DNA double helices and G-DNA stem. In B-DNA simulations, we observed an average increase of the helical twist and narrowing of the major groove, thus achieving better agreement with X-ray and solution NMR data. The balance between populations of BI and BII backbone substates was shifted towards the BII state, in better agreement with ensemble-refined solution experimental results. Furthermore, the refined parameters decreased the backbone RMS deviations in B-DNA MD simulations. In the antiparallel guanine quadruplex (G-DNA) the εζOL1 modification improved the description of non-canonical α/γ backbone substates, which were shown to be coupled to the ε/ζ torsion potential. Thus, the refinement is suggested as a possible alternative to the current ε/ζ torsion potential, which may enable more accurate modeling of nucleic acids. However, long-term testing is recommended before its routine application in DNA simulations. PMID:24058302
NASA Technical Reports Server (NTRS)
Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.
1991-01-01
The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.
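Local refinement by splitting a box element into eight subelements is the octree operation sketched below in Python. This is an illustrative reconstruction of the subdivision step only, not the cited FEM code, and the function name is assumed.

```python
from itertools import product

def subdivide_box(lo, hi):
    """Split an axis-aligned box [lo, hi] into its eight octant subelements,
    as in local refinement of a Cartesian box element."""
    mid = tuple((l + h) / 2.0 for l, h in zip(lo, hi))
    children = []
    for octant in product((0, 1), repeat=3):  # 2 x 2 x 2 = 8 children
        c_lo = tuple(lo[d] if octant[d] == 0 else mid[d] for d in range(3))
        c_hi = tuple(mid[d] if octant[d] == 0 else hi[d] for d in range(3))
        children.append((c_lo, c_hi))
    return children

# toy usage: refine the unit cube once
for lo, hi in subdivide_box((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)):
    print(lo, hi)
```

Because the background grid is Cartesian and independent of the boundary, repeated application of this split concentrates elements only where needed, with the cut-cell stiffness matrices handling boxes intersected by the boundary surface.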
Emerging approaches in predictive toxicology.
Zhang, Luoping; McHale, Cliona M; Greene, Nigel; Snyder, Ronald D; Rich, Ivan N; Aardema, Marilyn J; Roy, Shambhu; Pfuhler, Stefan; Venkatactahalam, Sundaresan
2014-12-01
Predictive toxicology plays an important role in the assessment of toxicity of chemicals and the drug development process. While there are several well-established in vitro and in vivo assays that are suitable for predictive toxicology, recent advances in high-throughput analytical technologies and model systems are expected to have a major impact on the field of predictive toxicology. This commentary provides an overview of the state of the current science and a brief discussion on future perspectives for the field of predictive toxicology for human toxicity. Computational models for predictive toxicology, needs for further refinement and obstacles to expand computational models to include additional classes of chemical compounds are highlighted. Functional and comparative genomics approaches in predictive toxicology are discussed with an emphasis on successful utilization of recently developed model systems for high-throughput analysis. The advantages of three-dimensional model systems and stem cells and their use in predictive toxicology testing are also described. © 2014 Wiley Periodicals, Inc.
Neuert, Mark A C; Dunning, Cynthia E
2013-09-01
Strain energy-based adaptive material models are used to predict bone resorption resulting from stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters selected were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implementation of commercial orthopedic implants.
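A strain-energy adaptive rule with a homeostatic set point K and a dead-zone half-width s, of the general kind such remodeling models follow, can be sketched as below in Python. The update constant, density bounds, and exact form of the stimulus are illustrative assumptions, not the paper's calibrated values.

```python
def update_density(rho, sed, K, s, B=1.0, dt=1.0, rho_min=0.01, rho_max=1.73):
    """One time step of a strain-energy adaptive remodeling rule.

    rho : current element density
    sed : strain energy density (per unit volume) from the FE solution
    K   : homeostatic strain-energy stimulus (per unit mass)
    s   : half-width of the dead ("lazy") zone around K
    """
    stimulus = sed / rho                 # strain energy per unit mass
    if stimulus > (1.0 + s) * K:         # overloaded: deposit bone
        rho += B * (stimulus - (1.0 + s) * K) * dt
    elif stimulus < (1.0 - s) * K:       # underloaded: resorb bone
        rho += B * (stimulus - (1.0 - s) * K) * dt
    # inside the dead zone: no net remodeling
    return min(max(rho, rho_min), rho_max)

# toy usage: an overloaded element densifies slightly in one step
print(update_density(rho=0.8, sed=0.02, K=0.02, s=0.1))  # -> ~0.803
```

Iterating this update per element until the density field stops changing is what produces the steady-state distributions that the study compares against the µCT-derived values; the sensitivity to K and s is exactly what the parameter refinement probes.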
Links, Jonathan M; Schwartz, Brian S; Lin, Sen; Kanarek, Norma; Mitrani-Reiser, Judith; Sell, Tara Kirk; Watson, Crystal R; Ward, Doug; Slemp, Cathy; Burhans, Robert; Gill, Kimberly; Igusa, Tak; Zhao, Xilei; Aguirre, Benigno; Trainor, Joseph; Nigg, Joanne; Inglesby, Thomas; Carbone, Eric; Kendra, James M
2018-02-01
Policy-makers and practitioners have a need to assess community resilience in disasters. Prior efforts conflated resilience with community functioning, combined resistance and recovery (the components of resilience), and relied on a static model for what is inherently a dynamic process. We sought to develop linked conceptual and computational models of community functioning and resilience after a disaster. We developed a system dynamics computational model that predicts community functioning after a disaster. The computational model outputted the time course of community functioning before, during, and after a disaster, which was used to calculate resistance, recovery, and resilience for all US counties. The conceptual model explicitly separated resilience from community functioning and identified all key components for each, which were translated into a system dynamics computational model with connections and feedbacks. The components were represented by publicly available measures at the county level. Baseline community functioning, resistance, recovery, and resilience evidenced a range of values and geographic clustering, consistent with hypotheses based on the disaster literature. The work is transparent, motivates ongoing refinements, and identifies areas for improved measurements. After validation, such a model can be used to identify effective investments to enhance community resilience. (Disaster Med Public Health Preparedness. 2018;12:127-137).
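To make the resistance/recovery/resilience decomposition concrete, the Python sketch below derives the three quantities from a simulated community-functioning time course. The specific metric definitions used here (nadir ratio, time to return to baseline, cumulative deficit) are a plausible reading of the abstract, not the authors' published formulas.

```python
def resilience_metrics(t, f, t_event, baseline):
    """Derive resistance, recovery time, and a resilience deficit from a
    community functioning time course f(t) around a disaster at t_event.

    resistance : fraction of baseline functioning retained at the nadir
    recovery   : time from the event until functioning regains baseline
    deficit    : cumulative functioning lost relative to baseline
    """
    post = [(ti, fi) for ti, fi in zip(t, f) if ti >= t_event]
    nadir = min(fi for _, fi in post)
    resistance = nadir / baseline
    recovered = [ti for ti, fi in post if fi >= baseline and ti > t_event]
    recovery = (recovered[0] - t_event) if recovered else float("inf")
    deficit = sum(max(baseline - fi, 0.0) for _, fi in post)  # unit steps
    return resistance, recovery, deficit

t = list(range(10))  # months
f = [1.0, 1.0, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0, 1.0, 1.0]
print(resilience_metrics(t, f, t_event=2, baseline=1.0))
```

Separating the drop (resistance) from the climb back (recovery) is the point of the authors' conceptual model: two counties with the same total deficit can differ sharply in which component drives it, and hence in which investments would help.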
Park, HaJeung; González, Àlex L; Yildirim, Ilyas; Tran, Tuan; Lohman, Jeremy R; Fang, Pengfei; Guo, Min; Disney, Matthew D
2015-06-23
Spinocerebellar ataxia type 10 (SCA10) is caused by a pentanucleotide repeat expansion of r(AUUCU) within intron 9 of the ATXN10 pre-mRNA. The RNA causes disease by a gain-of-function mechanism in which it inactivates proteins involved in RNA biogenesis. Spectroscopic studies showed that r(AUUCU) repeats form a hairpin structure; however, there were no high-resolution structural models prior to this work. Herein, we report the first crystal structure of model r(AUUCU) repeats refined to 2.8 Å and analysis of the structure via molecular dynamics simulations. The r(AUUCU) tracts adopt an overall A-form geometry in which 3 × 3 nucleotide (5')UCU(3')/(3')UCU(5') internal loops are closed by AU pairs. Helical parameters of the refined structure as well as the corresponding electron density map on the crystallographic model reflect dynamic features of the internal loop. The computational analyses captured dynamic motion of the loop closing pairs, which can form single-stranded conformations with relatively low energies. Overall, the results presented here suggest the possibility for r(AUUCU) repeats to form metastable A-form structures, which can rearrange into single-stranded conformations and attract proteins such as heterogeneous nuclear ribonucleoprotein K (hnRNP K). The information presented here may aid in the rational design of therapeutics targeting this RNA.
NASA Astrophysics Data System (ADS)
Becker, Roland; Vexler, Boris
2005-06-01
We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type, using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economical meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.
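Estimators of this goal-oriented kind are typically of dual-weighted-residual form; a common generic statement, given here for orientation rather than as the authors' exact expression, is

```latex
I(u) - I(u_h) \;\approx\; \eta \;=\; \sum_{K \in \mathcal{T}_h} \rho_K(u_h)\,\omega_K(z_h),
```

where ρ_K(u_h) is the local residual of the discrete state u_h on cell K and the weight ω_K(z_h) is computed from the approximate solution z_h of the auxiliary (dual) problem associated with the interest functional I. The adaptive algorithm then refines the cells with the largest contributions η_K.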
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source parameters for location by several hundred percent (normalized by the distance from source to the closest sampler), and improve mass estimates by several orders of magnitude. Furthermore, it also has the ability to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations and adjust the wind to provide a better match between the hazard prediction and the observations.
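As a rough illustration of the refinement idea (not VIRSA itself: the plume formula, sensor layout, wind speed, and noise level are all invented for this sketch), a first-guess source can be improved iteratively by least squares against sensor observations of a Gaussian-plume surrogate:

```python
import numpy as np
from scipy.optimize import least_squares

# Surrogate forward model: ground-level concentration from a point
# source at (x0, y0) with strength q, wind along +x at speed u.
def plume(params, xs, ys, u=5.0):
    x0, y0, q = params
    dx = np.maximum(xs - x0, 1e-3)          # downwind distance (m)
    sig = 0.1 * dx                          # crude dispersion growth
    return q / (2 * np.pi * u * sig**2) * np.exp(-(ys - y0)**2 / (2 * sig**2))

rng = np.random.default_rng(0)
xs = rng.uniform(200, 2000, 25)             # sensor locations (m)
ys = rng.uniform(-300, 300, 25)
obs = plume((0.0, 40.0, 2e3), xs, ys) * (1 + 0.05 * rng.standard_normal(25))

guess = np.array([150.0, -100.0, 5e2])      # e.g., from back-trajectories
fit = least_squares(lambda p: plume(p, xs, ys) - obs, guess,
                    bounds=([-1e3, -1e3, 0.0], [190.0, 1e3, 1e6]))
print("refined source (x0, y0, q):", fit.x.round(1))
```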
On macromolecular refinement at subatomic resolution with interatomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-09
A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ∼1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for the full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small- and macro-molecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
On macromolecular refinement at subatomic resolution with interatomic scatterers
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre
2007-01-01
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package. PMID:18007035
On macromolecular refinement at subatomic resolution with interatomic scatterers.
Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Adams, Paul D; Lunin, Vladimir Y; Urzhumtsev, Alexandre
2007-11-01
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than approximately 1.0 A) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 A, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
AMMOS2: a web server for protein-ligand-water complexes refinement via molecular mechanics.
Labbé, Céline M; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O; Pajeva, Ilza; Miteva, Maria A
2017-07-03
AMMOS2 is an interactive web server for efficient computational refinement of protein-small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein-ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein-ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein-ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein-ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein-ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein-ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
AMMOS2: a web server for protein–ligand–water complexes refinement via molecular mechanics
Labbé, Céline M.; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O.; Pajeva, Ilza
2017-01-01
AMMOS2 is an interactive web server for efficient computational refinement of protein–small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein–ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein–ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein–ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein–ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein–ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein–ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. PMID:28486703
Hardware-Based Non-Optimum Factors for Launch Vehicle Structural Design
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Cerro, Jeffrey A.
2010-01-01
During aerospace vehicle conceptual and preliminary design, empirical non-optimum factors are typically applied to predicted structural component weights to account for undefined manufacturing and design details. Non-optimum factors are developed here for 32 aluminum-lithium 2195 orthogrid panels comprising the liquid hydrogen tank barrel of the Space Shuttle External Tank using measured panel weights and manufacturing drawings. Minimum values for skin thickness, axial and circumferential blade stiffener thickness and spacing, and overall panel thickness are used to estimate individual panel weights. Panel non-optimum factors computed using a coarse weights model range from 1.21 to 1.77, and a refined weights model (including weld lands and skin and stiffener transition details) yields non-optimum factors of between 1.02 and 1.54. Acreage panels have an average 1.24 non-optimum factor using the coarse model, and 1.03 with the refined version. The observed consistency of these acreage non-optimum factors suggests that relatively simple models can be used to accurately predict large structural component weights for future launch vehicles.
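The non-optimum factor itself is just the ratio of as-built weight to the idealized weight predicted from minimum gauges; a worked example with hypothetical numbers (not taken from the report):

```python
# Non-optimum factor (NOF) arithmetic; both weights are illustrative.
ideal_weight_lb = 100.0        # weights-model prediction from minimum gauges
measured_weight_lb = 124.0     # weighed as-built hardware
nof = measured_weight_lb / ideal_weight_lb
print(f"non-optimum factor = {nof:.2f}")   # 1.24, cf. the coarse-model acreage average
```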
Determining crystal structures through crowdsourcing and coursework
NASA Astrophysics Data System (ADS)
Horowitz, Scott; Koepnick, Brian; Martin, Raoul; Tymieniecki, Agnes; Winburn, Amanda A.; Cooper, Seth; Flatten, Jeff; Rogawski, David S.; Koropatkin, Nicole M.; Hailu, Tsinatkeab T.; Jain, Neha; Koldewey, Philipp; Ahlstrom, Logan S.; Chapman, Matthew R.; Sikkema, Andrew P.; Skiba, Meredith A.; Maloney, Finn P.; Beinlich, Felix R. M.; Caglar, Ahmet; Coral, Alan; Jensen, Alice Elizabeth; Lubow, Allen; Boitano, Amanda; Lisle, Amy Elizabeth; Maxwell, Andrew T.; Failer, Barb; Kaszubowski, Bartosz; Hrytsiv, Bohdan; Vincenzo, Brancaccio; de Melo Cruz, Breno Renan; McManus, Brian Joseph; Kestemont, Bruno; Vardeman, Carl; Comisky, Casey; Neilson, Catherine; Landers, Catherine R.; Ince, Christopher; Buske, Daniel Jon; Totonjian, Daniel; Copeland, David Marshall; Murray, David; Jagieła, Dawid; Janz, Dietmar; Wheeler, Douglas C.; Cali, Elie; Croze, Emmanuel; Rezae, Farah; Martin, Floyd Orville; Beecher, Gil; de Jong, Guido Alexander; Ykman, Guy; Feldmann, Harald; Chan, Hugo Paul Perez; Kovanecz, Istvan; Vasilchenko, Ivan; Connellan, James C.; Borman, Jami Lynne; Norrgard, Jane; Kanfer, Jebbie; Canfield, Jeffrey M.; Slone, Jesse David; Oh, Jimmy; Mitchell, Joanne; Bishop, John; Kroeger, John Douglas; Schinkler, Jonas; McLaughlin, Joseph; Brownlee, June M.; Bell, Justin; Fellbaum, Karl Willem; Harper, Kathleen; Abbey, Kirk J.; Isaksson, Lennart E.; Wei, Linda; Cummins, Lisa N.; Miller, Lori Anne; Bain, Lyn; Carpenter, Lynn; Desnouck, Maarten; Sharma, Manasa G.; Belcastro, Marcus; Szew, Martin; Szew, Martin; Britton, Matthew; Gaebel, Matthias; Power, Max; Cassidy, Michael; Pfützenreuter, Michael; Minett, Michele; Wesselingh, Michiel; Yi, Minjune; Cameron, Neil Haydn Tormey; Bolibruch, Nicholas I.; Benevides, Noah; Kathleen Kerr, Norah; Barlow, Nova; Crevits, Nykole Krystyne; Dunn, Paul; Silveira Belo Nascimento Roque, Paulo Sergio; Riber, Peter; Pikkanen, Petri; Shehzad, Raafay; Viosca, Randy; James Fraser, Robert; Leduc, Robert; Madala, Roman; Shnider, Scott; de Boisblanc, Sharon; Butkovich, Slava; Bliven, Spencer; Hettler, Stephen; Telehany, Stephen; Schwegmann, Steven A.; Parkes, Steven; Kleinfelter, Susan C.; Michael Holst, Sven; van der Laan, T. J. A.; Bausewein, Thomas; Simon, Vera; Pulley, Warwick; Hull, William; Kim, Annes Yukyung; Lawton, Alexis; Ruesch, Amanda; Sundar, Anjali; Lawrence, Anna-Lisa; Afrin, Antara; Maheshwer, Bhargavi; Turfe, Bilal; Huebner, Christian; Killeen, Courtney Elizabeth; Antebi-Lerrman, Dalia; Luan, Danny; Wolfe, Derek; Pham, Duc; Michewicz, Elaina; Hull, Elizabeth; Pardington, Emily; Galal, Galal Osama; Sun, Grace; Chen, Grace; Anderson, Halie E.; Chang, Jane; Hewlett, Jeffrey Thomas; Sterbenz, Jennifer; Lim, Jiho; Morof, Joshua; Lee, Junho; Inn, Juyoung Samuel; Hahm, Kaitlin; Roth, Kaitlin; Nair, Karun; Markin, Katherine; Schramm, Katie; Toni Eid, Kevin; Gam, Kristina; Murphy, Lisha; Yuan, Lucy; Kana, Lulia; Daboul, Lynn; Shammas, Mario Karam; Chason, Max; Sinan, Moaz; Andrew Tooley, Nicholas; Korakavi, Nisha; Comer, Patrick; Magur, Pragya; Savliwala, Quresh; Davison, Reid Michael; Sankaran, Roshun Rajiv; Lewe, Sam; Tamkus, Saule; Chen, Shirley; Harvey, Sho; Hwang, Sin Ye; Vatsia, Sohrab; Withrow, Stefan; Luther, Tahra K.; Manett, Taylor; Johnson, Thomas James; Ryan Brash, Timothy; Kuhlman, Wyatt; Park, Yeonjung; Popović, Zoran; Baker, David; Khatib, Firas; Bardwell, James C. A.
2016-09-01
We show here that computer game players can build high-quality crystal structures. Introduction of a new feature into the computer game Foldit allows players to build and real-space refine structures into electron density maps. To assess the usefulness of this feature, we held a crystallographic model-building competition between trained crystallographers, undergraduate students, Foldit players and automatic model-building algorithms. After removal of disordered residues, a team of Foldit players achieved the most accurate structure. Analysing the target protein of the competition, YPL067C, uncovered a new family of histidine triad proteins apparently involved in the prevention of amyloid toxicity. From this study, we conclude that crystallographers can utilize crowdsourcing to interpret electron density information and to produce structure solutions of the highest quality.
Determining crystal structures through crowdsourcing and coursework.
Horowitz, Scott; Koepnick, Brian; Martin, Raoul; Tymieniecki, Agnes; Winburn, Amanda A; Cooper, Seth; Flatten, Jeff; Rogawski, David S; Koropatkin, Nicole M; Hailu, Tsinatkeab T; Jain, Neha; Koldewey, Philipp; Ahlstrom, Logan S; Chapman, Matthew R; Sikkema, Andrew P; Skiba, Meredith A; Maloney, Finn P; Beinlich, Felix R M; Popović, Zoran; Baker, David; Khatib, Firas; Bardwell, James C A
2016-09-16
We show here that computer game players can build high-quality crystal structures. Introduction of a new feature into the computer game Foldit allows players to build and real-space refine structures into electron density maps. To assess the usefulness of this feature, we held a crystallographic model-building competition between trained crystallographers, undergraduate students, Foldit players and automatic model-building algorithms. After removal of disordered residues, a team of Foldit players achieved the most accurate structure. Analysing the target protein of the competition, YPL067C, uncovered a new family of histidine triad proteins apparently involved in the prevention of amyloid toxicity. From this study, we conclude that crystallographers can utilize crowdsourcing to interpret electron density information and to produce structure solutions of the highest quality.
NASA Technical Reports Server (NTRS)
Browder, J. A.; May, L. N., Jr.; Rosenthal, A.; Baumann, R. H.; Gosselink, J. G.
1986-01-01
LANDSAT thematic mapper (TM) data are being used to refine and validate a stochastic spatial computer model to be applied to coastal resource management problems in Louisiana. Two major aspects of the research are: (1) the measurement of area of land (or emergent vegetation) and water and the length of the interface between land and water in TM imagery of selected coastal wetlands (sample marshes); and (2) the comparison of spatial patterns of land and water in the sample marshes of the imagery to that in marshes simulated by a computer model. In addition to activities in these two areas, the potential use of a published autocorrelation statistic is analyzed.
The blind leading the blind: Mutual refinement of approximate theories
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa
1991-01-01
The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.
Using Adaptive Mesh Refinement to Simulate Storm Surge
NASA Astrophysics Data System (ADS)
Mandli, K. T.; Dawson, C.
2012-12-01
Coastal hazards related to strong storms such as hurricanes and typhoons are one of the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow as well as particular regions of interest such as harbors. Simulations of many different applications have been made possible only by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
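A minimal sketch of a gradient-based flagging criterion of the kind AMR surge codes use (a simplification: GeoClaw's actual flagging also weighs wave height, bathymetry, and user-specified regions, and the surge field and tolerance here are invented):

```python
import numpy as np

def flag_cells(eta, dx, tol=2e-5):
    """eta: 2-D sea-surface elevation (m); flag cells with steep slope."""
    gy, gx = np.gradient(eta, dx)
    return np.hypot(gx, gy) > tol           # boolean refinement flags

x = np.linspace(0.0, 1e5, 200)              # 100-km domain
X, Y = np.meshgrid(x, x)
eta = 0.5 * np.exp(-((X - 6e4)**2 + (Y - 4e4)**2) / 2e7)  # idealized surge bump
flags = flag_cells(eta, dx=x[1] - x[0])
print(f"{flags.mean():.1%} of cells flagged for a finer grid level")
```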
Numerical model for healthy and injured ankle ligaments.
Forestiero, Antonella; Carniel, Emanuele Luigi; Fontanella, Chiara Giulia; Natali, Arturo Nicola
2017-06-01
The aim of this work is to provide a computational tool for the investigation of ankle mechanics under different loading conditions. The attention is focused on the biomechanical role of the ankle ligaments, which are fundamental for joint stability. A finite element model of the human foot is developed starting from Computed Tomography and Magnetic Resonance Imaging, with particular attention to the definition of the ankle ligaments. A refined fiber-reinforced visco-hyperelastic constitutive model is assumed to characterize the mechanical response of ligaments. Numerical analyses that interpret the anterior drawer and talar tilt tests reported in the literature are performed. The numerical results are in agreement with the range of values obtained by experimental tests, confirming the accuracy of the procedure adopted. The increase of the ankle range of motion after rupture of some ligaments is also evaluated, demonstrating the capability of the numerical model to represent damage conditions. The developed computational model provides a tool for the investigation of foot and ankle functionality in terms of the stress-strain state of the tissues and in terms of ankle motion, considering different types of damage to the ankle ligaments.
NASA Technical Reports Server (NTRS)
Sadovsky, Alexander V.; Davis, Damek; Isaacson, Douglas R.
2012-01-01
A class of problems in air traffic management asks for a scheduling algorithm that supplies the air traffic services authority not only with a schedule of arrivals and departures, but also with speed advisories. Since advisories must be finite, a scheduling algorithm must ultimately produce a finite data set, hence must either start with a purely discrete model or involve a discretization of a continuous one. The former choice, often preferred for intuitive clarity, naturally leads to mixed-integer programs, hindering proofs of correctness and computational cost bounds (crucial for real-time operations). In this paper, a hybrid control system is used to model air traffic scheduling, capturing both the discrete and continuous aspects. This framework is applied to a class of problems, called the Fully Routed Nominal Problem. We prove a number of geometric results on feasible schedules and use these results to formulate an algorithm that attempts to compute a collective speed advisory, effectively finite, and has computational cost polynomial in the number of aircraft. This work is a first step toward optimization and models refined with more realistic detail.
NASA Astrophysics Data System (ADS)
Chertkov, Yu B.; Disyuk, V. V.; Pimenov, E. Yu; Aksenova, N. V.
2017-01-01
Within the framework of research into the possibility and prospects of power density equalization in boiling water reactors (as exemplified by WB-50), work was undertaken to improve the prior computational model of the WB-50 reactor implemented in the MCU-RR software. Analysis of prior works showed that critical-state calculations have deviations of calculated reactivity exceeding ±0.3 % (ΔKef/Kef) for minimum concentrations of boric acid in the reactor water, reaching 2 % for maximum concentration values. The axial coefficient of nonuniform burnup distribution reaches high values in the WB-50 reactor; thus, the computational model needed refinement to take into account burnup inhomogeneity along the fuel assembly height. At this stage, computational results with a mean square deviation of less than 0.7 % (ΔKef/Kef) and a dispersion of design values of ±1 % (ΔK/K) shall be deemed acceptable. Further lowering of these parameters apparently requires root cause analysis of such large values and paying more attention to experimental measurement techniques.
K-Means Subject Matter Expert Refined Topic Model Methodology
2017-01-01
U.S. Army TRADOC Analysis Center-Monterey, 700 Dyer Road..., January 2017. K-means Subject Matter Expert Refined Topic Model Methodology: Topic Model Estimation via K-means. Theodore T. Allen, Ph.D.; Zhenhuan... Contract number W9124N-15-P-0022.
Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991
NASA Technical Reports Server (NTRS)
Sovers, O. J.
1991-01-01
A revision is presented of MASTERFIT-1987, which it supersedes. Changes during 1988 to 1991 included introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling the unique antenna at Richmond, FL, improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised in order to reflect the computer code more faithfully.
Detonation Product EOS Studies: Using ISLS to Refine Cheetah
NASA Astrophysics Data System (ADS)
Zaug, J. M.; Howard, W. M.; Fried, L. E.; Hansen, D. W.
2002-07-01
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.
Analysis of rotor vibratory loads using higher harmonic pitch control
NASA Technical Reports Server (NTRS)
Quackenbush, Todd R.; Bliss, Donald B.; Boschitsch, Alexander H.; Wachspress, Daniel A.
1992-01-01
Experimental studies of isolated rotors in forward flight have indicated that higher harmonic pitch control can reduce rotor noise. These tests also show that such pitch inputs can generate substantial vibratory loads. The modification of the RotorCRAFT (Computation of Rotor Aerodynamics in Forward flighT) analysis of isolated rotors to study the vibratory loading generated by high-frequency pitch inputs is summarized. The original RotorCRAFT code was developed for use in the computation of such loading, and uses a highly refined rotor wake model to facilitate this task. The extended version of RotorCRAFT incorporates a variety of new features including: arbitrary periodic root pitch control; computation of blade stresses and hub loads; improved modeling of near-wake unsteady effects; and preliminary implementation of a coupled prediction of rotor airloads and noise. Correlation studies are carried out with existing blade stress and vibratory hub load data to assess the performance of the extended code.
A Counterexample Guided Abstraction Refinement Framework for Verifying Concurrent C Programs
2005-05-24
source code are routinely executed. The source code is written in languages ranging from C/C++/Java to ML/OCaml. These languages differ not only in...from the difficulty to model computer programs—due to the complexity of programming languages as compared to hardware description languages—to...intermediate specification language lying between high-level Statechart-like formalisms and transition systems. Actions are encoded as changes in
High-Q Superconducting Coplanar Waveguide Resonators for Integration into Molecule Ion Traps
2010-05-01
Wm = V1²C/4 (3.13) and We = V1²/(4ω²L′) (3.14), finally yielding Q = ω0·2Wm/P = R/(ω0L) = ω0RC (3.15), where ω0 = 1/√(LC) is the resonant frequency of the...small. The primary challenge with simulating the microresonators was refining the mesh while remaining under memory limits of the modeling computer. It
Rough mill simulator version 3.0: an analysis tool for refining rough mill operations
Edward Thomas; Joel Weiss
2006-01-01
ROMI-3 is a rough mill computer simulation package designed to be used by both rip-first and chop-first rough mill operators and researchers. ROMI-3 allows users to model and examine the complex relationships among cutting bill, lumber grade mix, processing options, and their impact on rough mill yield and efficiency. Integrated into the ROMI-3 software is a new least-...
ISA-97 Compliant Architecture Testbed (ICAT) Projectry Organizations
1992-03-30
by the System Integration Directorate of the USAISEC, August 29, 1992. The report discusses the refinement of the ISA-97 Compliant Architecture Model...browser and iconic representations of system objects and resources. When the user is interacting with an application which has multiple components, it is...computer communications, it is not uncommon for large information systems to be shared by users on multiple machines. The trend towards the desktop
Simulating Nonequilibrium Radiation via Orthogonal Polynomial Refinement
2015-01-07
measured by the preprocessing time, computer memory space, and average query time. In many search procedures for the number of points np of a data set, a...analytic expression for the radiative flux density is possible by the commonly accepted local thermal equilibrium (LTE) approximation. A semi-
Low Boom Configuration Analysis with FUN3D Adjoint Simulation Framework
NASA Technical Reports Server (NTRS)
Park, Michael A.
2011-01-01
Off-body pressure, forces, and moments for the Gulfstream Low Boom Model are computed with a Reynolds Averaged Navier Stokes solver coupled with the Spalart-Allmaras (SA) turbulence model. This is the first application of viscous output-based adaptation to reduce estimated discretization errors in off-body pressure for a wing body configuration. The output adaptation approach is compared to an a priori grid adaptation technique designed to resolve the signature on the centerline by stretching and aligning the grid to the freestream Mach angle. The output-based approach produced good predictions of centerline and off-centerline measurements. Eddy viscosity predicted by the SA turbulence model increased significantly with grid adaptation. Computed lift as a function of drag compares well with wind tunnel measurements for positive lift, but predicted lift, drag, and pitching moment as a function of angle of attack have significant differences from the measured data. The sensitivity of longitudinal forces and moment to grid refinement is much smaller than the differences between the computed and measured data.
Li, Jia; Xu, Zhenming; Zhou, Yaohe
2008-05-30
Traditionally, the mixed metals from waste printed circuit boards (PCBs) were sent to a smelting plant to refine pure copper. Some valuable metals (aluminum, zinc and tin) with low content in PCBs were lost during smelting. A new method which uses a roll-type electrostatic separator (RES) to recover low-content metals from waste PCBs is presented in this study. The theoretical model, established from computation of the electric field and analysis of the forces on the particles, was used to write a program in the MATLAB language. The program was designed to simulate the process of separating mixed metal particles. Electrical, material and mechanical factors were analyzed to optimize the operating parameters of the separator. The experimental results of separating copper and aluminum particles by RES showed good agreement with the computer simulation results. The model can be used to simulate separation of other metal (tin, zinc, etc.) particles during the process of recycling waste PCBs by RES.
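For intuition, a back-of-envelope force balance of the kind such a trajectory model builds on (the study computes the full electric field of the roll-electrode geometry; the simple image-force pinning criterion and every number below are assumptions for demonstration):

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def stays_pinned(q, r_p, rho, roll_radius, rpm):
    """Crude criterion: a corona-charged nonconductor stays pinned to the
    grounded roll while its electrostatic image force exceeds the
    centrifugal force plus gravity; conductors lose their charge to the
    roll and are thrown off instead."""
    mass = rho * (4.0 / 3.0) * np.pi * r_p**3
    f_image = q**2 / (16 * np.pi * EPS0 * r_p**2)  # charge near grounded plane
    omega = 2 * np.pi * rpm / 60.0
    f_release = mass * (omega**2 * roll_radius + 9.81)
    return f_image > f_release

# 100-um plastic-like particle carrying ~10 pC on a 0.1-m roll at 60 rpm:
print(stays_pinned(q=1e-11, r_p=1e-4, rho=1500.0, roll_radius=0.1, rpm=60.0))
```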
On configurational forces for gradient-enhanced inelasticity
NASA Astrophysics Data System (ADS)
Floros, Dimosthenis; Larsson, Fredrik; Runesson, Kenneth
2018-04-01
In this paper we discuss how configurational forces can be computed in an efficient and robust manner when a constitutive continuum model of gradient-enhanced viscoplasticity is adopted, whereby a suitably tailored mixed variational formulation in terms of displacements and micro-stresses is used. It is demonstrated that such a formulation produces sufficient regularity to overcome numerical difficulties that are notorious for a local constitutive model. In particular, no nodal smoothing of the internal variable fields is required. Moreover, the pathological mesh sensitivity that has been reported in the literature for a standard local model is no longer present. Numerical results in terms of configurational forces are shown for (1) a smooth interface and (2) a discrete edge crack. The corresponding configurational forces are computed for different values of the intrinsic length parameter. It is concluded that the convergence of the computed configurational forces with mesh refinement depends strongly on this parameter value. Moreover, the convergence behavior for the limit situation of rate-independent plasticity is unaffected by the relaxation time parameter.
An Excel sheet for inferring children's number-knower levels from give-N data.
Negen, James; Sarnecka, Barbara W; Lee, Michael D
2012-03-01
Number-knower levels are a series of stages of number concept development in early childhood. A child's number-knower level is typically assessed using the give-N task. Although the task procedure has been highly refined, the standard ways of analyzing give-N data remain somewhat crude. Lee and Sarnecka (Cogn Sci 34:51-67, 2010, in press) have developed a Bayesian model of children's performance on the give-N task that allows knower level to be inferred in a more principled way. However, this model requires considerable expertise and computational effort to implement and apply to data. Here, we present an approximation to the model's inference that can be computed with Microsoft Excel. We demonstrate the accuracy of the approximation and provide instructions for its use. This makes the powerful inferential capabilities of the Bayesian model accessible to developmental researchers interested in estimating knower levels from give-N data.
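The flavor of the inference can be conveyed with a deliberately crude stand-in (far simpler than the Lee and Sarnecka model or its Excel approximation; the trials and response model are invented): score each candidate knower level by how well it predicts the give-N trials and keep the best-scoring one.

```python
import numpy as np

trials = [(1, 1), (2, 2), (3, 3), (4, 7), (5, 2), (6, 9)]  # (asked, given)

def log_like(level, trials, eps=0.05):
    """Assume a k-knower answers requests n <= k correctly and guesses
    uniformly from 1-10 beyond that; CP-knowers are correct for all n."""
    ll = 0.0
    for asked, given in trials:
        if asked <= level:
            p = 1 - eps if given == asked else eps / 9
        else:
            p = 1 / 10
        ll += np.log(p)
    return ll

scores = {k: log_like(k, trials) for k in range(1, 5)}
scores["CP"] = log_like(10, trials)
print("best-supported knower level:", max(scores, key=scores.get))  # -> 3
```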
Hierarchical Boltzmann simulations and model error estimation
NASA Astrophysics Data System (ADS)
Torrilhon, Manuel; Sarna, Neeraj
2017-08-01
A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement allows the result to be successively improved toward the complete Boltzmann solution. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept for such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also provides model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
Evaluating vortex generator jet experiments for turbulent flow separation control
NASA Astrophysics Data System (ADS)
von Stillfried, F.; Kékesi, T.; Wallin, S.; Johansson, A. V.
2011-12-01
Separating turbulent boundary layers can be energized by streamwise vortices from vortex generators (VGs) that increase the near-wall momentum as well as the overall mixing of the flow, so that flow separation can be delayed or even prevented. In general, two different types of VGs exist: passive vane VGs (VVGs) and active VG jets (VGJs). Even though VGs are already successfully used in engineering applications, it is still time-consuming and computationally expensive to include them in a numerical analysis. Fully resolved VGs in a computational mesh lead to a very high number of grid points and thus high computational costs. In addition, computational parameter studies for such flow control devices take much time to set up. Therefore, much of the research work is still carried out experimentally. KTH Stockholm is developing a novel VGJ model that makes it possible to include only the physical influence, in terms of the additional stresses that originate from the VGJs, without the need to locally refine the computational mesh. Such a modelling strategy enables fast VGJ parameter variations and makes optimization studies straightforward. To that end, VGJ experiments are evaluated in this contribution and the results are used for developing a statistical VGJ model.
Technique for identifying, tracing, or tracking objects in image data
Anderson, Robert J [Albuquerque, NM; Rothganger, Fredrick [Albuquerque, NM
2012-08-28
A technique for computer vision uses a polygon contour to trace an object. The technique includes rendering a polygon contour superimposed over a first frame of image data. The polygon contour is iteratively refined to more accurately trace the object within the first frame after each iteration. The refinement includes computing image energies along lengths of contour lines of the polygon contour and adjusting positions of the contour lines based at least in part on the image energies.
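A toy version of this refinement loop (the energy definition, the synthetic image, and the greedy per-vertex descent are invented for illustration and are not the patented method):

```python
import numpy as np

def energy(img, p):
    """Nearest-pixel lookup of an 'image energy' at point p = (x, y)."""
    i = int(np.clip(np.rint(p[1]), 0, img.shape[0] - 1))
    j = int(np.clip(np.rint(p[0]), 0, img.shape[1] - 1))
    return img[i, j]

def refine_contour(img, poly, step=1.0, iters=50):
    """Iteratively nudge each polygon vertex to reduce the image energy
    sampled along the contour, tightening the trace around the object."""
    poly = poly.astype(float).copy()
    moves = step * np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [0, 0]])
    for _ in range(iters):
        for k, p in enumerate(poly):
            cand = p + moves
            poly[k] = cand[np.argmin([energy(img, c) for c in cand])]
    return poly

# Energy is lowest on a circle of radius 20 centered at (32, 32):
yy, xx = np.mgrid[0:64, 0:64]
img = np.abs(np.hypot(xx - 32, yy - 32) - 20)
square = np.array([[5, 5], [58, 5], [58, 58], [5, 58]])  # initial contour
print(refine_contour(img, square).round(1))              # vertices near the circle
```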
Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling
NASA Astrophysics Data System (ADS)
Rastigejev, Y.
2011-12-01
Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for the numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding and removing finer levels of resolution in locations of fine-scale development and in locations of smooth solution behavior, respectively. The algorithm is based on the mathematically well-established wavelet theory, which allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global chemical transport models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
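The thresholding step at the heart of such wavelet adaptation can be sketched in a few lines (one Haar level applied to a synthetic sharp front; the actual WAMR machinery is multilevel and operates on irregular grids):

```python
import numpy as np

def haar_details(u):
    """One-level Haar detail coefficients of a 1-D signal (even length)."""
    return (u[0::2] - u[1::2]) / np.sqrt(2.0)

x = np.linspace(0.0, 1.0, 256)
u = np.tanh((x - 0.5) / 0.01)              # plume-edge-like sharp front
d = np.abs(haar_details(u))
keep_fine = d > 1e-3 * np.max(np.abs(u))   # threshold criterion
print(f"fine resolution retained on {keep_fine.sum()} of {d.size} cell pairs")
```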
Analysis of lithology: Vegetation mixes in multispectral images
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M.; Adams, J. D.
1982-01-01
Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
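The simple linear ("checkerboard") mixing model lends itself to a compact sketch (the end-member spectra are made up; real work would use laboratory spectra and constrained, non-negative fractions):

```python
import numpy as np

rock = np.array([0.30, 0.32, 0.35, 0.40, 0.45, 0.48])  # reflectance, 6 bands
veg = np.array([0.05, 0.08, 0.06, 0.45, 0.50, 0.25])
E = np.column_stack([rock, veg])            # end-member matrix

pixel = 0.7 * rock + 0.3 * veg              # synthetic mixed pixel
frac, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print("recovered fractions:", frac.round(3))  # ~[0.7, 0.3]
```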
NASA Technical Reports Server (NTRS)
Browder, Joan A.; May, L. Nelson, Jr.; Rosenthal, Alan; Baumann, Robert H.; Gosselink, James G.
1987-01-01
A stochastic spatial computer model addressing coastal resource problems in Louisiana is being refined and validated using thematic mapper (TM) imagery. The TM images of brackish marsh sites were processed and data were tabulated on spatial parameters from TM images of the salt marsh sites. The Fisheries Image Processing Systems (FIPS) was used to analyze the TM scene. Activities were concentrated on improving the structure of the model and developing a structure and methodology for calibrating the model with spatial-pattern data from the TM imagery.
Gasdynamic model of turbulent combustion in an explosion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, A.L.; Ferguson, R.E.; Chien, K.Y.
1994-08-31
Proposed here is a gasdynamic model of turbulent combustion in explosions. It is used to investigate turbulent mixing aspects of afterburning found in TNT charges detonated in air. Evolution of the turbulent velocity field was calculated by a high-order Godunov solution of the gasdynamic equations. Adaptive Mesh Refinement (AMR) was used to follow convective-mixing processes on the computational grid. Combustion was then taken into account by a simplified sub-grid model, demonstrating that it was controlled by turbulent mixing. The rate of fuel consumption decayed inversely with time, and was shown to be insensitive to grid resolution.
Mehl, Steffen W.; Hill, Mary C.
2006-01-01
This report documents the addition of shared node Local Grid Refinement (LGR) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference ground-water flow model. LGR provides the capability to simulate ground-water flow using one block-shaped higher-resolution local grid (a child model) within a coarser-grid parent model. LGR accomplishes this by iteratively coupling two separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundary. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. Traditional one-way coupled telescopic mesh refinement (TMR) methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled shared-node method of LGR provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions at the cost of an increase in computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR, evaluates LGR accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR.
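The iterative interface coupling can be illustrated with a 1-D toy (a drastic simplification of LGR's shared-node head and flux balancing: here a linear parent solve and child solve merely exchange the interface head and gradient until they agree):

```python
import numpy as np

def parent_head(flux, n=6, dx=0.1, left=10.0):
    # Fixed head at the left end, child-supplied gradient at the right;
    # for 1-D steady Laplace flow the head profile is simply linear.
    return left + flux * dx * np.arange(n)

def child_gradient(interface_head, right=5.0, length=0.5):
    # Child solves its subdomain (also linear here) and reports the head
    # gradient at the shared interface back to the parent.
    return (right - interface_head) / length

flux = 0.0                             # initial guess at interface flux
for it in range(100):
    head = parent_head(flux)[-1]       # parent's head at the interface
    new_flux = child_gradient(head)
    if abs(new_flux - flux) < 1e-12:
        break
    flux = 0.5 * (flux + new_flux)     # relaxed update for stability
print(f"outer iterations: {it}, interface head: {head:.3f}")  # -> 7.500
```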
Supersonic Retro-Propulsion Experimental Design for Computational Fluid Dynamics Model Validation
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Laws, Christopher T.; Kleb, W. L.; Rhode, Matthew N.; Spells, Courtney; McCrea, Andrew C.; Truble, Kerry A.; Schauerhamer, Daniel G.; Oberkampf, William L.
2011-01-01
The development of supersonic retro-propulsion, an enabling technology for heavy payload exploration missions to Mars, is the primary focus for the present paper. A new experimental model, intended to provide computational fluid dynamics model validation data, was recently designed for the Langley Research Center Unitary Plan Wind Tunnel Test Section 2. Pre-test computations were instrumental for sizing and refining the model, over the Mach number range of 2.4 to 4.6, such that tunnel blockage and internal flow separation issues would be minimized. A 5-in diameter 70-deg sphere-cone forebody, which accommodates up to four 4:1 area ratio nozzles, followed by a 10-in long cylindrical aftbody was developed for this study based on the computational results. The model was designed to allow for a large number of surface pressure measurements on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Some preliminary results and observations from the test are presented, although detailed analyses of the data and uncertainties are still ongoing.
Lefor, Alan T
2011-08-01
Oncology research has traditionally been conducted using techniques from the biological sciences. The new field of computational oncology has forged a new relationship between the physical sciences and oncology to further advance research. By applying physics and mathematics to oncologic problems, new insights will emerge into the pathogenesis and treatment of malignancies. One major area of investigation in computational oncology centers around the acquisition and analysis of data, using improved computing hardware and software. Large databases of cellular pathways are being analyzed to understand the interrelationship among complex biological processes. Computer-aided detection is being applied to the analysis of routine imaging data including mammography and chest imaging to improve the accuracy and detection rate for population screening. The second major area of investigation uses computers to construct sophisticated mathematical models of individual cancer cells as well as larger systems using partial differential equations. These models are further refined with clinically available information to more accurately reflect living systems. One of the major obstacles in the partnership between physical scientists and the oncology community is communications. Standard ways to convey information must be developed. Future progress in computational oncology will depend on close collaboration between clinicians and investigators to further the understanding of cancer using these new approaches.
Shen, Rong; Han, Wei; Fiorin, Giacomo; Islam, Shahidul M; Schulten, Klaus; Roux, Benoît
2015-10-01
The knowledge of multiple conformational states is a prerequisite to understand the function of membrane transport proteins. Unfortunately, the determination of detailed atomic structures for all these functionally important conformational states with conventional high-resolution approaches is often difficult and unsuccessful. In some cases, biophysical and biochemical approaches can provide important complementary structural information that can be exploited with the help of advanced computational methods to derive structural models of specific conformational states. In particular, functional and spectroscopic measurements in combination with site-directed mutations constitute one important source of information to obtain these mixed-resolution structural models. A very common problem with this strategy, however, is the difficulty to simultaneously integrate all the information from multiple independent experiments involving different mutations or chemical labels to derive a unique structural model consistent with the data. To resolve this issue, a novel restrained molecular dynamics structural refinement method is developed to simultaneously incorporate multiple experimentally determined constraints (e.g., engineered metal bridges or spin-labels), each treated as an individual molecular fragment with all atomic details. The internal structure of each of the molecular fragments is treated realistically, while there is no interaction between different molecular fragments to avoid unphysical steric clashes. The information from all the molecular fragments is exploited simultaneously to constrain the backbone to refine a three-dimensional model of the conformational state of the protein. The method is illustrated by refining the structure of the voltage-sensing domain (VSD) of the Kv1.2 potassium channel in the resting state and by exploring the distance histograms between spin-labels attached to T4 lysozyme. The resulting VSD structures are in good agreement with the consensus model of the resting state VSD and the spin-spin distance histograms from ESR/DEER experiments on T4 lysozyme are accurately reproduced.
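A sketch of the kind of experimental restraint such refinements add to the simulation (an illustration, not the authors' energy function; the positions, target distance, and force constant are hypothetical):

```python
import numpy as np

def restraint_energy(r_model, r_target, k=10.0):
    """Harmonic penalty coupling a model label-label distance to a
    target distance taken from, e.g., a DEER/ESR histogram (k in
    kcal/mol/A^2)."""
    return 0.5 * k * (r_model - r_target) ** 2

site_a = np.array([12.1, 3.4, -7.8])   # hypothetical label positions (A)
site_b = np.array([28.6, 9.0, -2.2])
r = np.linalg.norm(site_a - site_b)
print(f"model distance {r:.1f} A, penalty {restraint_energy(r, 20.0):.1f} kcal/mol")
```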
Ali, Syed Mashhood; Shamim, Shazia
2015-07-01
Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine the atom-accurate structure of the inclusion complexes. 1H-NMR chemical shift change data of β-CD cavity protons in the presence of citalopram confirmed the formation of 1 : 1 inclusion complexes. The ROESY spectrum confirmed the presence of an aromatic ring in the β-CD cavity, but whether one or both rings were included was not clear. Molecular mechanics and molecular dynamics calculations showed entry of the fluoro-ring from the wider side of the β-CD cavity as the most favored mode of inclusion. Minimum-energy computational models were analyzed for the accuracy of their atomic coordinates by comparison of calculated and experimental intermolecular ROESY peak intensities, which were not found in agreement. Several least-energy computational models were refined and analyzed until calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom accuracy, and quantitative ROESY analysis is a promising method. Moreover, the study also validates that the quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios instead of absolute intensities are used. Copyright © 2015 John Wiley & Sons, Ltd.
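Because cross-peak intensities fall off roughly as r^-6, the ratio check described above can be sketched very simply (the proton pairs, distances, and experimental ratio are invented):

```python
model_r = {"H3-Ar1": 2.6, "H5-Ar1": 3.4}   # interproton distances (A) from a model
calc = {pair: r ** -6 for pair, r in model_r.items()}
calc_ratio = calc["H3-Ar1"] / calc["H5-Ar1"]

exp_ratio = 4.2                            # measured peak-intensity ratio
print(f"calculated ratio {calc_ratio:.1f} vs experimental {exp_ratio:.1f}")
# agreement within the experimental error would support the model;
# otherwise the model's atomic coordinates need further refinement
```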
Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby; ...
2016-10-22
Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations, precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby
Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations, precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.
Real-space refinement in PHENIX for cryo-EM and crystallography
Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.; ...
2018-06-01
This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.
NASA Technical Reports Server (NTRS)
Vemaganti, Gururaja R.
1994-01-01
This report presents computations for the Type 4 shock-shock interference flow under laminar and turbulent conditions using unstructured grids. Mesh adaptation was accomplished by remeshing, refinement, and mesh movement. Two two-equation turbulence models were used to analyze turbulent flows. The mean flow governing equations and the turbulence governing equations are solved in a coupled manner. The solution algorithm and the details pertaining to its implementation on unstructured grids are described. Computations were performed at two different freestream Reynolds numbers at a freestream Mach number of 11. Effects of the variation in the impinging shock location are studied. The comparison of the results in terms of wall heat flux and wall pressure distributions is presented.
Moghadasi, Mohammad; Kozakov, Dima; Mamonov, Artem B.; Vakili, Pirooz; Vajda, Sandor; Paschalidis, Ioannis Ch.
2013-01-01
We introduce a message-passing algorithm to solve the Side Chain Positioning (SCP) problem. SCP is a crucial component of protein docking refinement, which is a key step of an important class of problems in computational structural biology called protein docking. We model SCP as a combinatorial optimization problem and formulate it as a Maximum Weighted Independent Set (MWIS) problem. We then employ a modified and convergent belief-propagation algorithm to solve a relaxation of MWIS and develop randomized estimation heuristics that use the relaxed solution to obtain an effective MWIS feasible solution. Using a benchmark set of protein complexes we demonstrate that our approach leads to more accurate docking predictions compared to a baseline algorithm that does not solve the SCP. PMID:23515575
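To make the MWIS formulation above concrete, here is a minimal Python sketch that casts rotamer choices as weighted nodes with conflict edges; it substitutes a greedy heuristic for the paper's convergent belief-propagation solver, and all names and weights are illustrative.

# Minimal sketch: side-chain positioning as Maximum Weighted
# Independent Set (MWIS), solved here with a greedy stand-in for the
# paper's belief-propagation relaxation.
def greedy_mwis(weights, conflicts):
    """weights: {node: float}; conflicts: set of frozenset({u, v}).

    Each node is one candidate rotamer for one residue; an edge joins
    rotamers that clash sterically or belong to the same residue.
    """
    chosen, banned = [], set()
    # Visit nodes in order of decreasing weight (self-energy score).
    for node in sorted(weights, key=weights.get, reverse=True):
        if node in banned:
            continue
        chosen.append(node)
        # Ban every neighbor of the chosen node.
        for edge in conflicts:
            if node in edge:
                banned |= edge - {node}
    return chosen

# Toy instance: residue A with rotamers a1/a2, residue B with b1/b2.
weights = {"a1": 2.0, "a2": 1.5, "b1": 1.0, "b2": 0.8}
conflicts = {frozenset({"a1", "a2"}),   # same residue: pick one
             frozenset({"b1", "b2"}),
             frozenset({"a1", "b1"})}   # steric clash
print(greedy_mwis(weights, conflicts))  # ['a1', 'b2']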
Understanding Scientific Ideas: An Honors Course.
ERIC Educational Resources Information Center
Capps, Joan; Schueler, Paul
At Raritan Valley Community College (RVCC) in New Jersey, an honors philosophy course was developed which taught mathematics and science concepts independent of computational skill. The course required that students complete a weekly writing assignment designed as a continuous refinement of logical reasoning development. This refinement was…
Mesh quality control for multiply-refined tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1994-01-01
A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.
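For readers unfamiliar with mesh quality metrics, the following minimal sketch (Python/NumPy; the paper's own criteria are not reproduced here) evaluates one standard measure, the normalized shape metric, which is 1 for a regular tetrahedron and tends toward 0 for the sliver cells that repeated refinement can produce.

import numpy as np
from itertools import combinations

def tet_quality(p0, p1, p2, p3):
    """Normalized shape metric Q = 6*sqrt(2)*V / l_rms**3."""
    pts = np.array([p0, p1, p2, p3], dtype=float)
    # Volume from the scalar triple product of the edge vectors at p0.
    v = abs(np.linalg.det(pts[1:] - pts[0])) / 6.0
    # Root-mean-square length over the six edges.
    l_rms = np.sqrt(np.mean([np.sum((pts[i] - pts[j]) ** 2)
                             for i, j in combinations(range(4), 2)]))
    return 6.0 * np.sqrt(2.0) * v / l_rms ** 3

# Regular tetrahedron inscribed in a cube -> quality 1.0
print(tet_quality((1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)))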
Robust stereo matching with trinary cross color census and triple image-based refinements
NASA Astrophysics Data System (ADS)
Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr
2017-12-01
For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching which can precisely estimate the depth map from two separated cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which can help to achieve accurate disparity raw matching cost with low computational cost. The two-pass cost aggregation (TPCA) is formed to compute the aggregation cost, then the disparity map can be obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further enhance the accuracy performance, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, the image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, the image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
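A minimal sketch of the census idea underlying the TCC transform, assuming a plain grayscale 3x3 trinary census with an illustrative deadzone threshold (the paper's cross color variant and aggregation stages are not reproduced):

import numpy as np

def trinary_census(img, eps=4):
    """3x3 trinary census: each neighbor coded -1/0/+1 vs the centre."""
    img = img.astype(np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2, 8), dtype=np.int8)
    centre = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for k, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        diff = nb - centre
        codes[..., k] = np.where(diff > eps, 1,
                                 np.where(diff < -eps, -1, 0))
    return codes

def raw_cost(code_left, code_right, d):
    """Mismatch count at disparity d (a generalized Hamming distance)."""
    if d == 0:
        return np.sum(code_left != code_right, axis=-1)
    return np.sum(code_left[:, d:] != code_right[:, :-d], axis=-1)

Because only the sign of thresholded differences is kept, the cost is robust to radiometric differences between the two cameras, which is the usual motivation for census-style matching.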
Deformable complex network for refining low-resolution X-ray structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu
2015-10-27
A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it constantly leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structural determination.
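A minimal sketch of a DEN-style restraint term as described above, with illustrative parameter names; the published update schedule for the reference distances differs in detail:

import numpy as np

def den_energy(coords, pairs, d_ref, weight=1.0):
    """coords: (N,3) model; pairs: (M,2) atom indices; d_ref: (M,)."""
    d = np.linalg.norm(coords[pairs[:, 0]] - coords[pairs[:, 1]], axis=1)
    return weight * np.sum((d - d_ref) ** 2)

def update_references(d_ref, d_model, d_start, gamma=0.5, kappa=0.1):
    """Deform each reference toward a gamma-weighted blend of the
    current model distance and the original starting distance, so the
    network guides rather than freezes the refinement (hedged: the
    published DEN update differs in detail)."""
    target = gamma * d_model + (1.0 - gamma) * d_start
    return (1.0 - kappa) * d_ref + kappa * target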
NASA Technical Reports Server (NTRS)
Gullbrand, Jessica
2003-01-01
In this paper, turbulence-closure models are evaluated using the 'true' LES approach in turbulent channel flow. The study is an extension of the work presented by Gullbrand (2001), where fourth-order commutative filter functions are applied in three dimensions in a fourth-order finite-difference code. The true LES solution is the grid-independent solution to the filtered governing equations. The solution is obtained by keeping the filter width constant while the computational grid is refined. As the grid is refined, the solution converges towards the true LES solution. The true LES solution will depend on the filter width used, but will be independent of the grid resolution. In traditional LES, because the filter is implicit and directly connected to the grid spacing, the solution converges towards a direct numerical simulation (DNS) as the grid is refined, and not towards the solution of the filtered Navier-Stokes equations. The effect of turbulence-closure models is therefore difficult to determine in traditional LES because, as the grid is refined, more turbulence length scales are resolved and less influence from the models is expected. In contrast, in the true LES formulation, the explicit filter eliminates all scales that are smaller than the filter cutoff, regardless of the grid resolution. This ensures that the resolved length-scales do not vary as the grid resolution is changed. In true LES, the cell size must be smaller than or equal to the cutoff length scale of the filter function. The turbulence-closure models investigated are the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the dynamic reconstruction model (DRM). These turbulence models were previously studied using two-dimensional explicit filtering in turbulent channel flow by Gullbrand & Chow (2002). The DSM by Germano et al. (1991) is used as the USFS model in all the simulations. This enables evaluation of different reconstruction models for the RSFS stresses. The DMM consists of the scale-similarity model (SSM) by Bardina et al. (1983), which is an RSFS model, in linear combination with the DSM. In the DRM, the RSFS stresses are modeled by using an estimate of the unfiltered velocity in the unclosed term, while the USFS stresses are modeled by the DSM. The DSM and the DMM are two commonly used turbulence-closure models, while the DRM is a more recent model.
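A minimal sketch of the explicit-filtering idea, assuming a simple 1D box filter: the filter width is held fixed in physical units, so refining the grid only changes how many points the kernel spans, not the resolved solution.

import numpy as np

def box_filter(u, dx, filter_width):
    """Explicit box filter with width fixed in physical units
    (edge handling simplified; periodic wrap is ignored for brevity)."""
    n = max(1, int(round(filter_width / dx)))  # points per filter width
    kernel = np.ones(n) / n
    return np.convolve(u, kernel, mode="same")

x_coarse = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
x_fine = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
width = 2.0 * np.pi / 16                       # fixed filter width

u_c = box_filter(np.sin(8 * x_coarse), x_coarse[1], width)   # 4-pt kernel
u_f = box_filter(np.sin(8 * x_fine), x_fine[1], width)       # 16-pt kernel
# Away from the edges, u_f sampled every 4th point approximates u_c:
# the filtered solution is set by the filter, not the grid.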
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
NASA Astrophysics Data System (ADS)
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
From deep TLS validation to ensembles of atomic models built from elemental motions
Urzhumtsev, Alexandre; Afonine, Pavel V.; Van Benschoten, Andrew H.; ...
2015-07-28
The translation–libration–screw model first introduced by Cruickshank, Schomaker and Trueblood describes the concerted motions of atomic groups. Using TLS models can improve the agreement between calculated and experimental diffraction data. Because the T, L and S matrices describe a combination of atomic vibrations and librations, TLS models can also potentially shed light on molecular mechanisms involving correlated motions. However, this use of TLS models in mechanistic studies is hampered by the difficulties in translating the results of refinement into molecular movement or a structural ensemble. To convert the matrices into a constituent molecular movement, the matrix elements must satisfy several conditions. Refining the T, L and S matrix elements as independent parameters without taking these conditions into account may result in matrices that do not represent concerted molecular movements. Here, a mathematical framework and the computational tools to analyze TLS matrices, resulting in either explicit decomposition into descriptions of the underlying motions or a report of broken conditions, are described. The description of valid underlying motions can then be output as a structural ensemble. All methods are implemented as part of the PHENIX project.
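A minimal sketch of how a TLS model implies per-atom anisotropic displacements, U_i = T + A_i L A_i^T + A_i S + S^T A_i^T; sign and axis conventions vary between programs, so this shows only the structure of the model:

import numpy as np

def u_from_tls(T, L, S, r, origin=(0.0, 0.0, 0.0)):
    """Anisotropic displacement tensor implied by TLS at position r.
    T and L are symmetric 3x3 matrices, S a general 3x3 matrix; the
    returned U is symmetric by construction.  Conventions (signs, trace
    constraint on S) follow one common form for illustration only."""
    x, y, z = np.asarray(r, float) - np.asarray(origin, float)
    A = np.array([[0.0,   z,  -y],
                  [-z,  0.0,   x],
                  [ y,   -x, 0.0]])
    return T + A @ L @ A.T + A @ S + S.T @ A.T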
A Computational Study of an Oscillating VR-12 Airfoil with a Gurney Flap
NASA Technical Reports Server (NTRS)
Rhee, Myung
2004-01-01
Computations of the flow over an oscillating airfoil with a Gurney flap are performed using a Reynolds-Averaged Navier-Stokes code and compared with recent experimental data. The experimental results have been generated for different sizes of the Gurney flap; the computations focus mainly on one configuration. The baseline airfoil without a Gurney flap is computed and compared with the experiments in both steady and unsteady cases for the purpose of initial testing of the code performance. The computations are carried out with different turbulence models. Effects of grid refinement are also examined in steady and unsteady cases, in addition to the assessment of solver effects. The comparisons of steady lift and drag computations indicate that the code is reasonably accurate in attached flow under steady conditions but largely overpredicts the lift and underpredicts the drag in higher-angle steady flow.
On macromolecular refinement at subatomic resolution with interatomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-01
Modelling deformation electron density using interatomic scatterers is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
Natural flow and water consumption in the Milk River basin, Montana and Alberta, Canada
Thompson, R.E.
1986-01-01
A study was conducted to determine the differences between natural and nonnatural Milk River streamflow, to delineate and quantify the types and effects of water consumption on streamflow, and to refine the current computation procedure into one which computes and apportions natural flow. Water consumption consists principally of irrigated agriculture, municipal use, and evapotranspiration. Mean daily water consumption by irrigation ranged from 10 cu ft/sec to 26 cu ft/sec in the Canada part and from 6 cu ft/sec to 41 cu ft/sec in the US part. Two Canadian municipalities consume about 320 acre-ft and one US municipality consumes about 20 acre-ft yearly. Evaporation from the water surface comprises 80% to 90% of the flow reduction in the Milk River attributed to total evapotranspiration. The current water-budget approach for computing natural flow of the Milk River where it reenters the US was refined into an interim procedure which includes allowances for man-induced consumption and a method for apportioning computed natural flow between the US and Canada. The refined procedure is considered interim because further study of flow routing, tributary inflow, and man-induced consumption is needed before a more accurate procedure for computing natural flow can be developed.
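A minimal sketch of the water-budget arithmetic involved, with illustrative figures loosely based on the abstract (all names and values are placeholders, not the report's procedure):

def natural_flow(measured_cfs, irrigation_cfs, municipal_cfs, evap_cfs):
    """Back out natural flow by adding man-induced depletions back to
    the measured flow; all terms in cubic feet per second for the same
    reach and day (illustrative only)."""
    return measured_cfs + irrigation_cfs + municipal_cfs + evap_cfs

# e.g. 150 cfs measured, 26 cfs irrigation depletion, ~0.4 cfs
# municipal use, 12 cfs net evapotranspiration -> ~188 cfs natural flow
print(natural_flow(150.0, 26.0, 0.4, 12.0))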
Brunger, Axel T; Das, Debanu; Deacon, Ashley M; Grant, Joanna; Terwilliger, Thomas C; Read, Randy J; Adams, Paul D; Levitt, Michael; Schröder, Gunnar F
2012-04-01
Phasing by molecular replacement remains difficult for targets that are far from the search model or in situations where the crystal diffracts only weakly or to low resolution. Here, the process of determining and refining the structure of Cgl1109, a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum, at ∼3 Å resolution is described using a combination of homology modeling with MODELLER, molecular-replacement phasing with Phaser, deformable elastic network (DEN) refinement and automated model building using AutoBuild in a semi-automated fashion, followed by final refinement cycles with phenix.refine and Coot. This difficult molecular-replacement case illustrates the power of including DEN restraints derived from a starting model to guide the movements of the model during refinement. The resulting improved model phases provide better starting points for automated model building and produce more significant difference peaks in anomalous difference Fourier maps to locate anomalous scatterers than does standard refinement. This example also illustrates a current limitation of automated procedures that require manual adjustment of local sequence misalignments between the homology model and the target sequence.
40 CFR 80.1667 - Attest engagement requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... subject to national standards and Small Refiner and Small Volume Refinery Status. (1) If the refiner... a written representation from the company representative stating the refinery produces gasoline from crude oil. (2) Obtain the annual average sulfur level from paragraph (b)(3) of this section. (3) Compute...
Sensory Information Processing and Symbolic Computation
1973-12-31
A problem that plagues all image deblurring methods when working with high signal-to-noise ratios is that of a ringing or ghost image phenomenon which surrounds high... [Figure residue from the report: Figure 11, "The Impulse Response of an All-Pass Random Phase Filter"; Figure 12(a), "Unsmoothed Log Spectra of the Sentence 'The pipe began to...'"] ...of automatic deblurring of images, linear predictive coding of speech and the refinement and application of mathematical models of human vision and...
Testing MODFLOW-LGR for simulating flow around buried Quaternary valleys - synthetic test cases
NASA Astrophysics Data System (ADS)
Vilhelmsen, T. N.; Christensen, S.
2009-12-01
In this study the Local Grid Refinement (LGR) method developed for MODFLOW-2005 (Mehl and Hill, 2005) is utilized to describe groundwater flow in areas containing buried Quaternary valley structures. The tests are conducted as comparative analysis between simulations run with a globally refined model, a locally refined model, and a globally coarse model, respectively. The models vary from simple one layer models to more complex ones with up to 25 model layers. The comparisons of accuracy are conducted within the locally refined area and focus on water budgets, simulated heads, and simulated particle traces. Simulations made with the globally refined model are used as reference (regarded as “true” values). As expected, for all test cases the application of local grid refinement resulted in more accurate results than when using the globally coarse model. A significant advantage of utilizing MODFLOW-LGR was that it allows increased numbers of model layers to better resolve complex geology within local areas. This resulted in more accurate simulations than when using either a globally coarse model grid or a locally refined model with lower geological resolution. Improved accuracy in the latter case could not be expected beforehand because difference in geological resolution between the coarse parent model and the refined child model contradicts the assumptions of the Darcy weighted interpolation used in MODFLOW-LGR. With respect to model runtimes, it was sometimes found that the runtime for the locally refined model is much longer than for the globally refined model. This was the case even when the closure criteria were relaxed compared to the globally refined model. These results are contradictory to those presented by Mehl and Hill (2005). Furthermore, in the complex cases it took some testing (model runs) to identify the closure criteria and the damping factor that secured convergence, accurate solutions, and reasonable runtimes. For our cases this is judged to be a serious disadvantage of applying MODFLOW-LGR. Another disadvantage in the studied cases was that the MODFLOW-LGR results proved to be somewhat dependent on the correction method used at the parent-child model interface. This indicates that when applying MODFLOW-LGR there is a need for thorough and case-specific considerations regarding choice of correction method. References: Mehl, S. and Hill, M. C. (2005). "MODFLOW-2005, the U.S. Geological Survey modular ground-water model - Documentation of shared node Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package." U.S. Geological Survey Techniques and Methods 6-A12.
Determining crystal structures through crowdsourcing and coursework
Horowitz, Scott; Koepnick, Brian; Martin, Raoul; Tymieniecki, Agnes; Winburn, Amanda A.; Cooper, Seth; Flatten, Jeff; Rogawski, David S.; Koropatkin, Nicole M.; Hailu, Tsinatkeab T.; Jain, Neha; Koldewey, Philipp; Ahlstrom, Logan S.; Chapman, Matthew R.; Sikkema, Andrew P.; Skiba, Meredith A.; Maloney, Finn P.; Beinlich, Felix R. M.; Caglar, Ahmet; Coral, Alan; Jensen, Alice Elizabeth; Lubow, Allen; Boitano, Amanda; Lisle, Amy Elizabeth; Maxwell, Andrew T.; Failer, Barb; Kaszubowski, Bartosz; Hrytsiv, Bohdan; Vincenzo, Brancaccio; de Melo Cruz, Breno Renan; McManus, Brian Joseph; Kestemont, Bruno; Vardeman, Carl; Comisky, Casey; Neilson, Catherine; Landers, Catherine R.; Ince, Christopher; Buske, Daniel Jon; Totonjian, Daniel; Copeland, David Marshall; Murray, David; Jagieła, Dawid; Janz, Dietmar; Wheeler, Douglas C.; Cali, Elie; Croze, Emmanuel; Rezae, Farah; Martin, Floyd Orville; Beecher, Gil; de Jong, Guido Alexander; Ykman, Guy; Feldmann, Harald; Chan, Hugo Paul Perez; Kovanecz, Istvan; Vasilchenko, Ivan; Connellan, James C.; Borman, Jami Lynne; Norrgard, Jane; Kanfer, Jebbie; Canfield, Jeffrey M.; Slone, Jesse David; Oh, Jimmy; Mitchell, Joanne; Bishop, John; Kroeger, John Douglas; Schinkler, Jonas; McLaughlin, Joseph; Brownlee, June M.; Bell, Justin; Fellbaum, Karl Willem; Harper, Kathleen; Abbey, Kirk J.; Isaksson, Lennart E.; Wei, Linda; Cummins, Lisa N.; Miller, Lori Anne; Bain, Lyn; Carpenter, Lynn; Desnouck, Maarten; Sharma, Manasa G.; Belcastro, Marcus; Szew, Martin; Szew, Martin; Britton, Matthew; Gaebel, Matthias; Power, Max; Cassidy, Michael; Pfützenreuter, Michael; Minett, Michele; Wesselingh, Michiel; Yi, Minjune; Cameron, Neil Haydn Tormey; Bolibruch, Nicholas I.; Benevides, Noah; Kathleen Kerr, Norah; Barlow, Nova; Crevits, Nykole Krystyne; Dunn, Paul; Roque, Paulo Sergio Silveira Belo Nascimento; Riber, Peter; Pikkanen, Petri; Shehzad, Raafay; Viosca, Randy; James Fraser, Robert; Leduc, Robert; Madala, Roman; Shnider, Scott; de Boisblanc, Sharon; Butkovich, Slava; Bliven, Spencer; Hettler, Stephen; Telehany, Stephen; Schwegmann, Steven A.; Parkes, Steven; Kleinfelter, Susan C.; Michael Holst, Sven; van der Laan, T. J. A.; Bausewein, Thomas; Simon, Vera; Pulley, Warwick; Hull, William; Kim, Annes Yukyung; Lawton, Alexis; Ruesch, Amanda; Sundar, Anjali; Lawrence, Anna-Lisa; Afrin, Antara; Maheshwer, Bhargavi; Turfe, Bilal; Huebner, Christian; Killeen, Courtney Elizabeth; Antebi-Lerrman, Dalia; Luan, Danny; Wolfe, Derek; Pham, Duc; Michewicz, Elaina; Hull, Elizabeth; Pardington, Emily; Galal, Galal Osama; Sun, Grace; Chen, Grace; Anderson, Halie E.; Chang, Jane; Hewlett, Jeffrey Thomas; Sterbenz, Jennifer; Lim, Jiho; Morof, Joshua; Lee, Junho; Inn, Juyoung Samuel; Hahm, Kaitlin; Roth, Kaitlin; Nair, Karun; Markin, Katherine; Schramm, Katie; Toni Eid, Kevin; Gam, Kristina; Murphy, Lisha; Yuan, Lucy; Kana, Lulia; Daboul, Lynn; Shammas, Mario Karam; Chason, Max; Sinan, Moaz; Andrew Tooley, Nicholas; Korakavi, Nisha; Comer, Patrick; Magur, Pragya; Savliwala, Quresh; Davison, Reid Michael; Sankaran, Roshun Rajiv; Lewe, Sam; Tamkus, Saule; Chen, Shirley; Harvey, Sho; Hwang, Sin Ye; Vatsia, Sohrab; Withrow, Stefan; Luther, Tahra K; Manett, Taylor; Johnson, Thomas James; Ryan Brash, Timothy; Kuhlman, Wyatt; Park, Yeonjung; Popović, Zoran; Baker, David; Khatib, Firas; Bardwell, James C. A.
2016-01-01
We show here that computer game players can build high-quality crystal structures. Introduction of a new feature into the computer game Foldit allows players to build and real-space refine structures into electron density maps. To assess the usefulness of this feature, we held a crystallographic model-building competition between trained crystallographers, undergraduate students, Foldit players and automatic model-building algorithms. After removal of disordered residues, a team of Foldit players achieved the most accurate structure. Analysing the target protein of the competition, YPL067C, uncovered a new family of histidine triad proteins apparently involved in the prevention of amyloid toxicity. From this study, we conclude that crystallographers can utilize crowdsourcing to interpret electron density information and to produce structure solutions of the highest quality. PMID:27633552
NASA Astrophysics Data System (ADS)
Erdt, Marius; Sakas, Georgios
2010-03-01
This work presents a novel approach for model based segmentation of the kidney in images acquired by Computed Tomography (CT). The developed computer aided segmentation system is expected to support computer aided diagnosis and operation planning. We have developed a deformable model based approach based on local shape constraints that prevents the model from deforming into neighboring structures while allowing the global shape to adapt freely to the data. Those local constraints are derived from the anatomical structure of the kidney and the presence and appearance of neighboring organs. The adaptation process is guided by a rule-based deformation logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our work flow consists of two steps: 1.) a user guided positioning and 2.) an automatic model adaptation using affine and free form deformation in order to robustly extract the kidney. In cases which show pronounced pathologies, the system also offers real time mesh editing tools for a quick refinement of the segmentation result. Evaluation results based on 30 clinical cases using CT data sets show an average dice correlation coefficient of 93% compared to the ground truth. The results are therefore in most cases comparable to manual delineation. Computation times of the automatic adaptation step are lower than 6 seconds which makes the proposed system suitable for an application in clinical practice.
Reconfigurable Model Execution in the OpenMDAO Framework
NASA Technical Reports Server (NTRS)
Hwang, John T.
2017-01-01
NASA's OpenMDAO framework facilitates constructing complex models and computing their derivatives for multidisciplinary design optimization. Decomposing a model into components that follow a prescribed interface enables OpenMDAO to assemble multidisciplinary derivatives from the component derivatives using what amounts to the adjoint method, direct method, chain rule, global sensitivity equations, or any combination thereof, using the MAUD architecture. OpenMDAO also handles the distribution of processors among the disciplines by hierarchically grouping the components, and it automates the data transfer between components that are on different processors. These features have made OpenMDAO useful for applications in aircraft design, satellite design, wind turbine design, and aircraft engine design, among others. This paper presents new algorithms for OpenMDAO that enable reconfigurable model execution. This concept refers to dynamically changing, during execution, one or more of: the variable sizes, solution algorithm, parallel load balancing, or set of variables, i.e., adding and removing components, perhaps to switch to a higher-fidelity sub-model. Any component can reconfigure at any point, even when running in parallel with other components, and the reconfiguration algorithm presented here performs the synchronized updates to all other components that are affected. A reconfigurable software framework for multidisciplinary design optimization enables new adaptive solvers, adaptive parallelization, and new applications such as gradient-based optimization with overset flow solvers and adaptive mesh refinement. Benchmarking results demonstrate the time savings for reconfiguration compared to setting up the model again from scratch, which can be significant in large-scale problems. Additionally, the new reconfigurability feature is applied to a mission profile optimization problem for commercial aircraft where both the parametrization of the mission profile and the time discretization are adaptively refined, resulting in computational savings of roughly 10% and the elimination of oscillations in the optimized altitude profile.
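A minimal sketch of the component interface OpenMDAO prescribes (standard public API; the paper's reconfigurability additions are not shown):

import openmdao.api as om

class Parabola(om.ExplicitComponent):
    """One discipline: f(x) = (x - 3)^2, with its analytic partial."""
    def setup(self):
        self.add_input("x", val=0.0)
        self.add_output("f", val=0.0)
        self.declare_partials("f", "x")

    def compute(self, inputs, outputs):
        outputs["f"] = (inputs["x"] - 3.0) ** 2

    def compute_partials(self, inputs, partials):
        partials["f", "x"] = 2.0 * (inputs["x"] - 3.0)

prob = om.Problem()
prob.model.add_subsystem("comp", Parabola(), promotes=["*"])
prob.setup()
prob.set_val("x", 1.0)
prob.run_model()
print(prob.get_val("f"))   # [4.]

Because every component declares its inputs, outputs, and partials through this interface, the framework can assemble total derivatives across the whole model, which is the property the reconfigurability algorithms build on.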
Improving collaborative learning in online software engineering education
NASA Astrophysics Data System (ADS)
Neill, Colin J.; DeFranco, Joanna F.; Sangwan, Raghvinder S.
2017-11-01
Team projects are commonplace in software engineering education. They address a key educational objective, provide students critical experience relevant to their future careers, allow instructors to set problems of greater scale and complexity than could be tackled individually, and are a vehicle for socially constructed learning. While all student teams experience challenges, those in fully online programmes must also deal with remote working, asynchronous coordination, and computer-mediated communications all of which contribute to greater social distance between team members. We have developed a facilitation framework to aid team collaboration and have demonstrated its efficacy, in prior research, with respect to team performance and outcomes. Those studies indicated, however, that despite experiencing improved project outcomes, students working in effective software engineering teams did not experience significantly improved individual achievement. To address this deficiency we implemented theoretically grounded refinements to the collaboration model based upon peer-tutoring research. Our results indicate a modest, but statistically significant (p = .08), improvement in individual achievement using this refined model.
Jiang, Yang; Zhang, Haiyang; Feng, Wei; Tan, Tianwei
2015-12-28
Metal ions play an important role in the catalysis of metalloenzymes. To investigate metalloenzymes via molecular modeling, a set of accurate force field parameters for metal ions is highly imperative. To extend its application range and improve the performance, the dummy atom model of metal ions was refined through a simple parameter screening strategy using the Mg(2+) ion as an example. Using the AMBER ff03 force field with the TIP3P model, the refined model accurately reproduced the experimental geometric and thermodynamic properties of Mg(2+). Compared with point charge models and previous dummy atom models, the refined dummy atom model yields an enhanced performance for producing reliable ATP/GTP-Mg(2+)-protein conformations in three metalloenzyme systems with single or double metal centers. Similar to other unbounded models, the refined model failed to reproduce the Mg-Mg distance and favored a monodentate binding of carboxylate groups, and these drawbacks needed to be considered with care. The outperformance of the refined model is mainly attributed to the use of a revised (more accurate) experimental solvation free energy and a suitable free energy correction protocol. This work provides a parameter screening strategy that can be readily applied to refine the dummy atom models for metal ions.
A conservation and biophysics guided stochastic approach to refining docked multimeric proteins.
Akbal-Delibas, Bahar; Haspel, Nurit
2013-01-01
We introduce a protein docking refinement method that accepts complexes consisting of any number of monomeric units. The method uses a scoring function based on a tight coupling between evolutionary conservation, geometry and physico-chemical interactions. Understanding the role of protein complexes in the basic biology of organisms heavily relies on the detection of protein complexes and their structures. Different computational docking methods are developed for this purpose, however, these methods are often not accurate and their results need to be further refined to improve the geometry and the energy of the resulting complexes. Also, despite the fact that complexes in nature often have more than two monomers, most docking methods focus on dimers since the computational complexity increases exponentially due to the addition of monomeric units. Our results show that the refinement scheme can efficiently handle complexes with more than two monomers by biasing the results towards complexes with native interactions, filtering out false positive results. Our refined complexes have better IRMSDs with respect to the known complexes and lower energies than those initial docked structures. Evolutionary conservation information allows us to bias our results towards possible functional interfaces, and the probabilistic selection scheme helps us to escape local energy minima. We aim to incorporate our refinement method in a larger framework which also enables docking of multimeric complexes given only monomeric structures.
Reconstruction of genome-scale human metabolic models using omics data.
Ryu, Jae Yong; Kim, Hyun Uk; Lee, Sang Yup
2015-08-01
The impact of genome-scale human metabolic models on human systems biology and medical sciences is becoming greater, thanks to increasing volumes of model building platforms and publicly available omics data. The genome-scale human metabolic models started with Recon 1 in 2007, and have since been used to describe metabolic phenotypes of healthy and diseased human tissues and cells, and to predict therapeutic targets. Here we review recent trends in genome-scale human metabolic modeling, including various generic and tissue/cell type-specific human metabolic models developed to date, and methods, databases and platforms used to construct them. For generic human metabolic models, we pay attention to Recon 2 and HMR 2.0 with emphasis on data sources used to construct them. Draft and high-quality tissue/cell type-specific human metabolic models have been generated using these generic human metabolic models. Integration of tissue/cell type-specific omics data with the generic human metabolic models is the key step, and we discuss omics data and their integration methods to achieve this task. The initial version of the tissue/cell type-specific human metabolic models can further be computationally refined through gap filling, reaction directionality assignment and the subcellular localization of metabolic reactions. We review relevant tools for this model refinement procedure as well. Finally, we suggest the direction of further studies on reconstructing an improved human metabolic model.
Using Small-Step Refinement for Algorithm Verification in Computer Science Education
ERIC Educational Resources Information Center
Simic, Danijela
2015-01-01
Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…
A finite element method to correct deformable image registration errors in low-contrast regions
NASA Astrophysics Data System (ADS)
Zhong, Hualiang; Kim, Jinkoo; Li, Haisen; Nurushev, Teamour; Movsas, Benjamin; Chetty, Indrin J.
2012-06-01
Image-guided adaptive radiotherapy requires deformable image registration to map radiation dose back and forth between images. The purpose of this study is to develop a novel method to improve the accuracy of an intensity-based image registration algorithm in low-contrast regions. A computational framework has been developed in this study to improve the quality of the ‘demons’ registration. For each voxel in the registration's target image, the standard deviation of image intensity in a neighborhood of this voxel was calculated. A mask for high-contrast regions was generated based on their standard deviations. In the masked regions, a tetrahedral mesh was refined recursively so that a sufficient number of tetrahedral nodes in these regions can be selected as driving nodes. An elastic system driven by the displacements of the selected nodes was formulated using a finite element method (FEM) and implemented on the refined mesh. The displacements of these driving nodes were generated with the ‘demons’ algorithm. The solution of the system was derived using a conjugated gradient method, and interpolated to generate a displacement vector field for the registered images. The FEM correction method was compared with the ‘demons’ algorithm on the computed tomography (CT) images of lung and prostate patients. The performance of the FEM correction relating to the ‘demons’ registration was analyzed based on the physical property of their deformation maps, and quantitatively evaluated through a benchmark model developed specifically for this study. Compared to the benchmark model, the ‘demons’ registration has the maximum error of 1.2 cm, which can be corrected by the FEM to 0.4 cm, and the average error of the ‘demons’ registration is reduced from 0.17 to 0.11 cm. For the CT images of lung and prostate patients, the deformation maps generated by the ‘demons’ algorithm were found unrealistic at several places. In these places, the displacement differences between the ‘demons’ registrations and their FEM corrections were found in the range of 0.4 and 1.1 cm. The mesh refinement and FEM simulation were implemented in a single thread application which requires about 45 min of computation time on a 2.6 GHz computer. This study has demonstrated that the FEM can be integrated with intensity-based image registration algorithms to improve their registration accuracy, especially in low-contrast regions.
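A minimal sketch of the high-contrast masking step described above, using the local intensity standard deviation computed from two uniform filters; the neighborhood size and threshold are illustrative:

import numpy as np
from scipy.ndimage import uniform_filter

def high_contrast_mask(image, size=5, threshold=30.0):
    """True where the local std dev of intensity exceeds the threshold,
    i.e. where intensity-based ('demons') displacements are trustworthy
    and FEM driving nodes can be selected."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return local_std > threshold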
NASA Astrophysics Data System (ADS)
Koehl, Patrice; Orland, Henri; Delarue, Marc
2011-08-01
We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein that accounts for vdW and electrostatics interactions and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of one hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains.
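A minimal sketch of the self-consistent mean-field iteration, assuming an illustrative dense energy layout with equal numbers of copies per residue and at least two residues; the solvation and PB terms are omitted:

import numpy as np

def scmf(E_pair, kT=0.593, iters=50):
    """E_pair[i][k, j, l]: energy of copy k of residue i against copy l
    of residue j (illustrative dense layout; entries for j == i unused)."""
    n_res = len(E_pair)
    w = [np.full(E.shape[0], 1.0 / E.shape[0]) for E in E_pair]
    for _ in range(iters):
        for i in range(n_res):
            # Mean-field energy of each copy of residue i against the
            # weighted copies of all other residues.
            e = sum(E_pair[i][:, j, :] @ w[j]
                    for j in range(n_res) if j != i)
            b = np.exp(-(e - e.min()) / kT)   # shifted for stability
            w[i] = b / b.sum()                # Boltzmann weights
    return w

After convergence, the predicted rotamer for residue i is simply np.argmax(w[i]), matching the "highest weight" rule described in the abstract.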
Variable-Complexity Multidisciplinary Optimization on Parallel Computers
NASA Technical Reports Server (NTRS)
Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.
1998-01-01
This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant included: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT; (2) use of parallel multipoint approximation methods for structural optimization of the HSCT; and (3) mathematical and algorithmic development, including support in the integration of parallel computation for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of complex aircraft configurations.
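A minimal sketch of one common variable-complexity device, a multiplicative correction fitted from a handful of high-fidelity samples; this is illustrative and not the grant's actual formulation:

import numpy as np

def fit_correction(x_hf, f_high, f_low_at_hf, deg=2):
    """Least-squares polynomial for the high/low fidelity ratio."""
    beta = np.polyfit(x_hf, f_high / f_low_at_hf, deg)
    return lambda x, f_lo: np.polyval(beta, x) * f_lo

# Cheap model evaluated everywhere, expensive model at 5 points only:
x_all = np.linspace(0.0, 1.0, 101)
f_lo = 1.0 + x_all                         # stand-in linear-theory model
x_hf = np.linspace(0.0, 1.0, 5)
f_hi = (1.0 + x_hf) * (1.1 + 0.2 * x_hf)   # stand-in Euler analyses
corrected = fit_correction(x_hf, f_hi, 1.0 + x_hf)(x_all, f_lo)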
Prospects for improving the representation of coastal and shelf seas in global ocean models
NASA Astrophysics Data System (ADS)
Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard
2017-02-01
Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ~8 % of regions < 500 m deep, but this increases to ~70 % for a 1/72° model, so resolving scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic nucleus for a European model of the ocean (NEMO) simulations; the latter includes tides and a k-ɛ vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models. The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a comparable computational cost to a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ~1.5 km would be routinely practical in about a decade given substantial effort on numerical and computational development. For complex Earth system models, this extends to about 2 decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
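A minimal sketch of the kind of simple cost model described, assuming cost scales with the cube of the horizontal refinement factor (two horizontal dimensions plus a shorter stable time step) and an illustrative fixed annual growth factor for the available machine:

import math

def years_until_same_share(refinement_factor, growth_per_year=1.78):
    """Years until a model `refinement_factor` times finer than the
    reference uses the same share of the machine (growth factor is an
    assumption chosen for illustration, not taken from the paper)."""
    cost_ratio = refinement_factor ** 3      # dx, dy and dt all scale
    return math.log(cost_ratio) / math.log(growth_per_year)

# 1/4 deg -> 1/72 deg is an 18x refinement; with the assumed growth
# rate this gives ~15 years, of the same order as the 2011 -> 2026
# estimate quoted in the abstract.
print(round(years_until_same_share(18.0), 1))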
Calculated and measured fields in superferric wiggler magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blum, E.B.; Solomon, L.
1995-02-01
Although Klaus Halbach is widely known and appreciated as the originator of the computer program POISSON for electromagnetic field calculation, Klaus has always believed that analytical methods can give much more insight into the performance of a magnet than numerical simulation. Analytical approximations readily show how the different aspects of a magnet's design such as pole dimensions, current, and coil configuration contribute to the performance. These methods yield accuracies of better than 10%. Analytical methods should therefore be used when conceptualizing a magnet design. Computer analysis can then be used for refinement. A simple model is presented for the peak on-axis field of an electro-magnetic wiggler with iron poles and superconducting coils. The model is applied to the radiator section of the superconducting wiggler for the BNL Harmonic Generation Free Electron Laser. The predictions of the model are compared to the measured field and the results from POISSON.
A local time stepping algorithm for GPU-accelerated 2D shallow water models
NASA Astrophysics Data System (ADS)
Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo
2018-01-01
In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
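A minimal sketch of the power-of-two level structure typical of LTS schemes (the paper's GPU-specific algorithm is not reproduced; `update` is a placeholder for a single-cell solver step):

import math

def lts_levels(dt_cells, dt_min):
    """Largest k with dt_min * 2**k <= the cell's own stable step."""
    return [max(0, math.floor(math.log2(dt / dt_min))) for dt in dt_cells]

def advance(cells, levels, dt_min, update):
    """One global step of length dt_min * 2**max(levels): a cell at
    level k is advanced every 2**k sub-steps with its own larger dt,
    so small cells no longer throttle the whole mesh."""
    n_sub = 2 ** max(levels)
    for sub in range(n_sub):
        for c, k in zip(cells, levels):
            if sub % (2 ** k) == 0:        # cell c is due this sub-step
                update(c, dt_min * 2 ** k)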
Development of structured ICD-10 and its application to computer-assisted ICD coding.
Imai, Takeshi; Kajino, Masayuki; Sato, Megumi; Ohe, Kazuhiko
2010-01-01
This paper presents: (1) a framework of formal representation of ICD10, which functions as a bridge between ontological information and natural language expressions; and (2) a methodology to use formally described ICD10 for computer-assisted ICD coding. First, we analyzed and structured the meanings of categories in 15 chapters of ICD10. Then we expanded the structured ICD10 (S-ICD10) by adding subordinate concepts and labels derived from Japanese Standard Disease Names. The information model to describe formal representation was refined repeatedly. The resultant model includes 74 types of semantic links. We also developed an ICD coding module based on S-ICD10 and a 'Coding Principle,' which achieved high accuracy (>70%) for four chapters. These results not only demonstrate the basic feasibility of our coding framework but might also inform the development of the information model for the formal description framework in the ICD11 revision.
Zhang, Yu; Prakash, Edmond C; Sung, Eric
2004-01-01
This paper presents a new physically-based 3D facial model based on anatomical knowledge which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators, and an underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied on the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement automatically adapts the local resolution depending on local deformation where potential inaccuracies are detected. The method, in effect, ensures the required speedup by concentrating computational time only where needed while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at a reduced computational cost.
Best Merge Region Growing Segmentation with Integrated Non-Adjacent Region Object Aggregation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Tarabalka, Yuliya; Montesano, Paul M.; Gofman, Emanuel
2012-01-01
Best merge region growing normally produces segmentations with closed connected region objects. Recognizing that spectrally similar objects often appear in spatially separate locations, we present an approach for tightly integrating best merge region growing with non-adjacent region object aggregation, which we call Hierarchical Segmentation or HSeg. However, the original implementation of non-adjacent region object aggregation in HSeg required excessive computing time even for moderately sized images because of the required intercomparison of each region with all other regions. This problem was previously addressed by a recursive approximation of HSeg, called RHSeg. In this paper we introduce a refined implementation of non-adjacent region object aggregation in HSeg that reduces the computational requirements of HSeg without resorting to the recursive approximation. In this refinement, HSeg's region inter-comparisons among non-adjacent regions are limited to regions of a dynamically determined minimum size. We show that this refined version of HSeg can process moderately sized images in about the same amount of time as RHSeg incorporating the original HSeg. Nonetheless, RHSeg is still required for processing very large images due to its lower computer memory requirements and amenability to parallel processing. We then note a limitation of RHSeg with the original HSeg for high spatial resolution images, and show how incorporating the refined HSeg into RHSeg overcomes this limitation. The quality of the image segmentations produced by the refined HSeg is then compared with other available best merge segmentation approaches. Finally, we comment on the unique nature of the hierarchical segmentations produced by HSeg.
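A minimal sketch of best-merge region growing with an HSeg-style minimum-size gate on non-adjacent comparisons; the data structures and scalar dissimilarity measure are illustrative:

import numpy as np

def best_merge(means, sizes, adjacency, n_merges, min_size=8):
    """means/sizes: {rid: value}; adjacency: {rid: set of adjacent rids}."""
    for _ in range(n_merges):
        best, pair = np.inf, None
        ids = list(means)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                adjacent = b in adjacency[a]
                both_big = sizes[a] >= min_size and sizes[b] >= min_size
                if not (adjacent or both_big):
                    continue          # gate: skip small non-adjacent pairs
                d = abs(means[a] - means[b])
                if d < best:
                    best, pair = d, (a, b)
        if pair is None:
            break
        a, b = pair
        # Merge b into a: area-weighted mean, union of neighbor sets.
        means[a] = (sizes[a] * means[a] + sizes[b] * means[b]) / (sizes[a] + sizes[b])
        sizes[a] += sizes[b]
        adjacency[a] = (adjacency[a] | adjacency.pop(b)) - {a, b}
        del means[b], sizes[b]
        for nbrs in adjacency.values():
            if b in nbrs:
                nbrs.discard(b)
                nbrs.add(a)
    return means, sizes

The size gate is the point: exhaustive all-pairs comparison is quadratic in the number of regions, so limiting non-adjacent comparisons to regions above a minimum size keeps the aggregation affordable without the recursive approximation.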
Modeling and docking antibody structures with Rosetta
Weitzner, Brian D.; Jeliazkov, Jeliazko R.; Lyskov, Sergey; Marze, Nicholas; Kuroda, Daisuke; Frick, Rahel; Adolf-Bryfogle, Jared; Biswas, Naireeta; Dunbrack, Roland L.; Gray, Jeffrey J.
2017-01-01
We describe Rosetta-based computational protocols for predicting the three-dimensional structure of an antibody from sequence (RosettaAntibody) and then docking the antibody to protein antigens (SnugDock). Antibody modeling leverages canonical loop conformations to graft large segments from experimentally-determined structures as well as (1) energetic calculations to minimize loops, (2) docking methodology to refine the VL–VH relative orientation, and (3) de novo prediction of the elusive complementarity determining region (CDR) H3 loop. To alleviate model uncertainty, antibody–antigen docking resamples CDR loop conformations and can use multiple models to represent an ensemble of conformations for the antibody, the antigen or both. These protocols can be run fully-automated via the ROSIE web server (http://rosie.rosettacommons.org/) or manually on a computer with user control of individual steps. For best results, the protocol requires roughly 1,000 CPU-hours for antibody modeling and 250 CPU-hours for antibody–antigen docking. Tasks can be completed in under a day by using public supercomputers. PMID:28125104
Evaluation of tropical channel refinement using MPAS-A aquaplanet simulations
Martini, Matus N.; Gustafson, Jr., William I.; O'Brien, Travis A.; ...
2015-09-13
Climate models with variable-resolution grids offer a computationally less expensive way to provide more detailed information at regional scales and increased accuracy for processes that cannot be resolved by a coarser grid. This study uses the Model for Prediction Across Scales–Atmosphere (MPAS-A), consisting of a nonhydrostatic dynamical core and a subset of Advanced Research Weather Research and Forecasting (ARW-WRF) model atmospheric physics that have been modified to include the Community Atmosphere Model version 5 (CAM5) cloud fraction parameterization, to investigate the potential benefits of using increased resolution in a tropical channel. The simulations are performed with an idealized aquaplanet configuration using two quasi-uniform grids, with 30 km and 240 km grid spacing, and two variable-resolution grids spanning the same grid spacing range; one with a narrow (20°S–20°N) and one with a wide (30°S–30°N) tropical channel refinement. Results show that increasing resolution in the tropics impacts both the tropical and extratropical circulation. Compared to the quasi-uniform coarse grid, the narrow-channel simulation exhibits stronger updrafts in the Ferrel cell as well as in the middle of the upward branch of the Hadley cell. The wider tropical channel has a closer correspondence to the 30 km quasi-uniform simulation. However, the total atmospheric poleward energy transports are similar in all simulations. The largest differences are in the low-level cloudiness. The refined channel simulations show improved tropical and extratropical precipitation relative to the global 240 km simulation when compared to the global 30 km simulation. All simulations have a single ITCZ. Furthermore, the relatively small differences in mean global and tropical precipitation rates among the simulations are a promising result, and the evidence points to the tropical channel being an effective method for avoiding the extraneous numerical artifacts seen in earlier studies that only refined a portion of the tropics.
Improving the accuracy of macromolecular structure refinement at 7 Å resolution.
Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F
2012-06-06
In X-ray crystallography, molecular replacement and subsequent refinement is challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free)-values. In contrast, DEN refinement improved even the most distant starting model as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution. Copyright © 2012 Elsevier Ltd. All rights reserved.
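Since the comparison above hinges on R and R(free), here is a small sketch of how these statistics are computed from observed and calculated structure-factor amplitudes; the linear scale factor and the random test data are illustrative assumptions:

```python
import numpy as np

def r_factor(f_obs, f_calc, mask=None):
    """Crystallographic R = sum(| |Fobs| - k|Fcalc| |) / sum(|Fobs|).
    With `mask` selecting the cross-validation reflections this gives
    R_free; with its complement, the working R. k is a least-squares scale."""
    if mask is not None:
        f_obs, f_calc = f_obs[mask], f_calc[mask]
    k = np.sum(f_obs * f_calc) / np.sum(f_calc**2)   # linear scale factor
    return np.sum(np.abs(f_obs - k * f_calc)) / np.sum(f_obs)

# Example: reserve ~5% of reflections as the free set.
rng = np.random.default_rng(0)
f_obs = rng.uniform(1, 100, 1000)
f_calc = f_obs * (1 + rng.normal(0, 0.2, 1000))
free = rng.random(1000) < 0.05
print(r_factor(f_obs, f_calc, ~free), r_factor(f_obs, f_calc, free))
```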
Chodkiewicz, Michał L; Migacz, Szymon; Rudnicki, Witold; Makal, Anna; Kalinowski, Jarosław A; Moriarty, Nigel W; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Adams, Paul D; Dominiak, Paulina Maria
2018-02-01
It has been recently established that the accuracy of structural parameters from X-ray refinement of crystal structures can be improved by using a bank of aspherical pseudoatoms instead of the classical spherical model of atomic form factors. This comes, however, at the cost of increased complexity of the underlying calculations. In order to facilitate the adoption of this more advanced electron density model by the broader community of crystallographers, a new software implementation called DiSCaMB, 'densities in structural chemistry and molecular biology', has been developed. It addresses the challenge of providing for high performance on modern computing architectures. With parallelization options for both multi-core processors and graphics processing units (using CUDA), the library features calculation of X-ray scattering factors and their derivatives with respect to structural parameters, gives access to intermediate steps of the scattering factor calculations (thus allowing for experimentation with modifications of the underlying electron density model), and provides tools for basic structural crystallographic operations. Permissively (MIT) licensed, DiSCaMB is an open-source C++ library that can be embedded in both academic and commercial tools for X-ray structure refinement.
Spherical Harmonic Decomposition of Gravitational Waves Across Mesh Refinement Boundaries
NASA Technical Reports Server (NTRS)
Fiske, David R.; Baker, John; van Meter, James R.; Centrella, Joan M.
2005-01-01
We evolve a linearized (Teukolsky) solution of the Einstein equations with a non-linear Einstein solver. Using this testbed, we are able to show that such gravitational waves, defined by the Weyl scalars in the Newman-Penrose formalism, propagate faithfully across mesh refinement boundaries, and use, for the first time to our knowledge, a novel algorithm due to Misner to compute spherical harmonic components of our waveforms. We show that the algorithm performs extremely well, even when the extraction sphere intersects refinement boundaries.
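As a sketch of the extraction step, the following computes spherical harmonic components of a field sampled on an extraction sphere by direct quadrature. It uses ordinary (spin-0) harmonics from scipy rather than the spin-weight -2 harmonics appropriate for the Weyl scalars, and plain midpoint quadrature rather than Misner's algorithm, so it is an illustrative stand-in only:

```python
import numpy as np
from scipy.special import sph_harm

def sph_harm_coeff(psi, l, m, n_theta=64, n_phi=128):
    """c_{lm} = integral of psi(theta, phi) * conj(Y_lm) over the sphere."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta        # polar
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi          # azimuth
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    Y = sph_harm(m, l, PH, TH)        # scipy order: (m, l, azimuth, polar)
    dOmega = (np.pi / n_theta) * (2 * np.pi / n_phi) * np.sin(TH)
    return np.sum(psi(TH, PH) * np.conj(Y) * dOmega)

# Sanity check: a pure Y_22 field returns c ~ 1 for (l, m) = (2, 2).
print(sph_harm_coeff(lambda t, p: sph_harm(2, 2, p, t), 2, 2))
```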
Refined numerical solution of the transonic flow past a wedge
NASA Technical Reports Server (NTRS)
Liang, S.-M.; Fung, K.-Y.
1985-01-01
A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.
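A one-dimensional stand-in for the truncation-error estimation idea: re-evaluate the same difference operator on a locally refined grid and use the two results, Richardson-style, to estimate the error of the coarse stencil. The function and step size below are arbitrary test choices:

```python
import numpy as np

def truncation_error_estimate(f, x, h, p=2):
    """Estimate the truncation error of a p-th order central difference by
    comparing the operator on grids of spacing h and h/2; for an O(h^p)
    scheme, coarse - fine ~ (1 - 2**-p) * tau_coarse."""
    coarse = (f(x + h) - f(x - h)) / (2 * h)
    fine = (f(x + h / 2) - f(x - h / 2)) / h
    return (coarse - fine) / (1 - 2.0 ** -p)

x, h = 1.0, 0.1
print(truncation_error_estimate(np.sin, x, h))   # estimated error
print(-(h ** 2 / 6) * np.cos(x))                 # leading-order exact error
```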
Designing effective animations for computer science instruction
NASA Astrophysics Data System (ADS)
Grillmeyer, Oliver
This study investigated the potential for animations of Scheme functions to help novice computer science students understand difficult programming concepts. These animations used an instructional framework inspired by theories of constructivism and knowledge integration. The framework had students make predictions, reflect, and specify examples to animate, promoting autonomous learning and more integrated knowledge. The framework used animated pivotal cases to help integrate disconnected ideas and restructure students' incomplete ideas by illustrating weaknesses in their existing models. The animations scaffolded learners, making the thought processes of experts more visible by modeling complex and tacit information. The animation design was guided by prior research and a methodology of design and refinement. Analysis of pilot studies led to the development of four design concerns to aid animation designers: clearly illustrate the mapping between objects in animations and the actual objects they represent, show causal connections between elements, draw attention to the salient features of the modeled system, and create animations that reduce complexity. Refined animations based on these design concerns were compared to computer-based tools, text-based instruction, and simpler animations that do not embody the design concerns. Four studies comprised this dissertation work. Two sets of animated presentations of list creation functions were compared to control groups. No significant differences were found in support of animations. Three different animated models of traces of recursive functions ranging from concrete to abstract representations were compared. No differences in learning gains were found among the three models in test performance. Three models of animations of applicative operators were compared with students using the replacement modeler and the Scheme interpreter. Significant differences were found favoring animations that addressed causality and salience in their design. Lastly, two binary tree search algorithm animations designed to reduce complexity were compared with hand-tracing of calls. Students made fewer mistakes in predicting the tree traversal when guided by the animations. However, the posttest findings were inconsistent. In summary, animations designed based on the design concerns did not consistently add value to instruction in the form investigated in this research.
A Parametric Study of Unsteady Rotor-Stator Interaction in a Simplified Francis Turbine
NASA Astrophysics Data System (ADS)
Wouden, Alex; Cimbala, John; Lewis, Bryan
2011-11-01
CFD analysis is becoming a critical stage in the design of hydroturbines. However, its capability to represent unsteady flow interactions between the rotor and stator (which requires a 360-degree, mesh-refined model of the turbine passage) is hindered. For CFD to become a more effective tool in predicting the performance of a hydroturbine, the key interactions between the rotor and stator need to be understood using current numerical methods. As a first step towards evaluating this unsteady behavior without the burden of a computationally expensive domain, the stator and Francis-type rotor blades are reduced to flat plates. Local and global variables are compared using periodic, semi-periodic, and 360-degree geometric models and various turbulence models (k-omega, k-epsilon, and Spalart-Allmaras). The computations take place within the OpenFOAM® environment and utilize a general grid interface (GGI) between the rotor and stator computational domains. The rotor computational domain is capable of dynamic rotation. The results demonstrate some of the strengths and limitations of utilizing CFD for hydroturbine analysis. These case studies will also serve as tutorials to help others learn how to use CFD for turbomachinery. This research is funded by a grant from the DOE.
NASA Astrophysics Data System (ADS)
DaPonte, John S.; Sadowski, Thomas; Thomas, Paul
2006-05-01
This paper describes a collaborative project conducted by the Computer Science Department at Southern Connecticut State University and NASA's Goddard Institute for Space Science (GISS). Animations of output from a climate simulation math model used at GISS to predict rainfall and circulation have been produced for West Africa from June to September 2002. These early results have assisted scientists at GISS in evaluating the accuracy of the RM3 climate model when compared to similar results obtained from satellite imagery. The results presented below will be refined to better meet the needs of GISS scientists and will be expanded to cover other geographic regions for a variety of time frames.
Computer Models Simulate Fine Particle Dispersion
NASA Technical Reports Server (NTRS)
2010-01-01
Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.
Borsia, I.; Rossetto, R.; Schifani, C.; Hill, Mary C.
2013-01-01
In this paper two modifications to the MODFLOW code are presented. One concerns an extension of the Local Grid Refinement (LGR) capability to the Variably Saturated Flow (VSF) process. This modification allows the user to solve the 3D Richards’ equation only in selected parts of the model domain. The second modification introduces a new package, named CFL (Cascading Flow), which improves the computation of overland flow when ground surface saturation is simulated using either VSF or the Unsaturated Zone Flow (UZF) package. The modeling concepts are presented and demonstrated. Programmer documentation is included in appendices.
Multiphysics Simulations: Challenges and Opportunities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keyes, David; McInnes, Lois C.; Woodward, Carol
2013-02-12
We consider multiphysics applications from algorithmic and architectural perspectives, where ‘‘algorithmic’’ includes both mathematical analysis and computational complexity, and ‘‘architectural’’ includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities.
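The "common algebraic coupling paradigm" can be illustrated with a toy partitioned solver: two fields updated in a block Gauss-Seidel (Picard) loop until the coupled fixed point is reached. The linear toy operators below are assumptions chosen only so the iteration converges:

```python
import numpy as np

def partitioned_coupling(F, G, u0, v0, tol=1e-10, max_iter=100):
    """Picard (block Gauss-Seidel) iteration for the coupled system
    u = F(v), v = G(u); an illustrative sketch of partitioned coupling."""
    u, v = u0, v0
    for _ in range(max_iter):
        u_new = F(v)          # solve physics 1 with physics 2 frozen
        v_new = G(u_new)      # solve physics 2 with the fresh field
        if max(np.max(np.abs(u_new - u)), np.max(np.abs(v_new - v))) < tol:
            return u_new, v_new
        u, v = u_new, v_new
    raise RuntimeError("coupling iteration did not converge")

# Toy two-field example with a stable fixed point (u, v) = (0, -2).
u, v = partitioned_coupling(lambda v: 0.5 * v + 1.0,
                            lambda u: 0.3 * u - 2.0, 0.0, 0.0)
print(u, v)
```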
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm of the nearest neighbor search process, and the numerical accuracy is further enhanced by a local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified with flight data and previous solutions by traditional methods. A computational efficiency gain nearly forty times is realized over that of the existing simulation procedures.
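As a sketch of the Gauss-Lobatto ingredient mentioned above, the collocation nodes can be generated with numpy's Legendre utilities; this shows only the node construction, not the paper's ray-tracing pipeline:

```python
import numpy as np
from numpy.polynomial import legendre

def gauss_lobatto_nodes(n):
    """Gauss-Lobatto points on [-1, 1]: the endpoints plus the roots of
    P'_{n-1}, the derivative of the degree-(n-1) Legendre polynomial."""
    c = np.zeros(n)
    c[-1] = 1.0                        # coefficients of P_{n-1}
    dP = legendre.legder(c)            # coefficients of P'_{n-1}
    interior = legendre.legroots(dP)
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(gauss_lobatto_nodes(5))   # [-1, -sqrt(3/7), 0, sqrt(3/7), 1]
```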
Li, Bin; Chen, Kan; Tian, Lianfang; Yeboah, Yao; Ou, Shanxing
2013-01-01
The segmentation and detection of various types of nodules in a Computer-aided detection (CAD) system present various challenges, especially when (1) the nodule is connected to a vessel and the two have very similar intensities; and (2) nodules with ground-glass opacity (GGO) characteristics possess weak edges and intensity inhomogeneity, making their boundaries difficult to define. Traditional segmentation methods may cause problems of boundary leakage and "weak" local minima. This paper deals with the above-mentioned problems. An improved detection method, which combines a fuzzy integrated active contour model (FIACM)-based segmentation method, a segmentation refinement method based on a Parametric Mixture Model (PMM) of juxta-vascular nodules, and a knowledge-based C-SVM (Cost-sensitive Support Vector Machines) classifier, is proposed for detecting various types of pulmonary nodules in computerized tomography (CT) images. Our approach has several novel aspects: (1) In the proposed FIACM model, edge and local region information is incorporated. The fuzzy energy is used as the driving force for the evolution of the active contour. (2) A hybrid PMM model of juxta-vascular nodules combining appearance and geometric information is constructed for segmentation refinement of juxta-vascular nodules. Experimental results of detection for pulmonary nodules show desirable performances of the proposed method.
Use of Dynamic Models and Operational Architecture to Solve Complex Navy Challenges
NASA Technical Reports Server (NTRS)
Grande, Darby; Black, J. Todd; Freeman, Jared; Sorber, Tim; Serfaty, Daniel
2010-01-01
The United States Navy established 8 Maritime Operations Centers (MOC) to enhance the command and control of forces at the operational level of warfare. Each MOC is a headquarters manned by qualified joint operational-level staffs, and enabled by globally interoperable C4I systems. To assess and refine MOC staffing, equipment, and schedules, a dynamic software model was developed. The model leverages pre-existing operational process architecture, joint military task lists that define activities and their precedence relations, as well as Navy documents that specify manning and roles per activity. The software model serves as a "computational wind-tunnel" in which to test a MOC on a mission, and to refine its structure, staffing, processes, and schedules. More generally, the model supports resource allocation decisions concerning Doctrine, Organization, Training, Material, Leadership, Personnel and Facilities (DOTMLPF) at MOCs around the world. A rapid prototype effort efficiently produced this software in less than five months, using an integrated process team consisting of MOC military and civilian staff, modeling experts, and software developers. The work reported here was conducted for Commander, United States Fleet Forces Command in Norfolk, Virginia, code N5-OLW (Operational Level of War), which facilitates the identification, consolidation, and prioritization of MOC capabilities requirements, and implementation and delivery of MOC solutions.
Use of Failure in IS Development Statistics: Lessons for IS Curriculum Design
ERIC Educational Resources Information Center
Longenecker, Herbert H., Jr.; Babb, Jeffry; Waguespack, Leslie; Tastle, William; Landry, Jeff
2016-01-01
The evolution of computing education reflects the history of the professional practice of computing. Keeping computing education current has been a major challenge due to the explosive advances in technologies. Academic programs in Information Systems, a long-standing computing discipline, develop and refine the theory and practice of computing…
Model documentation report: Transportation sector model of the National Energy Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-03-01
This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.
Mesh refinement strategy for optimal control problems
NASA Astrophysics Data System (ADS)
Paiva, L. T.; Fontes, F. A. C. C.
2013-10-01
Direct methods are becoming the most used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement. In this case, the mesh nodes have non-equidistant spacing, which allows non-uniform node collocation. In the method presented in this paper, a time mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error falls below a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time as compared to using time meshes having equidistant spacing.
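A schematic of the refine-until-tolerance loop described above, with the solver and local error estimator left as problem-specific callables (their interfaces here are assumptions, not the authors' code):

```python
import numpy as np

def refine_time_mesh(solve, local_error, t_mesh, tol, max_passes=10):
    """Iteratively refine a time mesh where the local error is large.
    solve(t_mesh) -> solution; local_error(sol, i) -> error estimate on
    subinterval [t_i, t_{i+1}]."""
    for _ in range(max_passes):
        sol = solve(t_mesh)
        errs = np.array([local_error(sol, i) for i in range(len(t_mesh) - 1)])
        if errs.max() <= tol:
            return t_mesh, sol
        # Bisect only the offending subintervals (non-equidistant mesh).
        mids = (0.5 * (t_mesh[:-1] + t_mesh[1:]))[errs > tol]
        t_mesh = np.sort(np.concatenate([t_mesh, mids]))
    return t_mesh, sol
```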
Foundations for a multiscale collaborative Earth model
NASA Astrophysics Data System (ADS)
Afanasiev, Michael; Peter, Daniel; Sager, Korbinian; Simutė, Saulė; Ermert, Laura; Krischer, Lion; Fichtner, Andreas
2016-01-01
We present a computational framework for the assimilation of local to global seismic data into a consistent model describing Earth structure on all seismically accessible scales. This Collaborative Seismic Earth Model (CSEM) is designed to meet the following requirements: (i) Flexible geometric parametrization, capable of capturing topography and bathymetry, as well as all aspects of potentially resolvable structure, including small-scale heterogeneities and deformations of internal discontinuities. (ii) Independence of any particular wave equation solver, in order to enable the combination of inversion techniques suitable for different types of seismic data. (iii) Physical parametrization that allows for full anisotropy and for variations in attenuation and density. While not all of these parameters are always resolvable, the assimilation of data that constrain any parameter subset should be possible. (iv) Ability to accommodate successive refinements through the incorporation of updates on any scale as new data or inversion techniques become available. (v) Enable collaborative Earth model construction. The structure of the initial CSEM is represented on a variable-resolution tetrahedral mesh. It is assembled from a long-wavelength 3-D global model into which several regional-scale tomographies are embedded. We illustrate the CSEM workflow of successive updating with two examples from Japan and the Western Mediterranean, where we constrain smaller scale structure using full-waveform inversion. Furthermore, we demonstrate the ability of the CSEM to act as a vehicle for the combination of different tomographic techniques with a joint full-waveform and traveltime ray tomography of Europe. This combination broadens the exploitable frequency range of the individual techniques, thereby improving resolution. We perform two iterations of a whole-Earth full-waveform inversion using a long-period reference data set from 225 globally recorded earthquakes. At this early stage of the CSEM development, the broad global updates mostly act to remove artefacts from the assembly of the initial CSEM. During the future evolution of the CSEM, the reference data set will be used to account for the influence of small-scale refinements on large-scale global structure. The CSEM as a computational framework is intended to help bridging the gap between local, regional and global tomography, and to contribute to the development of a global multiscale Earth model. While the current construction serves as a first proof of concept, future refinements and additions will require community involvement, which is welcome at this stage already.
Computer aided stress analysis of long bones utilizing computer tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marom, S.A.
1986-01-01
A computer aided analysis method, utilizing computed tomography (CT), has been developed which, together with a finite element program, determines the stress-displacement pattern in a long bone section. The CT data file provides the geometry, the density and the material properties for the generated finite element model. A three-dimensional finite element model of a tibial shaft is automatically generated from the CT file by a pre-processing procedure for a finite element program. The developed pre-processor includes an edge detection algorithm which determines the boundaries of the reconstructed cross-sectional images of the scanned bone. A mesh generation procedure then automatically generates a three-dimensional mesh at a user-selected refinement. The elastic properties needed for the stress analysis are individually determined for each model element using the radiographic density (CT number) of each pixel within the elemental borders. The elastic modulus is determined from the CT radiographic density by using an empirical relationship from the literature. The generated finite element model, together with applied loads determined from existing gait analysis and initial displacements, comprises a formatted input for the SAP IV finite element program. The output of this program, stresses and displacements at the model elements and nodes, is sorted and displayed by a developed post-processor to provide maximum and minimum values at selected locations in the model.
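A sketch of the density-to-modulus step; the linear HU-to-density calibration and the power-law constants are illustrative placeholders, since the abstract only says an empirical relationship from the literature is used:

```python
import numpy as np

def ct_to_elastic_modulus(hu, a=0.004, b=2.0):
    """Map CT numbers (Hounsfield units) to an elastic modulus via a
    density power law E = a * rho**b; constants a, b and the HU->density
    calibration are assumed, and real studies calibrate per scanner."""
    rho = 1.0 + hu / 1000.0    # assumed linear calibration, g/cm^3
    return a * rho**b          # modulus in GPa (assumed units)

# Assign one modulus per element from the mean HU over its pixels.
element_hu = np.array([350.0, 800.0, 1200.0])
print(ct_to_elastic_modulus(element_hu))
```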
Geometric Model for a Parametric Study of the Blended-Wing-Body Airplane
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne; Smith, Robert E.; Sadrehaghighi, Ideen; Wiese, Michael R.
1996-01-01
A parametric model is presented for the blended-wing-body airplane, one concept being proposed for the next generation of large subsonic transports. The model is defined in terms of a small set of parameters which facilitates analysis and optimization during the conceptual design process. The model is generated from a preliminary CAD geometry. From this geometry, airfoil cross sections are cut at selected locations and fitted with analytic curves. The airfoils are then used as boundaries for surfaces defined as the solution of partial differential equations. Both the airfoil curves and the surfaces are generated with free parameters selected to give a good representation of the original geometry. The original surface is compared with the parametric model, and solutions of the Euler equations for compressible flow are computed for both geometries. The parametric model is a good approximation of the CAD model and the computed solutions are qualitatively similar. An optimal NURBS approximation is constructed and can be used as a CAD model for further refinement or modification of the original geometry.
Ziegler, Sigurd; Pedersen, Mads L; Mowinckel, Athanasia M; Biele, Guido
2016-12-01
Attention deficit hyperactivity disorder (ADHD) is characterized by altered decision-making (DM) and reinforcement learning (RL), for which competing theories propose alternative explanations. Computational modelling contributes to understanding DM and RL by integrating behavioural and neurobiological findings, and could elucidate pathogenic mechanisms behind ADHD. This review of neurobiological theories of ADHD describes their predictions for the effect of ADHD on DM and RL as formalized by the drift-diffusion model of DM (DDM) and a basic RL model. Empirical studies employing these models are also reviewed. While theories often agree on how ADHD should be reflected in model parameters, each theory implies a unique combination of predictions. Empirical studies agree with the theories' assumptions of a lowered DDM drift rate in ADHD, while findings are less conclusive for boundary separation. The few studies employing RL models support a lower choice sensitivity in ADHD, but not an altered learning rate. The discussion outlines research areas for further theoretical refinement in the ADHD field. Copyright © 2016 Elsevier Ltd. All rights reserved.
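The DDM parameters discussed here (drift rate, boundary separation) are easy to make concrete with a simulation. A minimal Euler-Maruyama sketch, with generic parameter values chosen for illustration:

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=10000, dt=1e-3, noise=1.0,
                 t_max=10.0, seed=0):
    """Drift-diffusion model: evidence starts midway between boundaries at
    0 and `boundary` and drifts at rate `drift`; the boundary hit gives the
    choice and the hitting time the response time. A lowered drift rate, as
    reported for ADHD, yields slower, more error-prone decisions."""
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, boundary / 2.0)
    rt = np.full(n_trials, np.nan)
    choice = np.zeros(n_trials, dtype=int)
    done = np.zeros(n_trials, dtype=bool)
    t = 0.0
    while not done.all() and t < t_max:
        t += dt
        active = ~done
        x[active] += drift * dt + noise * np.sqrt(dt) * rng.normal(size=active.sum())
        for val, hit in ((1, x >= boundary), (0, x <= 0.0)):
            new = hit & active
            choice[new], rt[new], done[new] = val, t, True
    return choice, rt

choice, rt = simulate_ddm(drift=0.5, boundary=2.0)
print(choice.mean(), np.nanmean(rt))   # accuracy and mean response time
```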
The application of NASCAD as a NASTRAN pre- and post-processor
NASA Technical Reports Server (NTRS)
Peltzman, Alan N.
1987-01-01
The NASA Computer Aided Design (NASCAD) graphics package provides an effective way to interactively create, view, and refine analytic data models. NASCAD's macro language, combined with its powerful 3-D geometric data base, allows the user important flexibility and speed in constructing his model. This flexibility has the added benefit of enabling the user to keep pace with any new NASTRAN developments. NASCAD allows models to be conveniently viewed and plotted to best advantage in both pre- and post-process phases of development, providing useful visual feedback to the analysis process. NASCAD, used as a graphics complement to NASTRAN, can play a valuable role in the process of finite element modeling.
New generation of elastic network models.
López-Blanco, José Ramón; Chacón, Pablo
2016-04-01
The intrinsic flexibility of proteins and nucleic acids can be grasped from remarkably simple mechanical models of particles connected by springs. In recent decades, Elastic Network Models (ENMs) combined with Normal Mode Analysis widely confirmed their ability to predict biologically relevant motions of biomolecules and soon became a popular methodology to reveal large-scale dynamics in multiple structural biology scenarios. The simplicity, robustness, low computational cost, and relatively high accuracy are the reasons behind the success of ENMs. This review focuses on recent advances in the development and application of ENMs, paying particular attention to combinations with experimental data. Successful application scenarios include large macromolecular machines, structural refinement, docking, and evolutionary conservation. Copyright © 2015 Elsevier Ltd. All rights reserved.
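A minimal Gaussian network model (one flavor of ENM) sketch: build the Kirchhoff matrix from a distance cutoff and take normal modes from its eigendecomposition. The cutoff and spring constant are generic defaults, not tied to any particular study:

```python
import numpy as np

def enm_modes(coords, cutoff=10.0, gamma=1.0):
    """Gaussian network model: Kirchhoff (connectivity) matrix from a
    distance cutoff, then normal modes from its eigendecomposition; the
    slowest non-trivial modes approximate functional motions."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = -gamma * (d < cutoff).astype(float)
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(K, -K.sum(axis=1))    # rows sum to zero
    vals, vecs = np.linalg.eigh(K)
    return vals[1:], vecs[:, 1:]           # drop the zero mode

coords = np.random.default_rng(0).normal(scale=8.0, size=(50, 3))
vals, vecs = enm_modes(coords)
msf = (vecs**2 / vals).sum(axis=1)   # B-factor-like fluctuation profile
```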
Computational modeling and prototyping of a pediatric airway management instrument.
Gonzalez-Cota, Alan; Kruger, Grant H; Raghavan, Padmaja; Reynolds, Paul I
2010-09-01
Anterior retraction of the tongue is used to enhance upper airway patency during pediatric fiberoptic intubation. This can be achieved by the use of Magill forceps as a tongue retractor, but lingual grip can become unsteady and traumatic. Our objective was to modify this instrument using computer-aided engineering for the purpose of stable tongue retraction. We analyzed the geometry and mechanical properties of standard Magill forceps with a combination of analytical and empirical methods. This design was captured using computer-aided design techniques to obtain a 3-dimensional model allowing further geometric refinements and mathematical testing for rapid prototyping. On the basis of our experimental findings we adjusted the design constraints to optimize the device for tongue retraction. Stereolithography prototyping was used to create a partially functional plastic model to further assess the functional and ergonomic effectiveness of the design changes. To reduce pressure on the tongue by regular Magill forceps, we incorporated (1) a larger diameter tip for better lingual tissue pressure profile, (2) a ratchet to stabilize such pressure, and (3) a soft molded tip with roughened surface to improve grip. Computer-aided engineering can be used to redesign and prototype a popular instrument used in airway management. On a computational model, our modified Magill forceps demonstrated stable retraction forces, while maintaining the original geometry and versatility. Its application in humans and utility during pediatric fiberoptic intubation are yet to be studied.
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; ...
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koniges, A.E.; Craddock, G.G.; Schnack, D.D.
The purpose of the workshop was to assemble workers, both within and outside of the fusion-related computational areas, for discussion regarding the issues of dynamically adaptive gridding. There were three invited talks related to adaptive gridding application experiences in various related fields of computational fluid dynamics (CFD), and nine short talks reporting on the progress of adaptive techniques in the specific areas of scrape-off-layer (SOL) modeling and magnetohydrodynamic (MHD) stability. Adaptive mesh methods have been successful in a number of diverse fields of CFD for over a decade. The method involves dynamic refinement of computed field profiles in a way that uniformly disperses the numerical errors associated with discrete approximations. Because the process optimizes computational effort, adaptive mesh methods can be used to study otherwise intractable physical problems that involve complex boundary shapes or multiple spatial/temporal scales. Recent results indicate that these adaptive techniques will be required for tokamak fluid-based simulations involving diverted tokamak SOL modeling and MHD simulation problems related to the highest-priority ITER-relevant issues. Individual papers are indexed separately on the energy databases.
Forward and backward inference in spatial cognition.
Penny, Will D; Zeidman, Peter; Burgess, Neil
2013-01-01
This paper shows that the various computations underlying spatial cognition can be implemented using statistical inference in a single probabilistic model. Inference is implemented using a common set of 'lower-level' computations involving forward and backward inference over time. For example, to estimate where you are in a known environment, forward inference is used to optimally combine location estimates from path integration with those from sensory input. To decide which way to turn to reach a goal, forward inference is used to compute the likelihood of reaching that goal under each option. To work out which environment you are in, forward inference is used to compute the likelihood of sensory observations under the different hypotheses. For reaching sensory goals that require a chaining together of decisions, forward inference can be used to compute a state trajectory that will lead to that goal, and backward inference to refine the route and estimate control signals that produce the required trajectory. We propose that these computations are reflected in recent findings of pattern replay in the mammalian brain. Specifically, that theta sequences reflect decision making, theta flickering reflects model selection, and remote replay reflects route and motor planning. We also propose a mapping of the above computational processes onto lateral and medial entorhinal cortex and hippocampus.
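The "optimally combine location estimates" step is, for Gaussian beliefs, a precision-weighted average. A one-dimensional sketch with illustrative numbers:

```python
def fuse_location_estimates(mu_path, var_path, mu_sense, var_sense):
    """Precision-weighted combination of a path-integration location
    estimate with a sensory one, the forward inference step described for
    self-localization, assuming 1D Gaussian beliefs."""
    w = var_sense / (var_path + var_sense)
    mu = w * mu_path + (1 - w) * mu_sense
    var = var_path * var_sense / (var_path + var_sense)
    return mu, var

# Noisy odometry (wide) corrected by a landmark observation (narrow).
print(fuse_location_estimates(mu_path=5.0, var_path=4.0,
                              mu_sense=6.0, var_sense=1.0))  # (5.8, 0.8)
```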
Degree of quantum correlation required to speed up a computation
NASA Astrophysics Data System (ADS)
Kay, Alastair
2015-12-01
The one-clean-qubit model of quantum computation (DQC1) efficiently implements a computational task that is not known to have a classical alternative. During the computation, there is never more than a small but finite amount of entanglement present, and it is typically vanishingly small in the system size. In this paper, we demonstrate that there is nothing unexpected hidden within the DQC1 model—Grover's search, when acting on a mixed state, provably exhibits a speedup over classical, with guarantees as to the presence of only vanishingly small amounts of quantum correlations (entanglement and quantum discord)—while arguing that this is not an artifact of the oracle-based construction. We also present some important refinements in the evaluation of how much entanglement may be present in the DQC1 and how the typical entanglement of the system must be evaluated.
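The DQC1 trace-estimation primitive behind this discussion can be simulated directly on density matrices; the numpy sketch below verifies that the clean qubit's Pauli expectations recover Tr(U)/2^n (a standard construction, not code from the paper):

```python
import numpy as np

def dqc1_trace_estimate(U):
    """DQC1 circuit on the full density matrix: Hadamard on the clean
    qubit, controlled-U on the maximally mixed register, then
    <sigma_x> + i<sigma_y> on the clean qubit, which equals Tr(U)/2**n."""
    N = U.shape[0]
    ket0 = np.array([[1.0, 0.0], [0.0, 0.0]])
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    rho = np.kron(H @ ket0 @ H, np.eye(N) / N)        # clean (x) mixed
    CU = np.block([[np.eye(N), np.zeros((N, N))],
                   [np.zeros((N, N)), U]])
    rho = CU @ rho @ CU.conj().T
    sx = np.kron(np.array([[0, 1], [1, 0]]), np.eye(N))
    sy = np.kron(np.array([[0, -1j], [1j, 0]]), np.eye(N))
    return np.trace(rho @ sx) + 1j * np.trace(rho @ sy)

# Check against the exact normalized trace for a random 3-qubit unitary.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
Q, _ = np.linalg.qr(A)
print(dqc1_trace_estimate(Q), np.trace(Q) / 8)
```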
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
1991-01-01
Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.
Laser Ray Tracing in a Parallel Arbitrary Lagrangian-Eulerian Adaptive Mesh Refinement Hydrocode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masters, N D; Kaiser, T B; Anderson, R W
2009-09-28
ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray-tracing in ALE-AMR. We present the equations of laser ray tracing, our approach to efficient traversal of the adaptive mesh hierarchy in which we propagate computational rays through a virtual composite mesh consisting of the finest resolution representation of the modeled space, and anticipate simulations that will be compared to experiments for code validation.
Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
Thermal modeling of grinding for process optimization and durability improvements
NASA Astrophysics Data System (ADS)
Hanna, Ihab M.
Both thermal and mechanical aspects of the grinding process are investigated in detail in an effort to predict grinding induced residual stresses. An existing thermal model is used as a foundation for computing heat partitions and temperatures in surface grinding. By numerically processing data from IR temperature measurements of the grinding zone, characterizations are made of the grinding zone heat flux. It is concluded that the typical heat flux profile in the grinding zone is triangular in shape, supporting this often used assumption found in the literature. Further analyses of the computed heat flux profiles have revealed that actual grinding zone contact lengths exceed geometric contact lengths by an average of 57% for the cases considered. By integrating the resulting heat flux profiles, workpiece energy partitions are computed for several cases of dry conventional grinding of hardened steel. The average workpiece energy partition for the cases considered was 37%. In an effort to more accurately predict grinding zone temperatures and heat fluxes, refinements are made to the existing thermal model. These include consideration of contact length extensions due to local elastic deformations, variations of the assumed contact area ratio as a function of grinding process parameters, consideration of coolant latent heat of vaporization and its effect on heat transfer beyond the coolant boiling point, and incorporation of coolant-workpiece convective heat flux effects outside the grinding zone. The result of the model refinements accounting for contact length extensions and process-dependent contact area ratios is excellent agreement with IR temperature measurements over a wide range of grinding conditions. By accounting for latent heat of vaporization effects, grinding zone temperature profiles are shown to be capable of reproducing measured profiles found in the literature for cases on the verge of thermal surge conditions. Computed peak grinding zone temperatures for the aggressive grinding examples given are 30-50% lower than those computed using the existing thermal model formulation. By accounting for convective heat transfer effects outside the grinding zone, it is shown that while surface temperatures in the wake of the grinding zone may be significantly affected under highly convective conditions, computed residual stresses are less sensitive to such conditions. Numerical models are used to evaluate both thermally and mechanically induced stress fields in an elastic workpiece, while finite element modeling is used to evaluate residual stresses for workpieces with elastic-plastic material properties. Modeling of mechanical interactions at the local grit-workpiece length scale is used to create the often-measured effect of compressive surface residual stress followed by a subsurface tensile peak. The model is shown to be capable of reproducing trends found in the literature of surface residual stresses which are compressive for low temperature grinding conditions, with surface stresses increasing linearly and becoming tensile with increasing temperatures. Further modifications to the finite element model are made to allow for transiently varying inputs for more complicated grinding processes of industrial components such as automotive cam lobes.
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
cryoem-cloud-tools: A software platform to deploy and manage cryo-EM jobs in the cloud.
Cianfrocco, Michael A; Lahiri, Indrajit; DiMaio, Frank; Leschziner, Andres E
2018-06-01
Access to streamlined computational resources remains a significant bottleneck for new users of cryo-electron microscopy (cryo-EM). To address this, we have developed tools that will submit cryo-EM analysis routines and atomic model building jobs directly to Amazon Web Services (AWS) from a local computer or laptop. These new software tools ("cryoem-cloud-tools") have incorporated optimal data movement, security, and cost-saving strategies, giving novice users access to complex cryo-EM data processing pipelines. Integrating these tools into the RELION processing pipeline and graphical user interface, we determined a 2.2 Å structure of β-galactosidase in ∼55 hours on AWS. We implemented a similar strategy to submit Rosetta atomic model building and refinement to AWS. These software tools dramatically reduce the barrier for entry of new users to cloud computing for cryo-EM and are freely available at cryoem-tools.cloud. Copyright © 2018. Published by Elsevier Inc.
Maximal aggregation of polynomial dynamical systems
Cardelli, Luca; Tschaikowski, Max
2017-01-01
Ordinary differential equations (ODEs) with polynomial derivatives are a fundamental tool for understanding the dynamics of systems across many branches of science, but our ability to gain mechanistic insight and effectively conduct numerical evaluations is critically hindered when dealing with large models. Here we propose an aggregation technique that rests on two notions of equivalence relating ODE variables whenever they have the same solution (backward criterion) or if a self-consistent system can be written for describing the evolution of sums of variables in the same equivalence class (forward criterion). A key feature of our proposal is to encode a polynomial ODE system into a finitary structure akin to a formal chemical reaction network. This enables the development of a discrete algorithm to efficiently compute the largest equivalence, building on approaches rooted in computer science to minimize basic models of computation through iterative partition refinements. The physical interpretability of the aggregation is shown on polynomial ODE systems for biochemical reaction networks, gene regulatory networks, and evolutionary game theory. PMID:28878023
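For the linear special case, the iterative partition refinement is easy to sketch: variables stay in one block only if their aggregate rates into every block agree, so that block sums obey a self-consistent ODE. The paper's polynomial case works on a reaction-network encoding instead; this is an illustrative reduction:

```python
import numpy as np

def forward_aggregate(A, tol=1e-12):
    """Coarsest forward aggregation of the linear ODE x' = A x by iterative
    partition refinement: split a block whenever two of its variables have
    different aggregate rates sum_{i in C} A[i, k] into some block C."""
    n = A.shape[0]
    blocks = [list(range(n))]
    while True:
        sig = {k: tuple(round(A[np.array(C), k].sum() / tol) for C in blocks)
               for k in range(n)}
        new = []
        for C in blocks:
            groups = {}
            for k in C:
                groups.setdefault(sig[k], []).append(k)
            new.extend(groups.values())
        if len(new) == len(blocks):
            return new
        blocks = new

# Two symmetric pairs aggregate into two blocks: [[0, 1], [2, 3]].
A = np.array([[-2.0, 1.0, 0.0, 0.0],
              [1.0, -2.0, 0.0, 0.0],
              [0.0, 0.0, -2.0, 1.0],
              [0.0, 0.0, 0.0, -3.0]])
print(forward_aggregate(A))
```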
Refining metabolic models and accounting for regulatory effects.
Kim, Joonhoon; Reed, Jennifer L
2014-10-01
Advances in genome-scale metabolic modeling allow us to investigate and engineer metabolism at a systems level. Metabolic network reconstructions have been made for many organisms and computational approaches have been developed to convert these reconstructions into predictive models. However, due to incomplete knowledge these reconstructions often have missing or extraneous components and interactions, which can be identified by reconciling model predictions with experimental data. Recent studies have provided methods to further improve metabolic model predictions by incorporating transcriptional regulatory interactions and high-throughput omics data to yield context-specific metabolic models. Here we discuss recent approaches for resolving model-data discrepancies and building context-specific metabolic models. Once developed, highly accurate metabolic models can be used in a variety of biotechnology applications. Copyright © 2014 Elsevier Ltd. All rights reserved.
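The workhorse prediction from such models is flux balance analysis, a linear program over steady-state fluxes. A toy sketch with scipy; the three-reaction network is made up for illustration and is not a real reconstruction:

```python
import numpy as np
from scipy.optimize import linprog

def fba(S, lb, ub, c_obj):
    """Flux balance analysis: maximize c_obj . v subject to S v = 0 and
    flux bounds; linprog minimizes, hence the sign flips."""
    res = linprog(-np.asarray(c_obj), A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    return -res.fun, res.x

# Toy network: uptake -> A, A -> biomass (objective), A -> byproduct.
S = np.array([[1.0, -1.0, -1.0]])   # one metabolite, three reactions
growth, fluxes = fba(S, lb=[0, 0, 0], ub=[10, 8, 10], c_obj=[0, 1, 0])
print(growth, fluxes)               # 8.0, limited by the objective bound
```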
NASA Technical Reports Server (NTRS)
Amano, R. S.
1982-01-01
Progress in implementing and refining two near-wall turbulence models in which the near-wall region is divided into either two or three zones is outlined. These models were successfully applied to the computation of recirculating flows. The research was further extended to obtain experimental results for two different recirculating flow conditions in order to check the validity of the present models. Two different experimental apparatuses were set up: axisymmetric turbulent impinging jets on a flat plate, and turbulent flows in a circular pipe with an abrupt pipe expansion. It is shown that generally better results are obtained by using the present near-wall models, and among the models the three-zone model is superior to the two-zone model.
Aerodynamic analysis of natural flapping flight using a lift model based on spanwise flow
NASA Astrophysics Data System (ADS)
Alford, Lionel D., Jr.
This study successfully described the mechanics of flapping hovering flight within the framework of conventional aerodynamics. Additionally, the theory proposed and supported by this research provides an entirely new way of looking at animal flapping flight. The mechanisms of biological flight are not well understood, and researchers have not been able to describe them using conventional aerodynamic forces. This study proposed that natural flapping flight can be broken down into its simplest model, that this model can then be used to develop a mathematical representation of flapping hovering flight, and finally, that the model can be successfully refined and compared to biological flapping data. This paper proposed a unique theory that the lift of a flapping animal is primarily the result of velocity across the cambered span of the wing. A force analysis was developed using centripetal acceleration to define an acceleration profile that would lead to a spanwise velocity profile. The force produced by the spanwise velocity profile was determined using a computational fluid dynamics analysis of flow on the simplified wing model. The overall forces on the model were found to produce more than twice the lift required for hovering flight. In addition, spanwise lift was shown to generate induced drag on the wing. Induced drag increased both the model wing's lift and drag. The model allowed the development of a mathematical representation that could be refined to account for insect hovering characteristics and that could predict expected physical attributes of the fluid flow. This computational representation resulted in a profile of lift and drag production that corresponds to known force profiles for insect flight. The model of flapping flight was shown to produce results similar to biological observation and experiment, and these results can potentially be applied to the study of other flapping animals. This work provides a foundation on which to base further exploration and hypotheses regarding flapping flight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao
2017-10-03
Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s -1. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.
In-silico experiments of zebrafish behaviour: modeling swimming in three dimensions
NASA Astrophysics Data System (ADS)
Mwaffo, Violet; Butail, Sachit; Porfiri, Maurizio
2017-01-01
Zebrafish is fast becoming a species of choice in biomedical research for the investigation of functional and dysfunctional processes coupled with their genetic and pharmacological modulation. As with mammals, experimentation with zebrafish constitutes a complicated ethical issue that calls for the exploration of alternative testing methods to reduce the number of subjects, refine experimental designs, and replace live animals. Inspired by the demonstrated advantages of computational studies in other life science domains, we establish an authentic data-driven modelling framework to simulate zebrafish swimming in three dimensions. The model encapsulates burst-and-coast swimming style, speed modulation, and wall interaction, laying the foundations for in-silico experiments of zebrafish behaviour. Through computational studies, we demonstrate the ability of the model to replicate common ethological observables such as speed and spatial preference, and anticipate experimental observations on the correlation between tank dimensions on zebrafish behaviour. Reaching to other experimental paradigms, our framework is expected to contribute to a reduction in animal use and suffering.
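A toy version of the modelling ingredients named above (burst-and-coast propulsion, speed modulation, wall interaction) in three dimensions; all parameter values and the reflective wall rule are invented for illustration and are not the calibrated, data-driven model from the paper:

```python
import numpy as np

def burst_and_coast_3d(n_steps=5000, dt=0.05, tank=(1.0, 1.0, 0.3),
                       burst_prob=0.2, burst_speed=0.12, damping=0.9,
                       turn_sd=0.6, seed=0):
    """Minimal 3D burst-and-coast swimmer: random bursts reset the speed
    and jitter the heading; between bursts the speed decays; walls are
    handled by reflection (simplified assumptions throughout)."""
    rng = np.random.default_rng(seed)
    pos = np.array(tank) / 2.0
    heading = np.array([1.0, 0.0, 0.0])
    speed = burst_speed
    traj = np.empty((n_steps, 3))
    for t in range(n_steps):
        if rng.random() < burst_prob:              # burst event
            speed = burst_speed
            heading = heading + rng.normal(0.0, turn_sd, size=3)
            heading /= np.linalg.norm(heading)
        else:                                      # coast: speed decays
            speed *= damping
        pos = pos + speed * heading * dt
        for k in range(3):                         # reflect off tank walls
            if pos[k] < 0 or pos[k] > tank[k]:
                heading[k] *= -1
                pos[k] = np.clip(pos[k], 0, tank[k])
        traj[t] = pos
    return traj

traj = burst_and_coast_3d()
print(traj.mean(axis=0))   # spatial preference summary
```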
FUN3D and CFL3D Computations for the First High Lift Prediction Workshop
NASA Technical Reports Server (NTRS)
Park, Michael A.; Lee-Rausch, Elizabeth M.; Rumsey, Christopher L.
2011-01-01
Two Reynolds-averaged Navier-Stokes codes were used to compute flow over the NASA Trapezoidal Wing at high lift conditions for the 1st AIAA CFD High Lift Prediction Workshop, held in Chicago in June 2010. The unstructured-grid code FUN3D and the structured-grid code CFL3D were applied to several different grid systems. The effects of code, grid system, turbulence model, viscous term treatment, and brackets were studied. The SST model on this configuration predicted lower lift than the Spalart-Allmaras model at high angles of attack; the Spalart-Allmaras model agreed better with experiment. Neglecting viscous cross-derivative terms caused poorer prediction in the wing tip vortex region. Output-based grid adaptation was applied to the unstructured-grid solutions. The adapted grids better resolved wake structures and reduced flap flow separation, which was also observed in uniform grid refinement studies. Limitations of the adaptation method as well as areas for future improvement were identified.
Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale.
Parton, Daniel L; Grinaway, Patrick B; Hanson, Sonya M; Beauchamp, Kyle A; Chodera, John D
2016-06-01
The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences, from a single sequence to an entire superfamily, and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics, such as Markov state models (MSMs), which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.
Design of cryogenic tanks for launch vehicles
NASA Technical Reports Server (NTRS)
Copper, Charles; Pilkey, Walter D.; Haviland, John K.
1990-01-01
During the period since January 1990, work was concentrated on the problem of the buckling of the structure of an ALS (advanced launch systems) tank during the boost phase. The primary problem was to analyze a proposed hat stringer made by superplastic forming, and to compare it with an integrally stiffened stringer design. A secondary objective was to determine whether structural rings having the identical section to the stringers will provide adequate support against overall buckling. All of the analytical work was carried out with the TESTBED program on the CONVEX computer, using PATRAN programs to create models. Analyses of skin/stringer combinations have shown that the proposed stringer design is an adequate substitute for the integrally stiffened stringer. Using a highly refined mesh to represent the corrugations in the vertical webs of the hat stringers, effective values were obtained for cross-sectional area, moment of inertia, centroid height, and torsional constant. Not only can these values be used for comparison with experimental values, but they can also be used for beams to replace the stringers and frames in analytical models of complete sections of tank. The same highly refined model was used to represent a section of skin reinforced by a stringer and a ring segment in the configuration of a cross. It was intended that this would provide a baseline buckling analysis representing a basic mode; however, the analysis proved to be beyond the scope of the CONVEX computer. One quarter of this model was analyzed, however, to provide information on buckling between the spot welds. Models of large sections of the tank structure were made, using beam elements to model the stringers and frames. In order to represent the stiffening effects of pressure, stresses and deflections under pressure should first be obtained, and then the buckling analysis should be made on the structure so deflected. So far, uncharacteristic deflections under pressure were obtained from the TESTBED program using two types of structural elements. Similar results were obtained using the ANSYS program on a mainframe computer, although two finite element programs on microcomputers have yielded realistic results.
NASA Astrophysics Data System (ADS)
Arko, Bryan M.
Design trends for the low-pressure turbine (LPT) section of modern gas turbine engines include increasing the loading per airfoil, which promises a decreased airfoil count resulting in reduced manufacturing and operating costs. Accurate Reynolds-averaged Navier-Stokes predictions of separated boundary layers and transition to turbulence are needed, as the lack of an economical and reliable computational model has contributed to this high-lift concept not reaching its full potential. Presented here, for what is believed to be the first time applied to low-Re computations of high-lift linear cascade simulations, is the Abe-Kondoh-Nagano (AKN) linear low-Re two-equation turbulence model, which utilizes the Kolmogorov velocity scale for improved predictions of separated boundary layers. A second turbulence model investigated is the Kato-Launder modified version of the AKN, denoted MPAKN, which damps turbulent production in highly strained regions of flow. Fully laminar solutions have also been calculated in an effort to elucidate the transitional quality of the turbulence model solutions. Time-accurate simulations of three modern high-lift blades at a Reynolds number of 25,000 are compared to experimental data and higher-order computations in order to judge the accuracy of the results, where it is shown that the RANS simulations with highly refined grids can produce both quantitatively and qualitatively similar separation behavior as found in experiments. In particular, the MPAKN model is shown to predict the correct boundary layer behavior for all three blades, and evidence of transition is found through inspection of the components of the Reynolds stress tensor, spectral analysis, and the turbulence production parameter. Unfortunately, definitively stating that transition is occurring remains an uncertain task, as similar evidence of the transition process is found in the laminar predictions. This reveals that boundary layer reattachment may be a result of laminar vortex shedding interaction with the blade surface, or that application of laminar solvers with highly refined grids may be valid for investigations of transitional flow. It is recommended that this be studied in the future, along with the turbulence models presented, which have proved worthy of further investigation in the present application.
Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora
NASA Astrophysics Data System (ADS)
Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke
The task of inducing grammar structures has received a great deal of attention. Researchers' motivations differ: some use grammar induction as the first stage in building large treebanks, others to construct better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping the computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speeds. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of an evaluation of criteria for choosing nonterminals. Our algorithm refines a grammar based on a nonterminal in each step. Since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.
Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers
NASA Astrophysics Data System (ADS)
Sendersky, Dmitry
2000-10-01
The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.
Computational theory of line drawing interpretation
NASA Technical Reports Server (NTRS)
Witkin, A. P.
1981-01-01
The recovery of the three-dimensional structure of visible surfaces depicted in an image was studied, emphasizing the role of geometric cues present in line drawings. Three key components are line classification, line interpretation, and surface interpolation. A model for three-dimensional line interpretation and surface orientation was refined, and a theory for the recovery of surface shape from surface marking geometry was developed. A new approach to the classification of edges was developed and implemented; signatures were deduced for each of several edge types, expressed in terms of correlational properties of the image intensities in the vicinity of the edge. A computer program was developed that evaluates image edges as compared with these prototype signatures.
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR, for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.
Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity
NASA Astrophysics Data System (ADS)
Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.
As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.
Insights into channel dysfunction from modelling and molecular dynamics simulations.
Musgaard, Maria; Paramo, Teresa; Domicevica, Laura; Andersen, Ole Juul; Biggin, Philip C
2018-04-01
Developments in structural biology mean that the number of different ion channel structures has increased significantly in recent years. Structures of ion channels enable us to rationalize how mutations may lead to channelopathies. However, determining the structures of ion channels is still not trivial, especially as they necessarily exist in many distinct functional states. Therefore, the use of computational modelling can provide complementary information that can refine working hypotheses of both wild type and mutant ion channels. The simplest but still powerful tool is homology modelling. Many structures are available now that can provide suitable templates for many different types of ion channels, allowing a full three-dimensional interpretation of mutational effects. These structural models, and indeed the structures themselves obtained by X-ray crystallography, and more recently cryo-electron microscopy, can be subjected to molecular dynamics simulations, either as a tool to help explore the conformational dynamics in detail or simply as a means to refine the models further. Here we review how these approaches have been used to improve our understanding of how diseases might be linked to specific mutations in ion channel proteins. This article is part of the Special Issue entitled 'Channelopathies.' Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid
NASA Technical Reports Server (NTRS)
VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)
1997-01-01
The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).
NASA Astrophysics Data System (ADS)
Semplice, Matteo; Loubère, Raphaël
2018-02-01
In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaNs. Troubled cell values are discarded and re-computed starting again from the previous time-step using a more dissipative scheme but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0 we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner because if some refinement is needed at the end of a time step, then the current time-step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
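As a rough 1D illustration of the detect-and-recompute idea (these are generic detection criteria, not the authors' exact ones, and the two step functions are hypothetical):

```python
import numpy as np

def is_troubled(u_new, u_old):
    """A posteriori detection on the candidate solution: NaNs/Infs, loss
    of positivity, and violation of a crude discrete maximum principle."""
    bad = ~np.isfinite(u_new)
    bad |= u_new <= 0.0   # e.g. density/pressure positivity
    lo = np.minimum(u_old, np.minimum(np.roll(u_old, 1), np.roll(u_old, -1)))
    hi = np.maximum(u_old, np.maximum(np.roll(u_old, 1), np.roll(u_old, -1)))
    bad |= (u_new < lo - 1e-8) | (u_new > hi + 1e-8)
    return bad

def limited_step(u_old, third_order_step, first_order_step):
    """Try the accurate scheme everywhere; redo only troubled cells with
    the robust (more dissipative) scheme, restarting from u_old."""
    u_new = third_order_step(u_old)
    bad = is_troubled(u_new, u_old)
    if bad.any():
        u_new[bad] = first_order_step(u_old)[bad]
    return u_new
```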
Mehl, S.; Hill, M.C.
2002-01-01
Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods and the effect of the accuracy of sensitivity calculations are evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.
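To make the contrast concrete: in TMR the regional model simply hands boundary values to the local model, with no feedback. A minimal sketch of that one-way step (the fixed-head choice and all names here are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def tmr_boundary_heads(region_y, region_x, regional_head, local_box, n_bnd=50):
    """Interpolate regional-model heads onto the perimeter of a locally
    refined grid, to serve as fixed-head boundary conditions. One-way
    coupling: nothing flows back, so interface heads/fluxes need not balance."""
    interp = RegularGridInterpolator((region_y, region_x), regional_head)
    x0, y0, x1, y1 = local_box
    xs = np.linspace(x0, x1, n_bnd)
    ys = np.linspace(y0, y1, n_bnd)
    perimeter = np.vstack([
        np.column_stack([np.full(n_bnd, y0), xs]),   # south edge
        np.column_stack([np.full(n_bnd, y1), xs]),   # north edge
        np.column_stack([ys, np.full(n_bnd, x0)]),   # west edge
        np.column_stack([ys, np.full(n_bnd, x1)]),   # east edge
    ])
    return interp(perimeter)
```

The shared-node method described above replaces this one-way hand-off with an iterated exchange, which is what restores the head and flux balance at the interface.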
Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O
2018-05-01
Investigating the interplay between muscular activity and motion is the basis to improve our understanding of healthy or diseased musculoskeletal systems. To analyze musculoskeletal systems, computational models are used. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model, in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; focus on aerodynamic databases for parametric and optimization studies. Requirements: (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10⁵ mesh generations; (3) automation: avoid user supervision, obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.
NASA Astrophysics Data System (ADS)
Petersson, Anders; Rodgers, Arthur
2010-05-01
The finite difference method on a uniform Cartesian grid is a highly efficient and easy-to-implement technique for solving the elastic wave equation in seismic applications. However, the spacing in a uniform Cartesian grid is fixed throughout the computational domain, whereas the resolution requirements in realistic seismic simulations usually are higher near the surface than at depth. This can be seen from the well-known formula h ≤ L/P, which relates the grid spacing h to the wavelength L and the required number of grid points per wavelength P for obtaining an accurate solution. The compressional and shear wavelengths in the earth generally increase with depth and are often a factor of ten larger below the Moho discontinuity (at about 30 km depth) than in sedimentary basins near the surface. A uniform grid must have a grid spacing based on the small wavelengths near the surface, which results in over-resolving the solution at depth. As a result, the number of points in a uniform grid is unnecessarily large. In the wave propagation project (WPP) code, we address the over-resolution-at-depth issue by generalizing our previously developed single grid finite difference scheme to work on a composite grid consisting of a set of structured rectangular grids of different spacings, with hanging nodes on the grid refinement interfaces. The computational domain in a regional seismic simulation often extends to depth 40-50 km. Hence, using a refinement ratio of two, we need about three grid refinements from the bottom of the computational domain to the surface, to keep the local grid size in approximate parity with the local wavelengths. The challenge of the composite grid approach is to find a stable and accurate method for coupling the solution across the grid refinement interface. Of particular importance is the treatment of the solution at the hanging nodes, i.e., the fine grid points which are located in between coarse grid points. WPP implements a new, energy conserving, coupling procedure for the elastic wave equation at grid refinement interfaces. When used together with our single grid finite difference scheme, it results in a method which is provably stable, without artificial dissipation, for arbitrary heterogeneous isotropic elastic materials. The new coupling procedure is based on satisfying the summation-by-parts principle across refinement interfaces. From a practical standpoint, an important advantage of the proposed method is the absence of tunable numerical parameters, which seldom are appreciated by application experts. In WPP, the composite grid discretization is combined with a curvilinear grid approach that enables accurate modeling of free surfaces on realistic (non-planar) topography. The overall method satisfies the summation-by-parts principle and is stable under a CFL time step restriction. A feature of great practical importance is that WPP automatically generates the composite grid based on the user provided topography and the depths of the grid refinement interfaces. The WPP code has been verified extensively, for example using the method of manufactured solutions, by solving Lamb's problem, by solving various layer over half-space problems and comparing to semi-analytic (FK) results, and by simulating scenario earthquakes where results from other seismic simulation codes are available. WPP has also been validated against seismographic recordings of moderate earthquakes.
WPP performs well on large parallel computers and has been run on up to 32,768 processors using about 26 billion grid points (78 billion DOF) and 41,000 time steps. WPP is an open source code that is available under the GNU General Public License.
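The resolution argument rests entirely on h ≤ L/P; a small worked example with illustrative numbers (not WPP's actual settings) shows why roughly three refinements of ratio two suffice:

```python
def grid_spacing(v_s, f_max, ppw):
    """Bound h <= L / P with shortest wavelength L = v_s / f_max and
    P = ppw grid points per wavelength."""
    return v_s / (f_max * ppw)

# Hypothetical shear speeds; 1 Hz content, 8 points per wavelength.
for name, vs in [("sedimentary basin (near surface)", 500.0),
                 ("below the Moho", 4500.0)]:
    print(f"{name}: h <= {grid_spacing(vs, f_max=1.0, ppw=8):.1f} m")
# ~9x contrast in allowable spacing -> about three ratio-2 refinements (2**3 = 8).
```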
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
NASA Astrophysics Data System (ADS)
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
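A simplified sketch of the surrogate-refinement step the abstract describes (the Kalman-type ensemble update itself is omitted, and scikit-learn's GP stands in for whatever surrogate implementation the authors use):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def refine_surrogate(theta_ens, forward_model, base_X, base_y,
                     n_new=5, rng=None):
    """Run the expensive forward model only at a few new base points drawn
    from the current ensemble, refit the GP, then predict measurements for
    every ensemble member at negligible cost."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(theta_ens), size=n_new, replace=False)
    new_y = np.array([forward_model(t) for t in theta_ens[idx]])  # costly
    base_X = np.vstack([base_X, theta_ens[idx]])
    base_y = np.vstack([base_y, new_y])
    gp = GaussianProcessRegressor(normalize_y=True).fit(base_X, base_y)
    sim_ens = gp.predict(theta_ens)  # cheap surrogate evaluations
    return gp, base_X, base_y, sim_ens
```

Because only the base points hit the original model, the per-iteration cost scales with n_new rather than with the ensemble size, which is the source of the reported speed-up.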
NASA Astrophysics Data System (ADS)
Vintila, Iuliana; Gavrus, Adinel
2017-10-01
The present research paper proposes the validation of a rigorous computation model used as a numerical tool to identify the rheological behavior of complex W/O emulsions. Considering a three-dimensional description of a general viscoplastic flow, the thermo-mechanical equations used to identify the rheological laws of fluids or soft materials from global experimental measurements are detailed. Analyses are conducted for complex W/O emulsions, which generally exhibit Bingham behavior, using the shear stress-strain rate dependency based on a power law and an improved analytical model. Experimental results are investigated for the rheological behavior of crude and refined rapeseed/soybean oils and four types of corresponding W/O emulsions with different physical-chemical compositions. The rheological behavior model was correlated with the thermo-mechanical analysis of a plane-plane rheometer, oil content, chemical composition, particle size, and emulsifier concentration. The parameters of the rheological laws describing the behavior of the industrial oils and the concentrated W/O emulsions were computed from estimated shear stresses using a non-linear regression technique and from experimental torques using the inverse analysis tool designed by A. Gavrus (1992-2000).
Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2014-11-01
Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.
Capturing Multiscale Phenomena via Adaptive Mesh Refinement (AMR) in 2D and 3D Atmospheric Flows
NASA Astrophysics Data System (ADS)
Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.; Langhans, W.; Collins, W. D.
2017-12-01
Extreme atmospheric events such as tropical cyclones are inherently complex multiscale phenomena. Such phenomena are a challenge to simulate in conventional atmosphere models, which typically use rather coarse uniform-grid resolutions. To enable study of these systems, Adaptive Mesh Refinement (AMR) can provide sufficient local resolution by dynamically placing high-resolution grid patches selectively over user-defined features of interest, such as a developing cyclone, while limiting the total computational burden of requiring such high-resolution globally. This work explores the use of AMR with a high-order, non-hydrostatic, finite-volume dynamical core, which uses the Chombo AMR library to implement refinement in both space and time on a cubed-sphere grid. The characteristics of the AMR approach are demonstrated via a series of idealized 2D and 3D test cases designed to mimic atmospheric dynamics and multiscale flows. In particular, new shallow-water test cases with forcing mechanisms are introduced to mimic the strengthening of tropical cyclone-like vortices and to include simplified moisture and convection processes. The forced shallow-water experiments quantify the improvements gained from AMR grids, assess how well transient features are preserved across grid boundaries, and determine effective refinement criteria. In addition, results from idealized 3D test cases are shown to characterize the accuracy and stability of the non-hydrostatic 3D AMR dynamical core.
Homology-based hydrogen bond information improves crystallographic structures in the PDB.
van Beusekom, Bart; Touw, Wouter G; Tatineni, Mahidhar; Somani, Sandeep; Rajagopal, Gunaretnam; Luo, Jinquan; Gilliland, Gary L; Perrakis, Anastassis; Joosten, Robbie P
2018-03-01
The Protein Data Bank (PDB) is the global archive for structural information on macromolecules, and a popular resource for researchers, teachers, and students, amassing more than one million unique users each year. Crystallographic structure models in the PDB (more than 100,000 entries) are optimized against the crystal diffraction data and geometrical restraints. This process of crystallographic refinement typically ignored hydrogen bond (H-bond) distances as a source of information. However, H-bond restraints can improve structures at low resolution where diffraction data are limited. To improve low-resolution structure refinement, we present methods for deriving H-bond information either globally from well-refined high-resolution structures from the PDB-REDO databank, or specifically from on-the-fly constructed sets of homologous high-resolution structures. Refinement incorporating HOmology DErived Restraints (HODER), improves geometrical quality and the fit to the diffraction data for many low-resolution structures. To make these improvements readily available to the general public, we applied our new algorithms to all crystallographic structures in the PDB: using massively parallel computing, we constructed a new instance of the PDB-REDO databank (https://pdb-redo.eu). This resource is useful for researchers to gain insight on individual structures, on specific protein families (as we demonstrate with examples), and on general features of protein structure using data mining approaches on a uniformly treated dataset. © 2017 The Protein Society.
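As an illustration of how homology-derived distance restraints can be formed (the actual HODER targets, weights, and floors are not reproduced here; the 0.05 Å floor below is a made-up value):

```python
import numpy as np

def hbond_restraint(homolog_distances, sigma_floor=0.05):
    """Pool donor-acceptor distances observed for an equivalent H-bond in
    homologous high-resolution structures; restrain to their mean with a
    sigma taken from the spread, floored so restraints never over-tighten."""
    d = np.asarray(homolog_distances)
    return d.mean(), max(d.std(ddof=1), sigma_floor)

target, sigma = hbond_restraint([2.85, 2.91, 2.88, 2.95])
print(f"restrain d(N...O) to {target:.2f} +/- {sigma:.2f} A")
```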
Center for Extended Magnetohydrodynamic Modeling Cooperative Agreement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carl R. Sovinec
The Center for Extended Magnetohydrodynamic Modeling (CEMM) is developing computer simulation models for predicting the behavior of magnetically confined plasmas. Over the first phase of support from the Department of Energy’s Scientific Discovery through Advanced Computing (SciDAC) initiative, the focus has been on macroscopic dynamics that alter the confinement properties of magnetic field configurations. The ultimate objective is to provide computational capabilities to predict plasma behavior—not unlike computational weather prediction—to optimize performance and to increase the reliability of magnetic confinement for fusion energy. Numerical modeling aids theoretical research by solving complicated mathematical models of plasma behavior including strong nonlinear effects and the influences of geometrical shaping of actual experiments. The numerical modeling itself remains an area of active research, due to challenges associated with simulating multiple temporal and spatial scales. The research summarized in this report spans computational and physical topics associated with state-of-the-art simulation of magnetized plasmas. The tasks performed for this grant are categorized according to whether they are primarily computational, algorithmic, or application-oriented in nature. All involve the development and use of the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code, which is described at http://nimrodteam.org. With respect to computation, we have tested and refined methods for solving the large algebraic systems of equations that result from our numerical approximations of the physical model. Collaboration with the Terascale Optimal PDE Solvers (TOPS) SciDAC center led us to the SuperLU_DIST software library [http://crd.lbl.gov/~xiaoye/SuperLU/] for solving large sparse matrices using direct methods on parallel computers. Switching to this solver library boosted NIMROD’s performance by a factor of five in typical large nonlinear simulations, which has been publicized as a success story of SciDAC-fostered collaboration. Furthermore, the SuperLU software does not assume any mathematical symmetry, and its generality provides an important capability for extending the physical model beyond magnetohydrodynamics (MHD). With respect to algorithmic and model development, our most significant accomplishment is the development of a new method for solving plasma models that treat electrons as an independent plasma component. These ‘two-fluid’ models encompass MHD and add temporal and spatial scales that are beyond the response of the ion species. Implementation and testing of a previously published algorithm did not prove successful for NIMROD, and the new algorithm has since been devised, analyzed, and implemented. Two-fluid modeling, an important objective of the original NIMROD project, is now routine in 2D applications. Algorithmic components for 3D modeling are in place and tested, though further computational work is still needed for efficiency. Other algorithmic work extends the ion-fluid stress tensor to include models for parallel and gyroviscous stresses. In addition, our hot-particle simulation capability received important refinements that permitted completion of a benchmark with the M3D code. A highlight of our applications work is the edge-localized mode (ELM) modeling, which was part of the first-ever computational Performance Target for the DOE Office of Fusion Energy Science, see http://www.science.doe.gov/ofes/performancetargets.shtml.
Our efforts allowed MHD simulations to progress late into the nonlinear stage, where energy is conducted to the wall location. They also produced a two-fluid ELM simulation starting from experimental information and demonstrating critical drift effects that are characteristic of two-fluid physics. Another important application is the internal kink mode in a tokamak. Here, the primary purpose of the study has been to benchmark the two main code development lines of CEMM, NIMROD and M3D, on a relevant nonlinear problem. Results from the two codes show repeating nonlinear relaxation events driven by the kink mode over quantitatively comparable timescales. The work has launched a more comprehensive nonlinear benchmarking exercise, where realistic transport effects have an important role.
NASA Astrophysics Data System (ADS)
Cao, Qing; Nastac, Laurentiu; Pitts-Baggett, April; Yu, Qiulin
2018-03-01
A quick modeling analysis approach for predicting the slag-steel reaction and desulfurization kinetics in argon gas-stirred ladles has been developed in this study. The model consists of two uncoupled components: (i) a computational fluid dynamics (CFD) model for predicting the fluid flow and the characteristics of slag-steel interface, and (ii) a multicomponent reaction kinetics model for calculating the desulfurization evolution. The steel-slag interfacial area and mass transfer coefficients predicted by the CFD simulation are used as the processing data for the reaction model. Since the desulfurization predictions are uncoupled from the CFD simulation, the computational time of this uncoupled predictive approach is decreased by at least 100 times for each case study when compared with the CFD-reaction kinetics fully coupled model. The uncoupled modeling approach was validated by comparing the evolution of steel and slag compositions with the experimentally measured data during ladle metallurgical furnace (LMF) processing at Nucor Steel Tuscaloosa, Inc. Then, the validated approach was applied to investigate the effects of the initial steel and slag compositions, as well as different types of additions during the refining process on the desulfurization efficiency. The results revealed that the sulfur distribution ratio and the desulfurization reaction can be promoted by making Al and CaO additions during the refining process. It was also shown that by increasing the initial Al content in liquid steel, both Al oxidation and desulfurization rates rapidly increase. In addition, it was found that the variation of the initial Si content in steel has no significant influence on the desulfurization rate. Lastly, if the initial CaO content in slag is increased or the initial Al2O3 content is decreased in the fluid-slag compositional range, the desulfurization rate can be improved significantly during the LMF process.
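The benefit of the uncoupling is easy to see in miniature: once the CFD run has supplied an interfacial area A and mass transfer coefficient k, the desulfurization history reduces to a cheap rate-law integration. A minimal sketch with purely hypothetical numbers (not plant data, and not the authors' full multicomponent kinetics):

```python
import numpy as np

def desulfurization(S0, S_eq, k, A, V, t_end=1800.0, dt=1.0):
    """First-order mass-transfer-controlled sketch:
        dS/dt = -(k * A / V) * (S - S_eq)
    k (m/s) and A (m^2) are the quantities supplied by the separate CFD run;
    V is the liquid steel volume (m^3)."""
    n = int(t_end / dt)
    S = np.empty(n)
    S[0] = S0
    for i in range(1, n):
        S[i] = S[i - 1] - (k * A / V) * (S[i - 1] - S_eq) * dt
    return S

S = desulfurization(S0=0.030, S_eq=0.002, k=5e-4, A=6.0, V=20.0)
print(f"S after 30 min: {S[-1]:.4f} wt%")
```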
1990-01-01
PERFORMED BY: In-house efforts accomplished by Program Executive Officer for Air Defense Systems, Program Manager-Line of Sight-Forward-Heavy and U.S. ... evaluation of mechanisms involved in the recovery of heavy metals from waste sludges * (U) Complete determination of basic mechanisms responsible for ... tities for characterization * (U) Refined computer model for design of effective heavy metal spin-insensitive EFP warhead liner * (U) Identified
Characterizing and modeling organic binder burnout from green ceramic compacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewsuk, K.G.; Cesarano, J. III; Cochran, R.J.
New characterization and computational techniques have been developed to evaluate and simulate binder burnout from pressed powder compacts. Using engineering data and a control volume finite element method (CVFEM) thermal model, a nominally one dimensional (1-D) furnace has been designed to test, refine, and validate computer models that simulate binder burnout assuming a 1-D thermal gradient across the ceramic body during heating. Experimentally, 1-D radial heat flow was achieved using a rod-shaped heater that directly heats the inside surface of a stack of ceramic annuli surrounded by thermal insulation. The computational modeling effort focused on producing a macroscopic model for binder burnout based on continuum approaches to heat and mass conservation for porous media. Two increasingly complex models have been developed that predict the temperature and mass of a porous powder compact as a function of time during binder burnout. The more complex model also predicts the pressure within a powder compact during binder burnout. Model predictions are in reasonably good agreement with experimental data on binder burnout from a 57-65% relative density pressed powder compact of a 94 wt% alumina body containing ~3 wt% binder. In conjunction with the detailed experimental data from the prototype binder burnout furnace, the models have also proven useful for conducting parametric studies to elucidate critical material property data required to support model development.
Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen J.; Gati, Frank; Yuko, James R.; Motil, Brian J.; Lumpkin, Forrest E.
2009-01-01
The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module showed that thermal protection is necessary because of significant heating from the plume.
Systems and methods for predicting materials properties
Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano
2007-11-06
Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.
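A minimal sketch of the deduce-then-iterate pattern described above, assuming, purely for illustration, a linear relationship and that the predicted output doubles as the last input feature:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def deduce_and_iterate(X_ref, y_ref, x_new, n_iter=10, rtol=1e-6):
    """Fit a relationship on reference (input, output) data, then iterate
    the prediction to a fixed point when the output is part of the input."""
    model = LinearRegression().fit(X_ref, y_ref)   # the deduced relationship
    x = np.array(x_new, dtype=float)
    y = model.predict(x.reshape(1, -1))[0]
    for _ in range(n_iter):
        if np.isclose(y, x[-1], rtol=rtol):        # converged
            break
        x[-1] = y                                  # feed the output back in
        y = model.predict(x.reshape(1, -1))[0]
    return y
```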
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2009-01-01
In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.
Sen Gupta, Anirban
2016-01-01
Packaging of drug molecules within microparticles and nanoparticles has become an important strategy in intravascular drug delivery, where the particles are designed to protect the drugs from plasma effects, increase drug residence time in circulation, and often facilitate drug delivery specifically at disease sites. To this end, over the past few decades, interdisciplinary research has focused on developing biocompatible materials for particle fabrication, technologies for particle manufacture, drug formulation within the particles for efficient loading, and controlled release and refinement of particle surface chemistries to render selectivity toward disease sites for site-selective action. The majority of the particle systems developed for such purposes are spherical nano- and microparticles, and they have had low-to-moderate success in clinical translation. To refine the design of delivery systems for enhanced performance, in recent years researchers have started focusing on the physicomechanical aspects of carrier particles, especially their shape, size, and stiffness, as new design parameters. Recent computational modeling studies, as well as experimental studies using microfluidic devices, indicate that these design parameters greatly influence the particles' behavior in hemodynamic circulation, as well as cell-particle interactions for targeted payload delivery. Certain cellular components of circulation are also providing interesting natural cues for refining the design of drug carrier systems. Based on such findings, new benefits and challenges are being realized for the next generation of drug carriers. The current article will provide a comprehensive review of these findings and discuss the emerging design paradigm of incorporating physicomechanical components in fabrication of particulate drug delivery systems. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Boo, G.; Fabrikant, S. I.; Leyk, S.
2015-08-01
In spatial epidemiology, disease incidence and demographic data are commonly summarized within larger regions such as administrative units because of privacy concerns. As a consequence, analyses using these aggregated data are subject to the Modifiable Areal Unit Problem (MAUP) as the geographical manifestation of ecological fallacy. In this study, we create small area disease estimates through dasymetric refinement, and investigate the effects on predictive epidemiological models. We perform a binary dasymetric refinement of municipality-aggregated dog tumor incidence counts in Switzerland for the year 2008 using residential land as a limiting ancillary variable. This refinement is expected to improve the quality of spatial data originally aggregated within arbitrary administrative units by deconstructing them into discontinuous subregions that better reflect the underlying population distribution. To shed light on the effects of this refinement, we compare a predictive statistical model that uses unrefined administrative units with one that uses dasymetrically refined spatial units. Model diagnostics and spatial distributions of model residuals are assessed to evaluate the model performances in different regions. In particular, we explore changes in the spatial autocorrelation of the model residuals due to spatial refinement of the enumeration units in a selected mountainous region, where the rugged topography induces great shifts of the analytical units, i.e., residential land. Such spatial data quality refinement results in a more realistic estimation of the population distribution within administrative units, and thus in a more accurate modeling of dog tumor incidence patterns. Our results emphasize the benefits of implementing a dasymetric modeling framework in veterinary spatial epidemiology.
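Binary dasymetric refinement itself is compact to state in code; the grid-based sketch below redistributes each unit's count uniformly over its residential cells (uniform redistribution, and the absence of a fallback for units with no residential land, are simplifications of this illustration):

```python
import numpy as np

def dasymetric_refine(unit_ids, counts_per_unit, residential_mask):
    """unit_ids: 2D int array of administrative-unit labels per grid cell;
    counts_per_unit: {unit_id: aggregated incidence count};
    residential_mask: 2D bool array, True on residential land."""
    out = np.zeros(unit_ids.shape, dtype=float)
    for uid, count in counts_per_unit.items():
        cells = (unit_ids == uid) & residential_mask
        n = int(cells.sum())
        if n:
            out[cells] = count / n   # spread over residential cells only
    return out
```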
Dragomir-Daescu, Dan; Buijs, Jorn Op Den; McEligot, Sean; Dai, Yifei; Entwistle, Rachel C.; Salas, Christina; Melton, L. Joseph; Bennet, Kevin E.; Khosla, Sundeep; Amin, Shreyasee
2013-01-01
Clinical implementation of quantitative computed tomography-based finite element analysis (QCT/FEA) of proximal femur stiffness and strength to assess the likelihood of proximal femur (hip) fractures requires a unified modeling procedure, consistency in predicting bone mechanical properties, and validation with realistic test data that represent typical hip fractures, specifically, a sideways fall on the hip. We, therefore, used two sets (n = 9, each) of cadaveric femora with bone densities varying from normal to osteoporotic to build, refine, and validate a new class of QCT/FEA models for hip fracture under loading conditions that simulate a sideways fall on the hip. Convergence requirements of finite element models of the first set of femora led to the creation of a new meshing strategy and a robust process to model proximal femur geometry and material properties from QCT images. We used a second set of femora to cross-validate the model parameters derived from the first set. Refined models were validated experimentally by fracturing femora using specially designed fixtures, load cells, and high speed video capture. CT image reconstructions of fractured femora were created to classify the fractures. The predicted stiffness (cross-validation R2 = 0.87), fracture load (cross-validation R2 = 0.85), and fracture patterns (83% agreement) correlated well with experimental data. PMID:21052839
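The QCT/FEA pipeline above assigns element-wise material properties from image intensities. A minimal sketch of that mapping, assuming a linear HU-to-density phantom calibration and a generic power-law density-elasticity relation; the coefficients below are representative literature values, not the study's calibrated ones:

```python
import numpy as np

# Illustrative only: the paper's calibrated coefficients are not given in
# the abstract, so the constants below are generic placeholders of the
# power-law form E = a * rho^b commonly used in QCT/FEA bone modeling.

def hu_to_density(hu, slope=0.0008, intercept=0.0):
    """Map CT Hounsfield units to an equivalent density (g/cm^3)
    via a linear phantom calibration (slope/intercept assumed)."""
    return np.maximum(slope * hu + intercept, 1e-4)

def density_to_modulus(rho, a=6850.0, b=1.49):
    """Power-law density-elasticity relation, E in MPa (coefficients
    are representative literature values, not the study's own)."""
    return a * rho ** b

hu = np.array([200.0, 600.0, 1200.0])     # trabecular-to-cortical range
E = density_to_modulus(hu_to_density(hu))
print(E)  # element-wise moduli assigned to the FE mesh
```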
Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Ahmad, Jasim U.
2012-01-01
Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.
Lipkin, W. Ian
2010-01-01
Summary: Platforms for pathogen discovery have improved since the days of Koch and Pasteur; nonetheless, the challenges of proving causation are at least as daunting as they were in the late 1800s. Although we will almost certainly continue to accumulate low-hanging fruit, where simple relationships will be found between the presence of a cultivatable agent and a disease, these successes will be increasingly infrequent. The future of the field rests instead in our ability to follow footprints of infectious agents that cannot be characterized using classical microbiological techniques and to develop the laboratory and computational infrastructure required to dissect complex host-microbe interactions. I have tried to refine the criteria used by Koch and successors to prove linkage to disease. These refinements are working constructs that will continue to evolve in light of new technologies, new models, and new insights. What will endure is the excitement of the chase. Happy hunting! PMID:20805403
Luginbühl, P; Güntert, P; Billeter, M; Wüthrich, K
1996-09-01
A new program for molecular dynamics (MD) simulation and energy refinement of biological macromolecules, OPAL, is introduced. Combined with the supporting program TRAJEC for the analysis of MD trajectories, OPAL affords high efficiency and flexibility for work with different force fields, and offers a user-friendly interface and extensive trajectory analysis capabilities. Salient features are computational speeds of up to 1.5 GFlops on vector supercomputers such as the NEC SX-3, ellipsoidal boundaries to reduce the system size for studies in explicit solvents, and natural treatment of the hydrostatic pressure. Practical applications of OPAL are illustrated with MD simulations of pure water, energy minimization of the NMR structure of the mixed disulfide of a mutant E. coli glutaredoxin with glutathione in different solvent models, and MD simulations of a small protein, pheromone Er-2, using either instantaneous or time-averaged NMR restraints, or no restraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R W; Pember, R B; Elliott, N S
2001-10-22
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.
NASA Astrophysics Data System (ADS)
Zaichik, Leonid I.; Alipchenkov, Vladimir M.
2007-11-01
The purposes of the paper are threefold: (i) to refine the statistical model of preferential particle concentration in isotropic turbulence that was previously proposed by Zaichik and Alipchenkov [Phys. Fluids 15, 1776 (2003)], (ii) to investigate the effect of clustering of low-inertia particles using the refined model, and (iii) to advance a simple model for predicting the collision rate of aerosol particles. The model developed is based on a kinetic equation for the two-point probability density function of the relative velocity distribution of particle pairs. Improvements in predicting the preferential concentration of low-inertia particles are attained by refining the description of the turbulent velocity field of the carrier fluid to include a difference between the time scales of the strain and rotation rate correlations. The refined model results in better agreement with direct numerical simulations for aerosol particles.
THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauschlicher, C. W.; Ricca, A.; Boersma, C.
The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm⁻¹). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.
Triangular node for Transmission-Line Modeling (TLM) applied to bio-heat transfer.
Milan, Hugo F M; Gebremedhin, Kifle G
2016-12-01
Transmission-Line Modeling (TLM) is a numerical method used to solve complex and time-domain bio-heat transfer problems. In TLM, rectangles are used to discretize two-dimensional problems. The drawback in using rectangular shapes is that instead of refining only the domain of interest, a large additional domain will also be refined in the x and y axes, which results in increased computational time and memory space. In this paper, we developed a triangular node for TLM applied to bio-heat transfer that does not have the drawback associated with the rectangular nodes. The model includes heat source, blood perfusion (advection), boundary conditions and initial conditions. The boundary conditions could be adiabatic, temperature, heat flux, or convection. A matrix equation for TLM, which simplifies the solution of time-domain problems or solves steady-state problems, was also developed. The predicted results were compared against results obtained from the solution of a simplified two-dimensional problem, and they agreed within 1% for a mesh length of triangular faces of 59 µm ± 9 µm (mean ± standard deviation) and a time step of 1 ms. Copyright © 2016 Elsevier Ltd. All rights reserved.
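For context, the underlying bio-heat problem that the triangular TLM node discretizes is the Pennes equation. The sketch below is a minimal explicit finite-difference solution of that target PDE, not the paper's TLM formulation, with assumed tissue properties:

```python
import numpy as np

# Minimal explicit finite-difference sketch of the 2-D Pennes bio-heat
# equation (NOT the paper's TLM node):
#   rho*c*dT/dt = k*lap(T) + w_b*rho_b*c_b*(T_a - T) + q_m
k, rho, c = 0.5, 1050.0, 3600.0          # tissue properties (assumed)
wb, rhob, cb, Ta = 5e-4, 1060.0, 3860.0, 37.0
qm = 400.0                               # metabolic heat (W/m^3)
h = 1e-3                                 # 1 mm grid spacing
dt = 0.2 * rho * c * h**2 / (4 * k)      # stable explicit time step

T = np.full((50, 50), 37.0)
T[20:30, 20:30] = 40.0                   # heated region (e.g., a source)
for _ in range(500):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / h**2
    T += dt * (k * lap + wb * rhob * cb * (Ta - T) + qm) / (rho * c)
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 37.0  # fixed boundaries
print(T.max(), T.mean())
```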
NASA Astrophysics Data System (ADS)
Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.
2013-10-01
The paper presents a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results for the interaction of shock waves with a low-density bubble and for gas flow under the forces of gravity are presented. The algorithm combines high-resolution shock capturing, the second-order accuracy of TVD schemes, and the low numerical diffusion of the advection scheme. Many complex problems of continuum mechanics are solved numerically on structured or unstructured grids. Improving the accuracy of such calculations requires a sufficiently fine grid (small cell size), which leads to the drawback of a substantial increase in computation time. For complex problems it is therefore reasonable to use the method of Adaptive Mesh Refinement: the grid is refined only in the regions of interest, where, e.g., shock waves are generated, or where complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, execution of the application on the resulting sequence of nested, successively finer grids can be parallelized. The proposed algorithm is based on the AMR method. Utilization of the AMR method can significantly improve the resolution of the difference grid in regions of high interest while accelerating the calculation of multi-dimensional problems. Parallel versions of the analyzed difference models were implemented for calculations on graphics processors using the CUDA technology [1].
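The core of the AMR idea above, refining only where the solution demands it, can be illustrated with the flagging step alone: mark cells whose gradient exceeds a threshold as candidates for a finer grid. A minimal numpy sketch (the field and threshold are illustrative):

```python
import numpy as np

# Sketch of the flagging step in block-structured AMR: cells whose
# density gradient exceeds a threshold are marked for refinement, so
# only regions near shocks or bubble interfaces receive finer grids.
def flag_for_refinement(rho, dx, threshold):
    gx, gy = np.gradient(rho, dx)
    return np.hypot(gx, gy) > threshold

rho = np.ones((64, 64))
rho[:, 32:] = 4.0                  # a shock-like jump mid-domain
flags = flag_for_refinement(rho, dx=1.0, threshold=0.5)
print(flags.sum(), "of", flags.size, "cells flagged")
```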
Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Zwart, Peter H.; Hung, Li-Wei; Read, Randy J.; Adams, Paul D.
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution. PMID:18094468
Possible quantum algorithm for the Lipshitz-Sarkar-Steenrod square for Khovanov homology
NASA Astrophysics Data System (ADS)
Ospina, Juan
2013-05-01
Recently the celebrated Khovanov homology was introduced as a target for topological quantum computation, given that Khovanov homology provides a generalization of the Jones polynomial and it is therefore possible to think about a generalization of the Aharonov-Jones-Landau algorithm. Recently, Lipshitz and Sarkar introduced a space-level refinement of Khovanov homology, which is called Khovanov homotopy. This refinement induces a Steenrod square operation Sq2 on Khovanov homology, which they describe explicitly, and some computations of Sq2 were presented. In particular, examples of links with identical integral Khovanov homology but with distinct Khovanov homotopy types were shown. In the present work we introduce possible quantum algorithms for the Lipshitz-Sarkar-Steenrod square for Khovanov homology and their possible simulations using computer algebra.
The report of the Gravity Field Workshop
NASA Astrophysics Data System (ADS)
Smith, D. E.
1982-04-01
A Gravity Field Workshop was convened to review the actions which could be taken prior to a GRAVSAT mission to improve the Earth's gravity field model. This review focused on the potential improvements in the Earth's gravity field which could be obtained using the current satellite and surface gravity data base. In particular, actions to improve the quality of the gravity field determination through refined measurement corrections, selected data augmentation and a more accurate reprocessing of the data were considered. In addition, recommendations were formulated which define actions which NASA should take to develop the necessary theoretical and computation techniques for gravity model determination and to use these approaches to improve the accuracy of the Earth's gravity model.
Computer automation for feedback system design
NASA Technical Reports Server (NTRS)
1975-01-01
Mathematical techniques and explanations of various steps used by an automated computer program to design feedback systems are summarized. Special attention was given to refining the automatic evaluation of suboptimal loop transmission and the translation of time-domain to frequency-domain specifications.
What Does Research on Computer-Based Instruction Have to Say to the Reading Teacher?
ERIC Educational Resources Information Center
Balajthy, Ernest
1987-01-01
Examines questions typically asked about the effectiveness of computer-based reading instruction, suggesting that these questions must be refined to provide meaningful insight into the issues involved. Describes several critical problems with existing research and presents overviews of research on the effects of computer-based instruction on…
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
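Both records above build the mesh by recursive subdivision of a single root Cartesian cell and store it in a binary tree. A minimal sketch of that subdivision idea, assuming a circular body, a quadtree split, and a corner-based (approximate) surface-crossing test, which simplifies away the papers' cut-cell machinery:

```python
import numpy as np

# Sketch of recursive Cartesian subdivision: starting from one root
# cell, refine any cell crossed by the body surface (here a unit
# circle) until a maximum depth, yielding a tree of leaf cells.
def crosses_circle(x0, y0, size, r=1.0):
    corners = [(x0, y0), (x0 + size, y0), (x0, y0 + size),
               (x0 + size, y0 + size)]
    inside = [np.hypot(cx, cy) < r for cx, cy in corners]
    return any(inside) and not all(inside)   # surface cuts the cell

def subdivide(x0, y0, size, depth, max_depth, leaves):
    if depth == max_depth or not crosses_circle(x0, y0, size):
        leaves.append((x0, y0, size))         # keep as a leaf cell
        return
    h = size / 2.0                            # split into 4 children
    for dx in (0.0, h):
        for dy in (0.0, h):
            subdivide(x0 + dx, y0 + dy, h, depth + 1, max_depth, leaves)

leaves = []
subdivide(-2.0, -2.0, 4.0, 0, max_depth=6, leaves=leaves)
print(len(leaves), "leaf cells; smallest size", min(l[2] for l in leaves))
```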
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator, since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
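PLUM is a full framework; the flavor of the underlying balancing problem can be conveyed with a simple longest-processing-time greedy heuristic that assigns refinement-weighted partition loads to processors. This is illustrative only, not PLUM's actual algorithm:

```python
import heapq

# Longest-processing-time (LPT) greedy assignment: give each workload,
# largest first, to the currently least-loaded processor.
def lpt_assign(workloads, nprocs):
    heap = [(0.0, p) for p in range(nprocs)]   # (current load, proc id)
    heapq.heapify(heap)
    assignment = {}
    for wid, w in sorted(workloads.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)          # least-loaded processor
        assignment[wid] = p
        heapq.heappush(heap, (load + w, p))
    return assignment

work = {0: 120, 1: 40, 2: 95, 3: 80, 4: 60}   # refinement-weighted loads
print(lpt_assign(work, nprocs=2))
```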
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, Jon D.
1990-01-01
Uncertainty of frequency response using the fuzzy set method and on-orbit response prediction using laboratory test data to refine an analytical model are emphasized with respect to large space structures. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.
Source term evaluation for combustion modeling
NASA Technical Reports Server (NTRS)
Sussman, Myles A.
1993-01-01
A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.
Reduced Fragment Diversity for Alpha and Alpha-Beta Protein Structure Prediction using Rosetta.
Abbass, Jad; Nebel, Jean-Christophe
2017-01-01
Protein structure prediction is considered a main challenge in computational biology. The biannual international competition, Critical Assessment of protein Structure Prediction (CASP), has shown in its eleventh experiment that free modelling target predictions are still beyond reliable accuracy; therefore, much effort should be made to improve ab initio methods. Arguably, Rosetta is considered the most competitive method when it comes to targets with no homologues. Relying on fragments of length 9 and 3 from known structures, Rosetta creates putative structures by assembling candidate fragments. Generally, the structure with the lowest energy score, also known as the first model, is chosen to be the "predicted one". A thorough study has been conducted on the role and diversity of 3-mers involved in Rosetta's model "refinement" phase. Usage of the standard number of 3-mers - i.e. 200 - has been shown to degrade alpha and alpha-beta protein conformations initially achieved by assembling 9-mers. Therefore, a new prediction pipeline is proposed for Rosetta where the "refinement" phase is customised according to a target's structural class prediction. Over 8% improvement in terms of first model structure accuracy is reported for the alpha and alpha-beta classes when decreasing the number of 3-mers. Copyright © Bentham Science Publishers.
Tetrahedral node for Transmission-Line Modeling (TLM) applied to Bio-heat Transfer.
Milan, Hugo F M; Gebremedhin, Kifle G
2016-12-01
Transmission-Line Modeling (TLM) is a numerical method used to solve complex and time-domain bio-heat transfer problems. In TLM, parallelepipeds are used to discretize three-dimensional problems. The drawback in using parallelepiped shapes is that instead of refining only the domain of interest, a large additional domain would also have to be refined, which results in increased computational time and memory space. In this paper, we developed a tetrahedral node for TLM applied to bio-heat transfer that does not have the drawback associated with the parallelepiped node. The model includes heat source, blood perfusion, boundary conditions and initial conditions. The boundary conditions could be adiabatic, temperature, heat flux, or convection. The predicted temperature and heat flux were compared against results from an analytical solution and the results agreed within 2% for a mesh size of 69,941 nodes and a time step of 5ms. The method was further validated against published results of maximum skin-surface temperature difference in a breast with and without tumor and the results agreed within 6%. The published results were obtained from a model that used parallelepiped TLM node. An open source software, TLMBHT, was written using the theory developed herein and is available for download free-of-charge. Copyright © 2016 Elsevier Ltd. All rights reserved.
Welberry, T R; Goossens, D J; Edwards, A J; David, W I
2001-01-01
A recently developed method for fitting a Monte Carlo computer-simulation model to observed single-crystal diffuse X-ray scattering has been used to study the diffuse scattering in benzil, diphenylethanedione, C(6)H(5)-CO-CO-C(6)H(5). A model involving 13 parameters consisting of 11 intermolecular force constants, a single intramolecular torsional force constant and a local Debye-Waller factor was refined to give an agreement factor, R = [Σω(ΔI)²/ΣωI(obs)²]^(1/2), of 14.5% for 101,324 data points. The model was purely thermal in nature. The analysis has shown that the diffuse lines, which feature so prominently in the observed diffraction patterns, are due to strong longitudinal displacement correlations. These are transmitted from molecule to molecule via a network of contacts involving hydrogen bonding of an O atom on one molecule and the para H atom of the phenyl ring of a neighbouring molecule. The analysis also allowed the determination of a torsional force constant for rotations about the single bonds in the molecule. This is the first diffuse scattering study in which measurement of such internal molecular torsion forces has been attempted.
Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1996-01-01
A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: A linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures is briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
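The gradient-limited, linear reconstruction step named above can be illustrated in one dimension: cell averages are extended linearly to the faces with a minmod-limited slope, so no new extrema appear in the states fed to the approximate Riemann solver. A minimal numpy sketch:

```python
import numpy as np

# Minmod limiter: take the smaller of the two one-sided differences
# when they agree in sign, zero otherwise (flat at discontinuities).
def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

def face_states(u, dx):
    """Limited linear reconstruction of 1-D cell averages u."""
    du_l = np.diff(u, prepend=u[0])      # backward differences
    du_r = np.diff(u, append=u[-1])      # forward differences
    slope = minmod(du_l, du_r) / dx
    u_left_face = u - 0.5 * dx * slope   # state at each cell's left face
    u_right_face = u + 0.5 * dx * slope  # state at each cell's right face
    return u_left_face, u_right_face

u = np.array([1.0, 1.0, 1.0, 4.0, 4.0, 4.0])   # a discontinuity
print(face_states(u, dx=1.0))    # slopes vanish at the jump, as desired
```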
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central, finite difference, numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrated results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
Multiple-body simulation with emphasis on integrated Space Shuttle vehicle
NASA Technical Reports Server (NTRS)
Chiu, Ing-Tsau
1993-01-01
The program to obtain intergrid communications - Pegasus - was enhanced to make better use of computing resources. Periodic block tridiagonal and penta-diagonal routines in OVERFLOW were modified to use a better algorithm to speed up the calculation for grids with periodic boundary conditions. Several programs were added to the collar grid tools, and a user-friendly shell script was developed to help users generate collar grids. The user interface for HYPGEN was modified to cope with the changes in HYPGEN. ET/SRB attach hardware grids were added to the computational model for the Space Shuttle and are currently incorporated into the refined shuttle model jointly developed at Johnson Space Center and Ames Research Center. Flow simulation for the integrated Space Shuttle vehicle at flight Reynolds number was carried out and compared with flight data as well as with the earlier simulation at wind tunnel Reynolds number.
Meso-scale framework for modeling granular material using computed tomography
Turner, Anne K.; Kim, Felix H.; Penumadu, Dayakar; ...
2016-03-17
Numerical modeling of unconsolidated granular materials is comprised of multiple nonlinear phenomena. Accurately capturing these phenomena, including grain deformation and intergranular forces, depends on resolving contact regions several orders of magnitude smaller than the grain size. Here, we investigate a method for capturing the morphology of the individual particles using computed X-ray and neutron tomography, which allows for accurate characterization of the interaction between grains. The ability of these numerical approaches to determine stress concentrations at grain contacts is important in order to capture catastrophic splitting of individual grains, which has been shown to play a key role in the plastic behavior of the granular material on the continuum level. Discretization approaches, including mesh refinement and finite element type selection, are presented to capture high stress concentrations at contact points between grains. The effect of a grain's coordination number on the stress concentrations is also investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zi-Kui; Gleeson, Brian; Shang, Shunli
This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram), and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stabilities. The developed description included composition ranges typical for coating alloys and, hence, allows for prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fraction, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally-protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad - for more rapid discovery and development of new materials.
Forecasting runout of rock and debris avalanches
Iverson, Richard M.; Evans, S.G.; Mugnozza, G.S.; Strom, A.; Hermanns, R.L.
2006-01-01
Physically based mathematical models and statistically based empirical equations each may provide useful means of forecasting runout of rock and debris avalanches. This paper compares the foundations, strengths, and limitations of a physically based model and a statistically based forecasting method, both of which were developed to predict runout across three-dimensional topography. The chief advantage of the physically based model results from its ties to physical conservation laws and well-tested axioms of soil and rock mechanics, such as the Coulomb friction rule and effective-stress principle. The output of this model provides detailed information about the dynamics of avalanche runout, at the expense of high demands for accurate input data, numerical computation, and experimental testing. In comparison, the statistical method requires relatively modest computation and no input data except identification of prospective avalanche source areas and a range of postulated avalanche volumes. Like the physically based model, the statistical method yields maps of predicted runout, but it provides no information on runout dynamics. Although the two methods differ significantly in their structure and objectives, insights gained from one method can aid refinement of the other.
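As one concrete example of the statistical approach, inundation-area forecasting of the LAHARZ type associated with Iverson's work scales cross-sectional and planimetric inundated areas to flow volume by a 2/3 power law. The coefficients below are the published lahar values, used here only to show the calculation's shape; rock and debris avalanches require their own calibration:

```python
import numpy as np

# Empirical inundation scaling: A = c1*V^(2/3) (cross-section area),
# B = c2*V^(2/3) (planimetric area).  Coefficients are lahar values,
# shown for illustration only.
def inundation_areas(volume_m3, c1=0.05, c2=200.0):
    v23 = volume_m3 ** (2.0 / 3.0)
    return c1 * v23, c2 * v23      # (A, B) in m^2

for V in (1e5, 1e6, 1e7):
    A, B = inundation_areas(V)
    print(f"V={V:.0e} m^3 -> A={A:.0f} m^2, B={B:.0f} m^2")
```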
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.
Refinements in the Los Alamos model of the prompt fission neutron spectrum
Madland, D. G.; Kahler, A. C.
2017-01-01
This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are due to new measurements of the spectrum and related fission observables, many of which were not available in 1982. They are also due to a number of detailed studies and comparisons of the model with previous and present experimental results, including not only the differential spectrum but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.
Conformational Heterogeneity of Unbound Proteins Enhances Recognition in Protein-Protein Encounters.
Pallara, Chiara; Rueda, Manuel; Abagyan, Ruben; Fernández-Recio, Juan
2016-07-12
To understand cellular processes at the molecular level we need to improve our knowledge of protein-protein interactions, from a structural, mechanistic, and energetic point of view. Current theoretical studies and computational docking simulations show that protein dynamics plays a key role in protein association and support the need for including protein flexibility in modeling protein interactions. Assuming the conformational selection binding mechanism, in which the unbound state can sample bound conformers, one possible strategy to include flexibility in docking predictions would be the use of conformational ensembles originated from unbound protein structures. Here we present an exhaustive computational study about the use of precomputed unbound ensembles in the context of protein docking, performed on a set of 124 cases of the Protein-Protein Docking Benchmark 3.0. Conformational ensembles were generated by conformational optimization and refinement with MODELLER and by short molecular dynamics trajectories with AMBER. We identified those conformers providing optimal binding and investigated the role of protein conformational heterogeneity in protein-protein recognition. Our results show that a restricted conformational refinement can generate conformers with better binding properties and improve docking encounters in medium-flexible cases. For more flexible cases, a more extended conformational sampling based on Normal Mode Analysis was proven helpful. We found that successful conformers provide better energetic complementarity to the docking partners, which is compatible with recent views of binding association. In addition to the mechanistic considerations, these findings could be exploited for practical docking predictions of improved efficiency.
Structure refinement of membrane proteins via molecular dynamics simulations.
Dutagaci, Bercem; Heo, Lim; Feig, Michael
2018-07-01
A refinement protocol based on physics-based techniques established for water soluble proteins is tested for membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and aqueous solvent but the presence of lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions suggesting that differences in internal packing is more important than orientations relative to the membrane during the refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.
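The select-and-average step of this protocol can be sketched directly: score the MD snapshots, keep the best-scoring subset, and average their pre-aligned coordinates. The scoring function below is a placeholder where DFIRE, RWplus, or an implicit-membrane score would plug in; everything here is a simplified sketch, not the paper's code:

```python
import numpy as np

# Select-and-average refinement: rank snapshots by a scoring function,
# keep the top-k, and average their coordinates into a refined model.
def refine_by_averaging(snapshots, score_fn, top_k=10):
    """snapshots: array (n_snap, n_atoms, 3) of aligned coordinates."""
    scores = np.array([score_fn(s) for s in snapshots])
    best = np.argsort(scores)[:top_k]          # lower score = better
    return snapshots[best].mean(axis=0)        # averaged refined model

rng = np.random.default_rng(0)
snaps = rng.normal(size=(100, 50, 3))          # stand-in trajectory
model = refine_by_averaging(snaps, score_fn=lambda s: np.sum(s**2))
print(model.shape)                             # (n_atoms, 3)
```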
Intelligent Systems: Terrestrial Observation and Prediction Using Remote Sensing Data
NASA Technical Reports Server (NTRS)
Coughlan, Joseph C.
2005-01-01
NASA has made science and technology investments to better utilize its large space-borne remote sensing data holdings of the Earth. With the launch of Terra, NASA created a data-rich environment where the challenge is to fully utilize the data collected from EOS; however, despite unprecedented amounts of observed data, there is a need for increasing the frequency, resolution, and diversity of observations. Current terrestrial models that use remote sensing data were constructed in a relatively data- and compute-limited era and do not take full advantage of on-line learning methods and assimilation techniques that can exploit these data. NASA has invested in visualization, data mining and knowledge discovery methods which have facilitated data exploitation, but these methods are insufficient for improving Earth science models that have extensive background knowledge, nor do they refine understanding of complex processes. Investing in interdisciplinary teams that include computational scientists can lead to new models and systems for online operation and analysis of data that can autonomously improve in prediction skill over time.
Influence of grid resolution, parcel size and drag models on bubbling fluidized bed simulation
Lu, Liqiang; Konan, Arthur; Benyahia, Sofiane
2017-06-02
Here, a bubbling fluidized bed is simulated with different numerical parameters, such as grid resolution and parcel size. We also examined the effect of using two homogeneous drag correlations and a heterogeneous drag based on the energy minimization method. A fast and reliable bubble detection algorithm was developed based on connected component labeling. The radial and axial solids volume fraction profiles are compared with experimental data and previous simulation results. These results show a significant influence of drag models on bubble size and voidage distributions and a much weaker dependence on numerical parameters. With a heterogeneous drag model that accounts for sub-scale structures, the void fraction in the bubbling fluidized bed can be well captured with a coarse grid and large computational parcels. Refining the CFD grid and reducing the parcel size can improve the simulation results, but with a large increase in computation cost.
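A bubble detector of the connected-component kind described above can be built from a void-fraction threshold and a labeling pass. A minimal sketch using scipy.ndimage; the threshold and cell area are assumed values, not the paper's:

```python
import numpy as np
from scipy import ndimage

# Threshold the void fraction, label connected gas-rich regions, and
# report an equivalent diameter per bubble.
def detect_bubbles(void_fraction, threshold=0.8, cell_area=1.0e-6):
    bubbles = void_fraction > threshold          # gas-rich cells
    labels, n = ndimage.label(bubbles)           # connected regions
    sizes = ndimage.sum(bubbles, labels, index=range(1, n + 1))
    d_eq = 2.0 * np.sqrt(sizes * cell_area / np.pi)
    return labels, d_eq

vf = np.full((40, 40), 0.45)
vf[10:18, 12:20] = 0.95                          # one synthetic bubble
labels, d_eq = detect_bubbles(vf)
print(len(d_eq), "bubble(s), equivalent diameter(s):", d_eq)
```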
NMR crystallography of α-poly(L-lactide).
Pawlak, Tomasz; Jaworska, Magdalena; Potrzebowski, Marek J
2013-03-07
A complementary approach that combines NMR measurements, analysis of X-ray and neutron powder diffraction data and advanced quantum mechanical calculations was employed to study the α-polymorph of L-polylactide. Such a strategy, which is known as NMR crystallography, to the best of our knowledge, is used here for the first time for the fine refinement of the crystal structure of a synthetic polymer. The GIPAW method was used to compute the NMR shielding parameters for the different models, which included the α-PLLA structure obtained by 2-dimensional wide-angle X-ray diffraction (WAXD) at -150 °C (model M1) and at 25 °C (model M2), neutron diffraction (WAND) measurements (model M3) and the fully optimized geometry of the PLLA chains in the unit cell with defined size (model M4). The influence of changes in the chain conformation on the (13)C σ(ii) NMR shielding parameters is shown. The correlation between the σ(ii) and δ(ii) values for the M1-M4 models revealed that the M4 model provided the best fit. Moreover, a comparison of the experimental (13)C NMR spectra with the spectra calculated using the M1-M4 models strongly supports the data for the M4 model. The GIPAW method, via verification using NMR measurements, was shown to be capable of the fine refinement of the crystal structures of polymers when coarse X-ray diffraction data for powdered samples are available.
Automatic computation of 2D cardiac measurements from B-mode echocardiography
NASA Astrophysics Data System (ADS)
Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin
2012-03-01
We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which can learn expert knowledge from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points for the measurements by utilizing the heart structure of the left ventricle, including the mitral and aortic valves. It employs a pseudo anatomic M-mode image, generated by accumulating the line images in the 2D parasternal long axis view along time, to refine the measurement landmark points. Experimental results with a large volume of data show that the algorithm runs fast and is robust, comparable to an expert.
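The pseudo anatomic M-mode construction reduces to sampling the same image line in every frame of the cine loop and stacking the samples over time, giving a depth-versus-time image on which moving interfaces are easy to track. A minimal sketch; the array shapes and line endpoints are illustrative:

```python
import numpy as np

# Build a pseudo M-mode image: sample one image line per frame and
# stack the samples column-by-column (depth down, time across).
def pseudo_m_mode(frames, p0, p1, n_samples=256):
    """frames: (n_frames, H, W); p0, p1: (row, col) line endpoints."""
    rows = np.linspace(p0[0], p1[0], n_samples).astype(int)
    cols = np.linspace(p0[1], p1[1], n_samples).astype(int)
    return np.stack([f[rows, cols] for f in frames], axis=1)

cine = np.random.rand(120, 200, 300)      # stand-in for a B-mode loop
mmode = pseudo_m_mode(cine, p0=(20, 150), p1=(180, 150))
print(mmode.shape)                        # (depth samples, time frames)
```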
The binding domain of the HMGB1 inhibitor carbenoxolone: Theory and experiment
NASA Astrophysics Data System (ADS)
Mollica, Luca; Curioni, Alessandro; Andreoni, Wanda; Bianchi, Marco E.; Musco, Giovanna
2008-05-01
We present a combined computational and experimental study of the interaction of the Box A of the HMGB1 protein and carbenoxolone, an inhibitor of its pro-inflammatory activity. The computational approach consists of classical molecular dynamics (MD) simulations based on the GROMOS force field with quantum-refined (QRFF) atomic charges for the ligand. Experimental data consist of fluorescence intensities, chemical shift displacements, saturation transfer differences and intermolecular Nuclear Overhauser Enhancement signals. Good agreement is found between observations and the conformation of the ligand-protein complex resulting from QRFF-MD. In contrast, simple docking procedures and MD based on the unrefined force field provide models inconsistent with experiment. The ligand-protein binding is dominated by non-directional interactions.
The dynamics of plate tectonics and mantle flow: from local to global scales.
Stadler, Georg; Gurnis, Michael; Burstedde, Carsten; Wilcox, Lucas C; Alisic, Laura; Ghattas, Omar
2010-08-27
Plate tectonics is regulated by driving and resisting forces concentrated at plate boundaries, but observationally constrained high-resolution models of global mantle flow remain a computational challenge. We capitalized on advances in adaptive mesh refinement algorithms on parallel computers to simulate global mantle flow by incorporating plate motions, with individual plate margins resolved down to a scale of 1 kilometer. Back-arc extension and slab rollback are emergent consequences of slab descent in the upper mantle. Cold thermal anomalies within the lower mantle couple into oceanic plates through narrow high-viscosity slabs, altering the velocity of oceanic plates. Viscous dissipation within the bending lithosphere at trenches amounts to approximately 5 to 20% of the total dissipation through the entire lithosphere and mantle.
Testing model for prediction system of 1-AU arrival times of CME-associated interplanetary shocks
NASA Astrophysics Data System (ADS)
Ogawa, Tomoya; den, Mitsue; Tanaka, Takashi; Sugihara, Kohta; Takei, Toshifumi; Amo, Hiroyoshi; Watari, Shinichi
We test a model to predict arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs) using a three-dimensional adaptive mesh refinement (AMR) code. The model is used for the prediction system we are developing, which has a Web-based user interface and is aimed at people who are not familiar with operating computers and numerical simulations or who are not researchers. We apply the model to interplanetary CME events. We first choose coronal parameters so that the properties of the background solar wind observed by the ACE spacecraft are reproduced. Then we input CME parameters observed by SOHO/LASCO. Finally, we compare the predicted arrival times with observed ones. We describe the results of the test and discuss tendencies of the model.
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
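The offline splitting tree can be sketched in a few lines: cluster the state variables by their snapshot histories with k-means, then restrict the parent basis vector to each cluster, so the children have disjoint support and sum back to the parent. This is a simplified, single-level version of the recursive construction:

```python
import numpy as np
from sklearn.cluster import KMeans

# Split one basis vector into k disjoint-support children using k-means
# on the snapshot rows (each state variable is a "sample" whose features
# are its snapshot history).
def split_basis_vector(phi, snapshots, k=2):
    """phi: (n_dof,); snapshots: (n_dof, n_snap) snapshot matrix."""
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=0).fit(snapshots).labels_
    children = []
    for c in range(k):
        child = np.zeros_like(phi)
        child[labels == c] = phi[labels == c]   # restrict to cluster c
        children.append(child)
    return children                              # disjoint-support split

rng = np.random.default_rng(1)
snaps = rng.normal(size=(200, 30))
phi = rng.normal(size=200)
kids = split_basis_vector(phi, snaps)
print(np.allclose(kids[0] + kids[1], phi))       # True: partition of phi
```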
Lee, Shu Jin; Lee, Heow Pueh; Tse, Kwong Ming; Cheong, Ee Cherk; Lim, Siak Piang
2012-06-01
Complex 3-D defects of the facial skeleton are difficult to reconstruct with freehand carving of autogenous bone grafts. Onlay bone grafts are hard to carve and are associated with imprecise graft-bone interface contact and bony resorption. Autologous cartilage is well established in ear reconstruction as it is easy to carve and is associated with minimal resorption. In the present study, we aimed to reconstruct the hypoplastic orbitozygomatic region in a patient with left hemifacial microsomia using computer-aided design and rapid prototyping to facilitate costal cartilage carving and grafting. A three-step process of (1) 3-D reconstruction of the computed tomographic image, (2) mirroring the facial skeleton, and (3) modeling and rapid prototyping of the left orbitozygomaticomalar region and reconstruction template was performed. The template aided in donor site selection and extracorporeal contouring of the rib cartilage graft to allow for an accurate fit of the graft to the bony model prior to final fixation in the patient. We are able to refine the existing computer-aided design and rapid prototyping methods to allow for extracorporeal contouring of grafts and present rib cartilage as a good alternative to bone for autologous reconstruction.
Using adaptive grid in modeling rocket nozzle flow
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1992-01-01
The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which cannot be solved analytically. However, this system of equations, called the Navier-Stokes equations, can be solved numerically. The accuracy and the convergence of the solution of the system of equations will depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With the advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. Adaptive grid generation is one of the schemes which can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite difference domain of the flow field, called the grid, neither has to be very fine nor strategically placed at the locations of sharp gradients. The grid is self-adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and placing it in the computer algorithm itself.
Benson, Nicholas F; Kranzler, John H; Floyd, Randy G
2016-10-01
Prior research examining relations between cognitive ability and academic achievement has been based on different theoretical models, has employed both latent and observed variables, and has used a variety of analytic methods. Not surprisingly, results have been inconsistent across studies. The aims of this study were to (a) examine how relations between psychometric g, Cattell-Horn-Carroll (CHC) broad abilities, and academic achievement differ across higher-order and bifactor models; (b) examine how well various types of observed scores corresponded with latent variables; and (c) compare two types of observed scores (i.e., refined and non-refined factor scores) as predictors of academic achievement. Results suggest that cognitive-achievement relations vary across theoretical models and that both types of factor scores tend to correspond well with the models on which they are based. However, orthogonal refined factor scores (derived from a bifactor model) have the advantage of controlling for multicollinearity arising from the measurement of psychometric g across all measures of cognitive abilities. Results indicate that the refined factor scores provide more precise representations of their targeted constructs than non-refined factor scores and maintain close correspondence with the cognitive-achievement relations observed for latent variables. Thus, we argue that orthogonal refined factor scores provide more accurate representations of the relations between CHC broad abilities and achievement outcomes than non-refined scores do. Further, the use of refined factor scores addresses calls for the application of scores based on latent variable models. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, a measured strain is fitted using a piecewise least-squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, computed deflections along the fibers are combined with a finite element model of the structure in order to interpolate and extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular plate wing. The theory is then applied to test data from a cantilevered swept-plate wing model. Computed results are compared with finite element results, results using another strain-based method, and photogrammetry data. For the computational model under an aeroelastic load, maximum deflection errors in the fore and aft, lateral, and vertical directions are -3.2 percent, 0.28 percent, and 0.09 percent, respectively; and maximum slope errors in roll and pitch directions are 0.28 percent and -3.2 percent, respectively. For the experimental model, deflection results at the tip are shown to be accurate to within 3.8 percent of the photogrammetry data and are accurate to within 2.2 percent in most cases. In general, excellent matching between target and computed values is accomplished in this study. Future refinement of this theory will allow it to monitor the deflection and health of an entire aircraft in real time, allowing for aerodynamic load computation, active flexible motion control, and active induced drag reduction.
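The first step, fitting the measured strain and integrating twice along a fiber, can be sketched for a cantilevered beam strip. The sketch assumes a known distance c from the strain gauge to the neutral axis (so curvature is strain/c) and clamped-root conditions w(0) = w'(0) = 0; the strain profile below is illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Fit measured surface strain with a cubic spline, convert to curvature,
# and integrate twice to recover slope and deflection along the fiber.
def deflection_from_strain(x, strain, c):
    kappa = CubicSpline(x, strain / c)     # curvature w''(x)
    slope = kappa.antiderivative()         # w'(x), with w'(x[0]) = 0
    defl = slope.antiderivative()          # w(x),  with w(x[0]) = 0
    return slope(x), defl(x)

x = np.linspace(0.0, 1.0, 9)               # strain-gauge stations (m)
strain = 1e-3 * (1.0 - x)                  # tip-load-like strain profile
slope, w = deflection_from_strain(x, strain, c=0.01)
print(w[-1])                               # predicted tip deflection (m)
```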
Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turinsky, Paul
This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena that differ in prediction fidelity. If the highest-fidelity model is judged always to provide or exceed the desired fidelity, then, if one can determine the difference in a Quantity of Interest (QoI) between the highest-fidelity model and lower-fidelity models, one can utilize the lowest-fidelity model that still delivers the QoI to the desired accuracy. Assuming lower-fidelity models require less computational resources, computational efficiency can be realized in this manner, provided the QoI value can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convoluting the GPT solution with the residual of the highest-fidelity model determined using the solution from lower-fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest-fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower-fidelity neutronics model was based upon the point kinetics equations along with utilization of a prolongation operator to determine the 3D space-time, two-group flux. The highest-fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations. The lower-fidelity thermal-hydraulic model was based upon the same equations as used for the highest-fidelity model but with coarse spatial meshing, corrected somewhat by employing effective fuel heat conduction values. The effectiveness of switching between the highest-fidelity model and the lower-fidelity model as a function of time was assessed using the neutronics problem. Based upon work completed to date, one concludes that the time switching is effective in annealing out differences between the highest- and lower-fidelity solutions. The effectiveness of using a lower-fidelity GPT solution, along with a prolongation operator, to estimate the QoI was also assessed. The utilization of a lower-fidelity GPT solution was done in an attempt to avoid the high computational burden associated with solving for the highest-fidelity GPT solution. Based upon work completed to date, one concludes that the lower-fidelity adjoint solution is not sufficiently accurate with regard to estimating the QoI; however, a formulation has been revealed that may provide a path for addressing this shortcoming.
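The central AMoR bookkeeping, estimating the QoI gap by convolving an adjoint (GPT-like) solution with the high-fidelity residual evaluated at the low-fidelity solution, is easiest to see on a linear toy problem. The sketch below is an illustration under assumed matrices and a made-up switching tolerance; for a linear system the estimate is exact, and the point of the construction is that it carries over approximately to the nonlinear, time-dependent setting.

    import numpy as np

    # Toy linear problem A u = f at two fidelities; QoI is Q = q.u.
    rng = np.random.default_rng(5)
    n = 40
    A_hi = np.diag(np.linspace(2.0, 6.0, n)) + 0.05 * rng.standard_normal((n, n))
    A_lo = np.diag(np.diag(A_hi))            # low fidelity: diagonal part only
    f = rng.standard_normal(n)
    q = rng.standard_normal(n)               # QoI weights

    u_lo = np.linalg.solve(A_lo, f)          # cheap low-fidelity solution
    lam = np.linalg.solve(A_hi.T, q)         # adjoint (GPT-like) solution
    resid = f - A_hi @ u_lo                  # high-fidelity residual at u_lo
    dQ_est = lam @ resid                     # estimated QoI gap between fidelities

    # AMoR-style rule: fall back to the expensive model only when the gap matters.
    tol = 1e-3
    u = np.linalg.solve(A_hi, f) if abs(dQ_est) > tol else u_lo
    print(dQ_est, q @ np.linalg.solve(A_hi, f) - q @ u_lo)   # estimate vs actual gap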
Development of Non-Optimum Factors for Launch Vehicle Propellant Tank Bulkhead Weight Estimation
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Wallace, Matthew L.; Cerro, Jeffrey A.
2012-01-01
Non-optimum factors are used during aerospace conceptual and preliminary design to account for the increased weights of as-built structures due to future manufacturing and design details. Use of higher-fidelity non-optimum factors in these early stages of vehicle design can result in more accurate predictions of a concept's actual weights and performance. To help achieve this objective, non-optimum factors are calculated for the aluminum-alloy gores that compose the ogive and ellipsoidal bulkheads of the Space Shuttle Super-Lightweight Tank propellant tanks. Minimum values for actual gore skin thicknesses and weld land dimensions are extracted from selected production drawings, and are used to predict reference gore weights. These actual skin thicknesses are also compared to skin thicknesses predicted using classical structural mechanics and tank proof-test pressures. Both coarse and refined weights models are developed for the gores. The coarse model is based on the proof pressure-sized skin thicknesses, and the refined model uses the actual gore skin thicknesses and design detail dimensions. To determine the gore non-optimum factors, these reference weights are then compared to flight hardware weights reported in a mass properties database. When manufacturing tolerance weight estimates are taken into account, the gore non-optimum factors computed using the coarse weights model range from 1.28 to 2.76, with an average non-optimum factor of 1.90. Application of the refined weights model yields non-optimum factors between 1.00 and 1.50, with an average non-optimum factor of 1.14. To demonstrate their use, these calculated non-optimum factors are used to predict heavier, more realistic gore weights for a proposed heavy-lift launch vehicle's propellant tank bulkheads. These results indicate that relatively simple models can be developed to better estimate the actual weights of large structures for future launch vehicles.
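Numerically, the non-optimum factor is just the ratio of as-built weight to idealized reference weight, applied multiplicatively to a new design. A trivial worked sketch with assumed weights (not Shuttle data):

    # Non-optimum factor = as-built weight / idealized reference weight.
    # All numbers below are assumed for illustration only.
    reference_weight = 42.0     # kg, gore weight predicted from sized skin thickness
    actual_weight = 55.5        # kg, flight hardware weight from a mass database
    nof = actual_weight / reference_weight

    new_concept_ideal = 48.0    # kg, idealized gore weight of a new bulkhead design
    print(f"NOF = {nof:.2f}; adjusted estimate = {nof * new_concept_ideal:.1f} kg")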
GRACE gravity field recovery using refined acceleration approach
NASA Astrophysics Data System (ADS)
Li, Zhao; van Dam, Tonie; Weigelt, Matthias
2017-04-01
Since 2002, the GRACE mission has yielded monthly gravity field solutions of sufficient quality to observe numerous changes in the Earth's mass system. Based on GRACE L1B observations, a number of official monthly gravity field models have been developed and published using different methods: e.g., the CSR RL05, JPL RL05, and GFZ RL05 models are computed with a dynamic approach, the ITSG and Tongji GRACE models are generated using the short-arc approach, the AIUB models are computed using the celestial mechanics approach, and the DMT-1 model is calculated by means of an acceleration approach. Unlike the DMT-1 model, which links the gravity field parameters directly to the bias-corrected range measurements at three adjacent epochs, in this work we present an alternative acceleration approach that connects range accelerations and velocity differences to the gradient of the gravitational potential. Because the GPS-derived velocity differences are of lower precision, we must reduce this approach to residual quantities using an a priori gravity field, which allows us to subsequently neglect the residual velocity-difference term. We find that this assumption causes a problem in the low-degree gravity field coefficients, particularly at degree 2 and also from degrees 16 to 26. To solve this problem, we present a new way of handling the residual velocity-difference term: treating it as an unknown but estimable quantity, as it depends on the unknown residual gravity field parameters and initial conditions. In other words, we regard the kinematic orbit position vectors as pseudo-observations, and the corrections to the orbits are estimated together with both the geopotential coefficients and the accelerometer scale/bias using a weighted least-squares adjustment. The new approach is therefore a refinement of the existing approach, but offers a better approximation to reality. This result is especially important in view of the upcoming GRACE Follow-On mission, which will be equipped with a laser ranging instrument offering higher precision. Our validation results show that this refined acceleration approach can produce monthly GRACE gravity solutions at the same level of precision as the other approaches.
Two-stage atlas subset selection in multi-atlas based image segmentation.
Zhao, Tingting; Ruan, Dan
2015-06-01
Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
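The two-stage logic, a cheap preliminary metric to cut the atlas pool down to an augmented subset, then full registration and a refined metric to pick the final fusion set, can be sketched in a few lines. The scoring callables below are placeholders standing in for the registration-based relevance metrics described in the abstract; the demo values are assumptions.

    import numpy as np

    def two_stage_select(atlases, target, cheap_score, full_score, n_aug=20, n_fuse=5):
        """Two-stage atlas subset selection (sketch; scoring functions assumed).

        cheap_score: relevance after low-cost registration (higher = better)
        full_score:  relevance after full-fledged registration
        """
        # Stage 1: rank all atlases with the inexpensive metric, keep an
        # augmented subset large enough that the truly relevant atlases survive
        prelim = np.array([cheap_score(a, target) for a in atlases])
        augmented = np.argsort(prelim)[::-1][:n_aug]

        # Stage 2: full registration only on the augmented subset, then refine
        refined = np.array([full_score(atlases[i], target) for i in augmented])
        return augmented[np.argsort(refined)[::-1][:n_fuse]]   # fusion set

    # Dummy demo: noisy cheap metric vs exact refined metric on 100 "atlases"
    rng = np.random.default_rng(0)
    atlases = list(range(100))
    cheap = lambda a, t: -abs(a - t) + rng.normal(0.0, 5.0)
    full = lambda a, t: -abs(a - t)
    print(two_stage_select(atlases, 42, cheap, full))   # indices near 42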
REFMAC5 for the refinement of macromolecular crystal structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murshudov, Garib N., E-mail: garib@ysbl.york.ac.uk; Skubák, Pavol; Lebedev, Andrey A.
The general principles behind the macromolecular crystal structure refinement program REFMAC5 are described. This paper describes various components of the macromolecular crystallographic refinement program REFMAC5, which is distributed as part of the CCP4 suite. REFMAC5 utilizes different likelihood functions depending on the diffraction data employed (amplitudes or intensities), the presence of twinning and the availability of SAD/SIRAS experimental diffraction data. To ensure chemical and structural integrity of the refined model, REFMAC5 offers several classes of restraints and choices of model parameterization. Reliable models at resolutions at least as low as 4 Å can be achieved thanks to low-resolution refinement tools such as secondary-structure restraints, restraints to known homologous structures, automatic global and local NCS restraints, 'jelly-body' restraints and the use of novel long-range restraints on atomic displacement parameters (ADPs) based on the Kullback–Leibler divergence. REFMAC5 additionally offers TLS parameterization and, when high-resolution data are available, fast refinement of anisotropic ADPs. Refinement in the presence of twinning is performed in a fully automated fashion. REFMAC5 is a flexible and highly optimized refinement package that is ideally suited for refinement across the entire resolution spectrum encountered in macromolecular crystallography.
A quality score for coronary artery tree extraction results
NASA Astrophysics Data System (ADS)
Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke
2018-02-01
Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions, which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of the clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) of the cases, the proposed method assigns higher quality scores to the manually refined CATs than to the automatically extracted CATs. On a 100-point scale, the average scores for the automatically extracted and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4), respectively. The proposed quality score will assist the automatic processing of CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.
Impact of scaffold rigidity on the design and evolution of an artificial Diels-Alderase
Preiswerk, Nathalie; Beck, Tobias; Schulz, Jessica D.; Milovník, Peter; Mayer, Clemens; Siegel, Justin B.; Baker, David; Hilvert, Donald
2014-01-01
By combining targeted mutagenesis, computational refinement, and directed evolution, a modestly active, computationally designed Diels-Alderase was converted into the most proficient biocatalyst for [4+2] cycloadditions known. The high stereoselectivity and minimal product inhibition of the evolved enzyme enabled preparative scale synthesis of a single product diastereomer. X-ray crystallography of the enzyme–product complex shows that the molecular changes introduced over the course of optimization, including addition of a lid structure, gradually reshaped the pocket for more effective substrate preorganization and transition state stabilization. The good overall agreement between the experimental structure and the original design model with respect to the orientations of both the bound product and the catalytic side chains contrasts with other computationally designed enzymes. Because design accuracy appears to correlate with scaffold rigidity, improved control over backbone conformation will likely be the key to future efforts to design more efficient enzymes for diverse chemical reactions. PMID:24847076
Lattice Boltzmann and Navier-Stokes Cartesian CFD Approaches for Airframe Noise Predictions
NASA Technical Reports Server (NTRS)
Barad, Michael F.; Kocheemoolayil, Joseph G.; Kiris, Cetin C.
2017-01-01
Lattice Boltzmann (LB) and compressible Navier-Stokes (NS) equations based computational fluid dynamics (CFD) approaches are compared for simulating airframe noise. Both LB and NS CFD approaches are implemented within the Launch Ascent and Vehicle Aerodynamics (LAVA) framework. Both schemes utilize the same underlying Cartesian structured mesh paradigm with provision for local adaptive grid refinement and sub-cycling in time. We choose a prototypical massively separated, wake-dominated flow ideally suited for Cartesian-grid-based approaches in this study: the partially-dressed, cavity-closed nose landing gear (PDCC-NLG) noise problem from AIAA's Benchmark problems for Airframe Noise Computations (BANC) series of workshops. The relative accuracy and computational efficiency of the two approaches are systematically compared. Detailed comments are made on the potential held by LB to significantly reduce time-to-solution for a desired level of accuracy within the context of modeling airframe noise from first principles.
Using machine learning to accelerate sampling-based inversion
NASA Astrophysics Data System (ADS)
Valentine, A. P.; Sambridge, M.
2017-12-01
In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
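A minimal sketch of the idea, using scikit-learn's GaussianProcessRegressor as the surrogate inside a Metropolis sampler. The toy forward model, noise level, and the refine-when-uncertain rule are all assumptions for illustration, not the authors' implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def forward(m):                        # stand-in for an expensive forward solver
        return np.sin(3.0 * m) + 0.5 * m   # assumed 1-D toy problem

    d_obs, sigma = 0.9, 0.1                # hypothetical datum and noise level

    # Seed the surrogate with a few expensive forward evaluations
    M = rng.uniform(-2.0, 2.0, 8).reshape(-1, 1)
    D = forward(M).ravel()
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(M, D)

    m, samples = 0.0, []
    for _ in range(2000):
        m_new = m + 0.3 * rng.standard_normal()
        mu, sd = gp.predict([[m_new]], return_std=True)
        if sd[0] > 0.05:                   # surrogate too uncertain: refine it
            M = np.vstack([M, [[m_new]]])
            D = np.append(D, forward(m_new))
            gp.fit(M, D)
            mu = gp.predict([[m_new]])
        # Metropolis step using the cheap surrogate misfit
        log_a = ((gp.predict([[m]])[0] - d_obs)**2
                 - (mu[0] - d_obs)**2) / (2.0 * sigma**2)
        if np.log(rng.uniform()) < log_a:
            m = m_new
        samples.append(m)
    print(np.mean(samples[500:]), np.std(samples[500:]))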
NASA Astrophysics Data System (ADS)
Moreno-Jiménez, Inés; Hulsart-Billstrom, Gry; Lanham, Stuart A.; Janeczek, Agnieszka A.; Kontouli, Nasia; Kanczler, Janos M.; Evans, Nicholas D.; Oreffo, Richard Oc
2016-08-01
Biomaterial development for tissue engineering applications is rapidly increasing but necessitates efficacy and safety testing prior to clinical application. Current in vitro and in vivo models hold a number of limitations, including expense, lack of correlation between animal models and human outcomes and the need to perform invasive procedures on animals; hence requiring new predictive screening methods. In the present study we tested the hypothesis that the chick embryo chorioallantoic membrane (CAM) can be used as a bioreactor to culture and study the regeneration of human living bone. We extracted bone cylinders from human femoral heads, simulated an injury using a drill-hole defect, and implanted the cylinders on the CAM or in in vitro control culture. Micro-computed tomography (μCT) was used to quantify the magnitude and location of bone volume changes followed by histological analyses to assess bone repair. CAM blood vessels were observed to infiltrate the human bone cylinder and maintain human cell viability. Histological evaluation revealed extensive extracellular matrix deposition in proximity to endochondral condensations (Sox9+) on the CAM-implanted bone cylinders, correlating with a significant increase in bone volume by μCT analysis (p < 0.01). This human-avian system offers a simple refinement model for animal research and a step towards a humanized in vivo model for tissue engineering.
NASA Technical Reports Server (NTRS)
1973-01-01
A shuttle atmosphere revitalization subsystem (ARS)/active thermal control subsystem (ATCS) performance routine was developed. This computer program is adapted from the Shuttle EC/LSS Design Computer Program. The program was upgraded in three noteworthy areas: (1) the functional ARS/ATCS schematic has been revised to accurately synthesize the shuttle baseline system definition; (2) the program logic has been improved to provide a more accurate prediction of the integrated ARS/ATCS system performance, and has been expanded to model all components and thermal loads in the ARS/ATCS system; (3) the program is designed to be used on the NASA JSC Crew Systems Division's programmable calculator system. As written, the new computer routine has an average running time of five minutes. The use of desk-top calculation equipment and the rapid response of the program provide NASA with an analytical tool for trade studies to refine the system definition, and for test support of the RSECS or integrated Shuttle ARS/ATCS test programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling feasible in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
Fast image interpolation via random forests.
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
2015-10-01
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring low computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.
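The partition-then-regress idea can be sketched compactly: a tree (standing in here for the forest) routes each patch to a leaf, and each leaf owns its own linear low-res-to-high-res mapping. The synthetic patch data below are assumptions for illustration only, not the paper's training set or features.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)

    # Hypothetical training data: rows are flattened low-res (X) and
    # corresponding high-res (Y) patches
    X = rng.standard_normal((5000, 16))
    Y = X @ rng.standard_normal((16, 16)) + 0.05 * rng.standard_normal((5000, 16))

    # Partition patch space with a shallow tree; each leaf is one subspace
    tree = DecisionTreeRegressor(max_leaf_nodes=32).fit(X, Y)
    leaves = tree.apply(X)

    # One linear LR -> HR regression model per leaf
    models = {leaf: Ridge(alpha=1e-3).fit(X[leaves == leaf], Y[leaves == leaf])
              for leaf in np.unique(leaves)}

    def upscale(patch):
        """Route a new patch to its leaf and apply that leaf's regressor."""
        leaf = tree.apply(patch.reshape(1, -1))[0]
        return models[leaf].predict(patch.reshape(1, -1))[0]

    print(upscale(rng.standard_normal(16)).shape)   # -> (16,)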
Virtual Reality Glasses and "Eye-Hands Blind Technique" for Microsurgical Training in Neurosurgery.
Choque-Velasquez, Joham; Colasanti, Roberto; Collan, Juhani; Kinnunen, Riina; Rezai Jahromi, Behnam; Hernesniemi, Juha
2018-04-01
Microsurgical skills and eye-hand coordination need continuous training to be developed and refined. However, well-equipped microsurgical laboratories are not so widespread as their setup is expensive. Herein, we present a novel microsurgical training system that requires a high-resolution personal computer screen, smartphones, and virtual reality glasses. A smartphone placed on a holder at a height of about 15-20 cm from the surgical target field is used as the webcam of the computer. A specific software is used to duplicate the video camera image. The video may be transferred from the computer to another smartphone, which may be connected to virtual reality glasses. Using the previously described training model, we progressively performed more and more complex microsurgical exercises. It did not take long to set up our system, thus saving time for the training sessions. Our proposed training model may represent an affordable and efficient system to improve eye-hand coordination and dexterity in using not only the operating microscope but also endoscopes and exoscopes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, William D; Johansen, Hans; Evans, Katherine J
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Modelling tidal current energy extraction in large area using a three-dimensional estuary model
NASA Astrophysics Data System (ADS)
Chen, Yaling; Lin, Binliang; Lin, Jie
2014-11-01
This paper presents a three-dimensional modelling study for simulating tidal current energy extraction in large areas, with a momentum sink term added to the momentum equations. Due to limits on computational capacity, the grid size of the numerical model is generally much larger than the turbine rotor diameter. Two models, i.e. a local grid refinement model and a coarse grid model, are employed, and an idealized estuary is set up. The local grid refinement model is constructed to simulate the power generation of an isolated turbine and its impacts on hydrodynamics. The model is then used to determine the deployment of the turbine farm and to quantify a combined thrust coefficient for multiple turbines located in a grid element of the coarse grid model. The model results indicate that the performance of power extraction is affected by array deployment, with more power generated by outer rows than inner rows due to the velocity-deficit influence of upstream turbines. Model results also demonstrate that the large-scale turbine farm has significant effects on the hydrodynamics. The tidal currents are attenuated within the turbine swept area, and both upstream and downstream of the array, while the currents are accelerated above and below the turbines, which contributes to speeding up the wake-mixing process behind the arrays. Water levels are raised at both low and high water when the turbine array spans the full width of the estuary, and the magnitude of the water level change is found to increase as the array expands, especially at low water.
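In coarse-grid tidal-array models of this kind, the sink term is commonly written as the turbines' thrust distributed over the host grid cell: S = -(1/2) C_T A_t |u| u / V_cell in force-per-unit-mass form. A small sketch with assumed values; the combined thrust coefficient C_T is exactly the quantity the paper's local grid refinement model is used to calibrate.

    # Momentum sink for one coarse cell hosting several turbines.
    # All numbers are assumed for illustration only.
    rho = 1025.0                   # seawater density [kg/m^3]
    C_T = 0.8                      # combined thrust coefficient for the cell
    A_t = 3 * 78.5                 # total swept area, e.g. three 10 m rotors [m^2]
    V_cell = 50.0 * 50.0 * 10.0    # coarse grid cell volume [m^3]
    u = 2.0                        # local streamwise velocity [m/s]

    thrust = 0.5 * rho * C_T * A_t * abs(u) * u     # total thrust force [N]
    S_u = -thrust / (rho * V_cell)                  # sink term [m/s^2]
    print(f"momentum sink = {S_u:.4e} m/s^2")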
Refining Pragmatically-Appropriate Oral Communication via Computer-Simulated Conversations
ERIC Educational Resources Information Center
Sydorenko, Tetyana; Daurio, Phoebe; Thorne, Steven L.
2018-01-01
To address the problem of limited opportunities for practicing second language speaking in interaction, especially delicate interactions requiring pragmatic competence, we describe computer simulations designed for the oral practice of extended pragmatic routines and report on the affordances of such simulations for learning pragmatically…
Computational Insights into the O2-evolving complex of photosystem II
Sproviero, Eduardo M.; McEvoy, James P.; Gascón, José A.; Brudvig, Gary W.; Batista, Victor S.
2009-01-01
Mechanistic investigations of the water-splitting reaction of the oxygen-evolving complex (OEC) of photosystem II (PSII) are fundamentally informed by structural studies. Many physical techniques have provided important insights into the OEC structure and function, including X-ray diffraction (XRD) and extended X-ray absorption fine structure (EXAFS) spectroscopy as well as mass spectrometry (MS), electron paramagnetic resonance (EPR) spectroscopy and Fourier transform infrared spectroscopy applied in conjunction with mutagenesis studies. However, experimental studies have yet to yield consensus as to the exact configuration of the catalytic metal cluster and its ligation scheme. Computational modeling studies, including density functional (DFT) theory combined with quantum mechanics/molecular mechanics (QM/MM) hybrid methods for explicitly including the influence of the surrounding protein, have proposed chemically satisfactory models of the fully ligated OEC within PSII that are maximally consistent with experimental results. The inorganic core of these models is similar to the crystallographic model upon which they were based but comprises important modifications due to structural refinement, hydration and proteinaceous ligation which improve agreement with a wide range of experimental data. The computational models are useful for rationalizing spectroscopic and crystallographic results and for building a complete structure-based mechanism of water-splitting in PSII as described by the intermediate oxidation states of the OEC. This review summarizes these recent advances in QM/MM modeling of PSII within the context of recent experimental studies. PMID:18483777
Empirical comparison of heuristic load distribution in point-to-point multicomputer networks
NASA Technical Reports Server (NTRS)
Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.
1990-01-01
The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.
Characterization of Arcjet Flows Using Laser-Induced Fluorescence
NASA Technical Reports Server (NTRS)
Bamford, Douglas J.; O'Keefe, Anthony; Babikian, Dikran S.; Stewart, David A.; Strawa, Anthony W.
1995-01-01
A sensor based on laser-induced fluorescence has been installed at the 20-MW NASA Ames Aerodynamic Heating Facility. The sensor has provided new, quantitative, real-time information about properties of the arcjet flow in the highly dissociated, partially ionized, nonequilibrium regime. Number densities of atomic oxygen, flow velocities, heavy particle translational temperatures, and collisional quenching rates have been measured. These results have been used to test and refine computational models of the arcjet flow. The calculated number densities, translational temperatures, and flow velocities are in moderately good agreement with experiment.
Advances in the computation of transonic separated flows over finite wings
NASA Technical Reports Server (NTRS)
Kaynak, Unver; Flores, Jolen
1989-01-01
Problems encountered in numerical simulations of transonic wind-tunnel experiments with low-aspect-ratio wings are surveyed and illustrated. The focus is on the zonal Euler/Navier-Stokes program developed by Holst et al. (1985) and its application to shock-induced separation. The physical basis and numerical implementation of the method are reviewed, and results are presented from studies of the effects of artificial dissipation, boundary conditions, grid refinement, the turbulence model, and geometry representation on the simulation accuracy. Extensive graphs and diagrams and typical flow visualizations are provided.
Scheduling periodic jobs that allow imprecise results
NASA Technical Reports Server (NTRS)
Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay
1990-01-01
The problem of scheduling periodic jobs in hard real-time systems that support imprecise computations is discussed. Two workload models of imprecise computations are presented. These models differ from traditional models in that a task may be terminated any time after it has produced an acceptable result. Each task is logically decomposed into a mandatory part followed by an optional part. In a feasible schedule, the mandatory part of every task is completed before the deadline of the task. The optional part refines the result produced by the mandatory part to reduce the error in the result. Applications are classified as type N and type C, according to the undesirable effects of errors, and the two workload models characterize the two types of applications. The optional parts of the tasks in a type-N job need never be completed. The resulting quality of each type-N job is measured in terms of the average error in the results over several consecutive periods. A class of preemptive, priority-driven algorithms that leads to feasible schedules with small average error is described and evaluated.
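A toy illustration of the type-N error metric: each period, the mandatory part must complete within the budget, and whatever budget remains runs the optional part, shrinking that period's error. The linear error model and all numbers below are assumptions for illustration, not the paper's algorithms.

    # Sketch of the imprecise-computation task model.
    def run_periods(budget, mandatory, optional, periods=6):
        errors = []
        for _ in range(periods):
            spare = max(0.0, budget - mandatory)      # time left after mandatory part
            done = min(optional, spare)               # optional work actually completed
            errors.append(1.0 - done / optional)      # error shrinks as optional runs
        return sum(errors) / len(errors)              # average error (type-N metric)

    # 10 time units per period: mandatory always finishes, optional runs halfway
    print(run_periods(budget=10.0, mandatory=6.0, optional=8.0))   # -> 0.5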
Studies of Trace Gas Chemical Cycles Using Inverse Methods and Global Chemical Transport Models
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
2003-01-01
We report progress in the first year, and summarize proposed work for the second year of the three-year dynamical-chemical modeling project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for long lived gases important in ozone depletion and climate forcing, (b) utilization of inverse methods to determine these source/sink strengths using either MATCH (Model for Atmospheric Transport and Chemistry) which is based on analyzed observed wind fields or back-trajectories computed from these wind fields, (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases, and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important goals include determination of regional source strengths of methane, nitrous oxide, methyl bromide, and other climatically and chemically important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal protocol and its follow-on agreements and hydrohalocarbons now used as alternatives to the restricted halocarbons.
Effects of Combined Stellar Feedback on Star Formation in Stellar Clusters
NASA Astrophysics Data System (ADS)
Wall, Joshua Edward; McMillan, Stephen; Pellegrino, Andrew; Mac Low, Mordecai; Klessen, Ralf; Portegies Zwart, Simon
2018-01-01
We present results of hybrid MHD+N-body simulations of star cluster formation and evolution, including self-consistent feedback from the stars in the form of radiation, winds, and supernovae from all stars more massive than 7 solar masses. The MHD is modeled with the adaptive mesh refinement code FLASH, while the N-body computations are done with a direct algorithm. Radiation is modeled using ray tracing along long characteristics in directions distributed using the HEALPIX algorithm, and causes ionization and momentum deposition, while winds and supernovae conserve momentum and energy during injection. Stellar evolution is followed using power-law fits to evolution models in SeBa. We use a gravity bridge within the AMUSE framework to couple the N-body dynamics of the stars to the gas dynamics in FLASH. Feedback from the massive stars alters the structure of young clusters as gas ejection occurs. We diagnose this behavior by distinguishing between fractal distribution and central clustering using a Q parameter computed from the minimum spanning tree of each model cluster. Global effects of feedback in our simulations will also be discussed.
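For reference, the Q parameter contrasts the normalized mean minimum-spanning-tree edge length with the normalized mean pairwise separation; values below roughly 0.8 indicate substructure, values above it central concentration. A sketch in Python for 2-D projected positions; the normalization below follows one common convention and is an assumption, not necessarily the authors' exact choice.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    def q_parameter(pos):
        """Q = mbar / sbar from the cluster's minimum spanning tree."""
        n = len(pos)
        mst = minimum_spanning_tree(squareform(pdist(pos)))
        r_max = np.max(np.linalg.norm(pos - pos.mean(axis=0), axis=1))
        area = np.pi * r_max**2                  # bounding-circle area
        mbar = mst.sum() / np.sqrt(n * area)     # normalized mean MST edge length
        sbar = pdist(pos).mean() / r_max         # normalized mean separation
        return mbar / sbar

    pts = np.random.default_rng(2).uniform(-1.0, 1.0, (300, 2))   # toy "cluster"
    print(q_parameter(pts))   # near-uniform points land near the ~0.8 boundary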
Field-sensitivity To Rheological Parameters
NASA Astrophysics Data System (ADS)
Freund, Jonathan; Ewoldt, Randy
2017-11-01
We ask this question: where in a flow is a quantity of interest Q quantitatively sensitive to the model parameters θ describing the rheology of the fluid? This field sensitivity is computed via the numerical solution of the adjoint flow equations, as developed to expose the target sensitivity δQ/δθ(x) via the constraint of satisfying the flow equations. Our primary example is a sphere settling in Carbopol, for which we have experimental data. For this Carreau-model configuration, we simultaneously calculate how much a local change in the fluid intrinsic time-scale λ, limit-viscosities ηo and η∞, and exponent n would affect the drag D. Such field sensitivities can show where different fluid physics in the model (time scales, elastic versus viscous components, etc.) are important for the target observable and generally guide model refinement based on predictive goals. In this case, the computational cost of solving the local sensitivity problem is negligible relative to the flow. The Carreau-fluid/sphere example is illustrative; the utility of field sensitivity is in the design and analysis of less intuitive flows, for which we provide some additional examples.
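The adjoint construction that makes this cheap is easiest to see on a discrete linear model: one extra (adjoint) solve yields the sensitivity of Q to any number of parameters. A minimal sketch, with a random toy system standing in for the discretized flow equations (all matrices assumed):

    import numpy as np

    # State equation A(theta) u = f; quantity of interest Q = q.u.
    # One adjoint solve A^T lam = q gives dQ/dtheta = -lam^T (dA/dtheta) u.
    n, rng = 5, np.random.default_rng(3)
    A0 = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # assumed base operator
    dA = rng.standard_normal((n, n))                     # assumed dA/dtheta
    f, q, theta = rng.standard_normal(n), rng.standard_normal(n), 0.7

    A = A0 + theta * dA
    u = np.linalg.solve(A, f)            # forward solve
    lam = np.linalg.solve(A.T, q)        # single adjoint solve
    dQ = -lam @ (dA @ u)                 # adjoint-based sensitivity

    eps = 1e-6                           # finite-difference check
    u2 = np.linalg.solve(A0 + (theta + eps) * dA, f)
    print(dQ, (q @ u2 - q @ u) / eps)    # the two numbers should agree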
Realistic inversion of diffraction data for an amorphous solid: The case of amorphous silicon
NASA Astrophysics Data System (ADS)
Pandey, Anup; Biswas, Parthapratim; Bhattarai, Bishal; Drabold, D. A.
2016-12-01
We apply a method called "force-enhanced atomic refinement" (FEAR) to create a computer model of amorphous silicon (a-Si) based upon the highly precise x-ray diffraction experiments of Laaziri et al. [Phys. Rev. Lett. 82, 3460 (1999), 10.1103/PhysRevLett.82.3460]. The logic underlying our calculation is to estimate the structure of a real a-Si sample using experimental data and chemical information included in an unbiased way, starting from random coordinates. The model is in close agreement with experiment and also sits at a suitable energy minimum according to density-functional calculations. In agreement with experiments, we find a small concentration of coordination defects that we discuss, including their electronic consequences. The gap states in the FEAR model are delocalized compared to a continuous random network model. The method is more efficient and accurate, in the sense of fitting the diffraction data, than conventional melt-quench methods. We compute the vibrational density of states and the specific heat, and we find that both compare favorably to experiments.
Power flow prediction in vibrating systems via model reduction
NASA Astrophysics Data System (ADS)
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built based on rational Krylov model reduction which preserve power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via message passing interface are proposed. The quality of ROMs is iteratively refined according to the error estimate based on residual norms. Band capacity is proposed to provide a priori estimate of the sizes of good quality ROMs. Frequency averaging is recast as ensemble averaging and Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
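The construction described here, projecting the full stiffness and mass matrices onto a basis of forced responses at interpolation frequencies, can be sketched in a few lines of numpy. The toy matrices and frequencies below are assumptions; by the interpolation property, the ROM reproduces the full forced response exactly at the chosen frequencies and approximates it (and hence power quantities computed from it) nearby.

    import numpy as np
    from scipy.linalg import orth

    rng = np.random.default_rng(4)
    n = 200
    K = np.diag(np.linspace(30.0, 500.0, n))   # toy stiffness matrix
    M = np.eye(n)                              # toy mass matrix
    f = rng.standard_normal((n, 1))            # forcing pattern

    # Basis of forced responses at interpolation frequencies (rad/s, assumed)
    freqs = [1.0, 3.0, 5.0]
    V = orth(np.hstack([np.linalg.solve(K - w**2 * M, f) for w in freqs]))

    Kr, Mr, fr = V.T @ K @ V, V.T @ M @ V, V.T @ f   # projected ROM matrices

    w = 2.0                                    # check at a non-interpolated frequency
    u_full = np.linalg.solve(K - w**2 * M, f)
    u_rom = V @ np.linalg.solve(Kr - w**2 * Mr, fr)
    print(np.linalg.norm(u_full - u_rom) / np.linalg.norm(u_full))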
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrall, Brian D.; Minard, Kevin R.; Teeguarden, Justin G.
A Cooperative Research and Development Agreement (CRADA) was sponsored by Battelle Memorial Institute (Battelle, Columbus) to initiate a collaborative research program across multiple Department of Energy (DOE) National Laboratories aimed at developing a suite of new capabilities for predictive toxicology. Predicting the potential toxicity of emerging classes of engineered nanomaterials was chosen as one of two focusing problems for this program. PNNL's focus toward this broader goal was to refine and apply experimental and computational tools needed to provide quantitative understanding of nanoparticle dosimetry for in vitro cell culture systems, which is necessary for comparative risk estimates for different nanomaterials or biological systems. Research conducted using lung epithelial and macrophage cell models successfully adapted magnetic particle detection and fluorescent microscopy technologies to quantify uptake of various forms of engineered nanoparticles, and provided experimental constraints and test datasets for benchmark comparison against results obtained using an in vitro computational dosimetry model, termed the ISDD model. The experimental and computational approaches developed were used to demonstrate how cell dosimetry is applied to aid in interpretation of genomic studies of nanoparticle-mediated biological responses in model cell culture systems. The combined experimental and theoretical approach provides a highly quantitative framework for evaluating relationships between biocompatibility of nanoparticles and their physical form in a controlled manner.
A parabolic velocity-decomposition method for wind turbines
NASA Astrophysics Data System (ADS)
Mittal, Anshul; Briley, W. Roger; Sreenivas, Kidambi; Taylor, Lafayette K.
2017-02-01
An economical parabolized Navier-Stokes approximation for steady incompressible flow is combined with a compatible wind turbine model to simulate wind turbine flows, both upstream of the turbine and in downstream wake regions. The inviscid parabolizing approximation is based on a Helmholtz decomposition of the secondary velocity vector and physical order-of-magnitude estimates, rather than an axial pressure gradient approximation. The wind turbine is modeled by distributed source-term forces incorporating time-averaged aerodynamic forces generated by a blade-element momentum turbine model. A solution algorithm is given whose dependent variables are streamwise velocity, streamwise vorticity, and pressure, with secondary velocity determined by two-dimensional scalar and vector potentials. In addition to laminar and turbulent boundary-layer test cases, solutions for a streamwise vortex-convection test problem are assessed by mesh refinement and comparison with Navier-Stokes solutions using the same grid. Computed results for a single turbine and a three-turbine array are presented using the NREL offshore 5-MW baseline wind turbine. These are also compared with an unsteady Reynolds-averaged Navier-Stokes solution computed with full rotor resolution. On balance, the agreement in turbine wake predictions for these test cases is very encouraging given the substantial differences in physical modeling fidelity and computer resources required.
An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids
NASA Astrophysics Data System (ADS)
Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun
2017-07-01
Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy with strong global exploration and bypassing abilities was adopted. This algorithm can be regarded as a search engine to find multiple globally optimal regions in which potential POs were located. This was followed by applying a differential correction to locally refine global search solutions and generate the accurate POs in the mascon model in which an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around an elongated shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Specifically, the proposed algorithm was generic and could be conveniently extended to explore periodic motions in other gravitational systems.
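The mascon remodeling in the first step reduces every gravity evaluation to a sum over point masses, which is what makes the global search affordable. A minimal sketch of the mascon acceleration; the dumbbell toy values below are assumptions, not the 433 Eros model.

    import numpy as np

    G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]

    def mascon_accel(r, positions, masses):
        """Gravitational acceleration at r from a mascon (point-mass) model.
        positions: (N,3) particle locations; masses: (N,) particle masses."""
        d = positions - r                     # vectors from field point to particles
        dist = np.linalg.norm(d, axis=1)
        return G * np.sum(masses[:, None] * d / dist[:, None]**3, axis=0)

    # Toy model: a dumbbell standing in for an elongated asteroid
    pos = np.array([[-8e3, 0.0, 0.0], [8e3, 0.0, 0.0]])
    m = np.array([3e15, 3e15])
    print(mascon_accel(np.array([0.0, 2e4, 0.0]), pos, m))   # [m/s^2]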
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates, to a level on par with the best solutions obtained from the population-based methods, while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large.
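The hybrid global-plus-local pattern is straightforward to sketch with scipy: a fast, loosely converged global pass followed by a local refinement of the best point. The two-parameter toy expression model, its bounds, and the solver settings below are assumptions for illustration, not the benchmark models from the study.

    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def sse(theta, t, y):
        """Sum-of-squares misfit for a toy two-parameter expression model."""
        k_syn, k_deg = theta
        pred = (k_syn / k_deg) * (1.0 - np.exp(-k_deg * t))   # assumed mRNA model
        return np.sum((pred - y)**2)

    t = np.linspace(0.1, 10.0, 25)
    y = (2.0 / 0.5) * (1.0 - np.exp(-0.5 * t))                # synthetic "data"

    # Fast global pass with loose settings, then local refinement of the best point
    coarse = differential_evolution(sse, bounds=[(0.01, 10.0)] * 2, args=(t, y),
                                    maxiter=30, popsize=10, tol=1e-2, seed=0)
    refined = minimize(sse, coarse.x, args=(t, y), method="Nelder-Mead")
    print(coarse.x, "->", refined.x)                          # ~ (2.0, 0.5)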
NASA Astrophysics Data System (ADS)
Jošt, D.; Škerlavaj, A.; Morgut, M.; Mežnar, P.; Nobile, E.
2015-01-01
The paper presents numerical simulations of flow in a model of a high head Francis turbine and a comparison of the results to measurements. Numerical simulations were performed with two CFD (Computational Fluid Dynamics) codes, Ansys CFX and OpenFOAM. Steady-state simulations were performed with the k-epsilon and SST models, while for transient simulations the SAS SST ZLES model was used. With proper grid refinement in the distributor and runner, and by taking into account losses in the labyrinth seals, very accurate predictions of torque on the shaft, head and efficiency were obtained. The calculated axial and circumferential velocity components on two planes in the draft tube matched the experimental results well.
Holland, Chris [UC San Diego, San Diego, California, United States]
2017-12-09
The upcoming ITER experiment (www.iter.org) represents the next major milestone in realizing the promise of using nuclear fusion as a commercial energy source, by moving into the 'burning plasma' regime where the dominant heat source is the internal fusion reactions. As part of its support for the ITER mission, the US fusion community is actively developing validated predictive models of the behavior of magnetically confined plasmas. In this talk, I will describe how the plasma community is using the latest high performance computing facilities to develop and refine our models of the nonlinear, multiscale plasma dynamics, and how recent advances in experimental diagnostics are allowing us to directly test and validate these models at an unprecedented level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Michael K.; Davidson, Megan
As part of Sandia's nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification, and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer-generated models were used to assess the timing between the impact sensor's response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia's sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model's ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia's HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia are hoping to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.
Cognitive diagnosis modelling incorporating item response times.
Zhan, Peida; Jiao, Hong; Liao, Dandan
2018-05-01
To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters.
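For context, the DINA kernel the extension builds on assigns each examinee-item pair an ideal response η, equal to 1 only if the examinee holds every attribute the item's Q-matrix row requires, and then lets slip and guess parameters perturb it. A minimal sketch of the response probability (all values assumed; the paper's joint model additionally ties a response-time component to each item):

    import numpy as np

    def dina_prob(alpha, q, slip, guess):
        """P(correct) under DINA: eta = 1 iff the examinee (alpha) has every
        attribute the item requires (q); then 1 - slip, otherwise guess."""
        eta = int(np.all(alpha[q == 1] == 1))
        return (1 - slip) ** eta * guess ** (1 - eta)

    q = np.array([1, 0, 1])   # item requires attributes 1 and 3 (assumed)
    print(dina_prob(np.array([1, 0, 1]), q, slip=0.1, guess=0.2))   # -> 0.9
    print(dina_prob(np.array([1, 1, 0]), q, slip=0.1, guess=0.2))   # -> 0.2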
The set of commercially available chemical substances in commerce that may have significant global warming potential (GWP) is not well defined. Although there are currently over 200 chemicals with high GWP reported by the Intergovernmental Panel on Climate Change, World Meteorological Organization, or Environmental Protection Agency, there may be hundreds of additional chemicals that may also have significant GWP. Evaluation of various approaches to estimate radiative efficiency (RE) and atmospheric lifetime will help to refine GWP estimates for compounds where no measured IR spectrum is available. This study compares values of RE calculated using computational chemistry techniques for 235 chemical compounds against the best available values. It is important to assess the reliability of the underlying computational methods for computing RE to understand the sources of deviations from the best available values. Computed vibrational frequency data is used to estimate RE values using several Pinnock-type models. The values derived using these models are found to be in reasonable agreement with reported RE values (though significant improvement is obtained through scaling). The effect of varying the computational method and basis set used to calculate the frequency data is also discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed values of RE in this study. Deviations of
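Pinnock-type estimates of radiative efficiency amount to summing, over each computed vibrational band, the band intensity times an instantaneous-forcing-per-unit-absorption function evaluated at the band's wavenumber. The sketch below uses an assumed stand-in curve and toy band data purely to show the structure of the calculation; the real computation uses the tabulated Pinnock et al. function together with scaled ab initio frequencies and intensities.

    import numpy as np

    def radiative_efficiency(freqs_cm, intensities, forcing_curve):
        """Pinnock-type RE estimate: sum of band intensity times the forcing
        per unit absorption at each band's wavenumber (units depend on the
        curve's normalization; everything here is illustrative)."""
        return sum(I * forcing_curve(nu) for nu, I in zip(freqs_cm, intensities))

    # Toy inputs: two computed IR bands (wavenumber in cm^-1, intensity in km/mol)
    bands_nu = np.array([1250.0, 1100.0])
    bands_I = np.array([400.0, 250.0])
    toy_curve = lambda nu: 2.0e-4 * np.exp(-((nu - 1000.0) / 400.0)**2)  # assumed
    print(radiative_efficiency(bands_nu, bands_I, toy_curve))   # toy units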
NASA Astrophysics Data System (ADS)
van Herck, Walter; Wyder, Thomas
2010-04-01
The enumeration of BPS bound states in string theory needs refinement. Studying partition functions of particles made from D-branes wrapped on algebraic Calabi-Yau 3-folds, and classifying states using split attractor flow trees, we extend the method for computing a refined BPS index, [1]. For certain D-particles, a finite number of microstates, namely polar states, exclusively realized as bound states, determine an entire partition function (elliptic genus). This underlines their crucial importance: one might call them the ‘chromosomes’ of a D-particle or a black hole. As polar states also can be affected by our refinement, previous predictions on elliptic genera are modified. This can be metaphorically interpreted as ‘crossing-over in the meiosis of a D-particle’. Our results improve on [2], provide non-trivial evidence for a strong split attractor flow tree conjecture, and thus suggest that we indeed exhaust the BPS spectrum. In the D-brane description of a bound state, the necessity for refinement results from the fact that tachyonic strings split up constituent states into ‘generic’ and ‘special’ states. These are enumerated separately by topological invariants, which turn out to be partitions of Donaldson-Thomas invariants. As modular predictions provide a check on many of our results, we have compelling evidence that our computations are correct.