NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.
2015-09-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
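As a rough illustration of how such a surrogate can work, the sketch below builds a linear basis from "training" waveforms, interpolates the basis coefficients across the mass ratio, and then evaluates new waveforms with a cheap dot product. Everything here is invented for illustration: the toy waveform, the SVD basis, and the linear coefficient interpolation stand in for the paper's NR training data, greedy reduced-basis algorithm, and empirical interpolation.

```python
import numpy as np

def toy_waveform(q, t):
    # Stand-in for an expensive NR waveform: amplitude and frequency
    # depend smoothly on the mass ratio q (purely illustrative).
    return (1.0 / q) * np.sin((1.0 + 0.05 * q) * t) * np.exp(-0.01 * t**2)

t = np.linspace(0.0, 20.0, 400)
q_train = np.linspace(1.0, 10.0, 40)
train = np.array([toy_waveform(q, t) for q in q_train])

# Step 1: a linear basis spanning the training set (SVD here; the paper
# instead uses a greedy reduced-basis construction).
U, s, Vt = np.linalg.svd(train, full_matrices=False)
rank = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 1.0 - 1e-10)) + 1
basis = Vt[:rank]

# Step 2: project the training data onto the basis and interpolate each
# coefficient across the mass-ratio axis.
coeffs = train @ basis.T    # shape (n_train, rank)

def surrogate(q):
    c = np.array([np.interp(q, q_train, coeffs[:, k]) for k in range(rank)])
    return c @ basis        # cheap: small dot product, no PDE solve

q_test = 4.3                # not among the training mass ratios
truth = toy_waveform(q_test, t)
err = np.linalg.norm(surrogate(q_test) - truth) / np.linalg.norm(truth)
```

The expensive solves all happen offline on the training set; evaluating `surrogate` is milliseconds even for much larger bases.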
NASA Astrophysics Data System (ADS)
Hedrick, A. R.; Marks, D. G.; Winstral, A. H.; Marshall, H. P.
2014-12-01
The ability to forecast snow water equivalent, or SWE, in mountain catchments would benefit many different communities, ranging from avalanche hazard mitigation to water resource management. Historical model runs of Isnobal, the physically based energy balance snow model, have been produced over the 2,150 km² Boise River Basin for water years 2012-2014 at 100-meter resolution. Spatially distributed forcing parameters such as precipitation, wind, and relative humidity are generated from automated weather stations located throughout the watershed, and are supplied to Isnobal at hourly timesteps. Similarly, the Weather Research & Forecasting (WRF) Model provides hourly predictions of the same forcing parameters from an atmospheric physics perspective. This work aims to quantitatively compare WRF model output to the spatial meteorological fields developed to force Isnobal, with the hope of eventually using WRF predictions to create accurate hourly forecasts of SWE over a large mountainous basin.
Accurate numerical solutions of conservative nonlinear oscillators
NASA Astrophysics Data System (ADS)
Khan, Najeeb Alam; Khan, Nasir Uddin; Khan, Nadeem Alam
2014-12-01
The objective of this paper is to present an investigation analyzing the vibration of a conservative nonlinear oscillator of the form u″ + λu + u^(2n−1) + (1 + ε²u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation into sets of algebraic equations, which are then solved numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m, and comparisons with results found in the literature show that it gives accurate results.
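The paper's algebraic-equation method is not reproduced here, but a direct numerical integration gives a useful baseline for such conservative oscillators. The sketch below integrates the cubic Duffing member of the family above (n = 2, with the irrational term omitted; parameters are illustrative) with classical RK4 and checks conservation of the oscillator's energy.

```python
import numpy as np

def rk4_orbit(u0, v0, dt, steps):
    """Integrate the cubic Duffing oscillator u'' + u + u^3 = 0 with RK4."""
    def accel(u):
        return -(u + u**3)   # conservative restoring force
    u, v = u0, v0
    traj = [(u, v)]
    for _ in range(steps):
        k1u, k1v = v, accel(u)
        k2u, k2v = v + 0.5 * dt * k1v, accel(u + 0.5 * dt * k1u)
        k3u, k3v = v + 0.5 * dt * k2v, accel(u + 0.5 * dt * k2u)
        k4u, k4v = v + dt * k3v, accel(u + dt * k3u)
        u += dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        traj.append((u, v))
    return np.array(traj)

def energy(u, v):
    # Conserved energy: kinetic + potential of u'' + u + u^3 = 0.
    return 0.5 * v**2 + 0.5 * u**2 + 0.25 * u**4

traj = rk4_orbit(1.0, 0.0, 0.01, 2000)
E = energy(traj[:, 0], traj[:, 1])
drift = np.max(np.abs(E - E[0]))   # energy drift measures accuracy
```

For a conservative oscillator, the energy drift is a convenient accuracy check when comparing against semi-analytic methods like the one in the paper.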
Predict amine solution properties accurately
Cheng, S.; Meisen, A.; Chakma, A.
1996-02-01
Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, graphical data are not convenient for computer-based calculations. The equations developed here allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1975-01-01
The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
Accurate Prediction of Docked Protein Structure Similarity.
Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit
2015-09-01
One of the major challenges for protein-protein docking methods is to accurately discriminate native-like structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. Van der Waals, electrostatic, desolvation forces, etc.) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method predicted the RMSDs of unbound docked complexes within a 0.4 Å error margin. PMID:26335807
Predicting accurate probabilities with a ranking loss
Menon, Aditya Krishna; Jiang, Xiaoqian J; Vembu, Shankar; Elkan, Charles; Ohno-Machado, Lucila
2013-01-01
In many real-world applications of machine learning classifiers, it is essential to predict the probability of an example belonging to a particular class. This paper proposes a simple technique for predicting probabilities based on optimizing a ranking loss, followed by isotonic regression. This semi-parametric technique offers both good ranking and regression performance, and models a richer set of probability distributions than statistical workhorses such as logistic regression. We provide experimental results that show the effectiveness of this technique on real-world applications of probability prediction. PMID:25285328
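The second stage of the paper's pipeline, isotonic regression, can be computed with the Pool Adjacent Violators (PAV) algorithm. The sketch below is a minimal PAV implementation applied to binary labels sorted by a ranking score (the labels are invented; the paper's ranking-loss training stage is not reproduced).

```python
def pav(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y.
    Applied to 0/1 labels sorted by increasing classifier score, the
    fitted values are calibrated probability estimates (isotonic
    regression)."""
    merged = []   # list of [block mean, block count]
    for v in y:
        merged.append([float(v), 1])
        # Merge backwards while monotonicity is violated.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, n2 = merged.pop()
            m1, n1 = merged[-1]
            merged[-1] = [(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2]
    out = []
    for m, n in merged:
        out.extend([m] * n)
    return out

# Labels of examples sorted by increasing ranking score (illustrative):
labels = [0, 0, 1, 0, 1, 0, 1, 1]
probs = pav(labels)   # non-decreasing calibrated probabilities
```

Because PAV only enforces monotonicity, it preserves the ranking produced by the first stage while mapping scores onto the probability scale.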
You Can Accurately Predict Land Acquisition Costs.
ERIC Educational Resources Information Center
Garrigan, Richard
1967-01-01
Land acquisition costs were tested for predictability based upon the 1962 assessed valuations of privately held land acquired for campus expansion by the University of Wisconsin from 1963-1965. By correlating the land acquisition costs of 108 properties acquired during the 3 year period with--(1) the assessed value of the land, (2) the assessed…
Accurate complex scaling of three dimensional numerical potentials
Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan; Deutsch, Thierry
2013-05-28
The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
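A minimal one-dimensional finite-difference sketch of the similarity-transformation idea (the model potential, grid, and scaling angle below are invented for illustration; the paper works in a Daubechies wavelet basis): under x → x e^{iθ}, the kinetic term acquires a factor e^{−2iθ} and the potential is evaluated on the rotated contour. Bound-state eigenvalues stay essentially real and θ-independent, while continuum states rotate into the lower half-plane, exposing any resonances.

```python
import numpy as np

theta = 0.2           # complex-scaling angle (illustrative)
n, L = 400, 10.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]

def V(z):
    # Model potential with one bound state (illustrative, not from the paper).
    return -np.exp(-z**2)

# Finite-difference -0.5 d^2/dx^2, scaled by exp(-2i*theta).
main = np.full(n, 2.0) / h**2
off = np.full(n - 1, -1.0) / h**2
T = 0.5 * np.exp(-2j * theta) * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))

# Similarity-transformed Hamiltonian: potential on the rotated contour.
H = T + np.diag(V(x * np.exp(1j * theta)))

eigvals = np.linalg.eigvals(H)
E0 = eigvals[np.argmin(eigvals.real)]   # bound state: nearly real eigenvalue
```

The continuum eigenvalues line up approximately along the ray e^{−2iθ} from the origin; a resonance of the potential (if present) would appear as an isolated eigenvalue in the lower half-plane that does not move with θ.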
A new generalized correlation for accurate vapor pressure prediction
NASA Astrophysics Data System (ADS)
An, Hui; Yang, Wenming
2012-08-01
An accurate knowledge of the vapor pressure of organic liquids is very important for oil and gas processing operations. In combustion modeling, the accuracy of numerical predictions is also highly dependent on fuel properties such as vapor pressure. In this Letter, a new generalized correlation is proposed based on the Lee-Kesler method, in which a fuel-dependent parameter 'A' is introduced. The proposed method only requires the input parameters of critical temperature, normal boiling temperature and the acentric factor of the fluid. With this method, vapor pressures have been calculated and compared with data reported in a data compilation for 42 organic liquids over 1366 data points, and the overall average absolute percentage deviation is only 1.95%.
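The Letter's fuel-dependent parameter 'A' is not given here, but the baseline it modifies, the classic Lee-Kesler corresponding-states correlation, can be sketched as follows (standard published coefficients; the example fluid and its constants are for illustration only).

```python
import numpy as np

def lee_kesler_psat(T, Tc, Pc, omega):
    """Saturated vapor pressure from the original Lee-Kesler
    corresponding-states correlation:
        ln(P/Pc) = f0(Tr) + omega * f1(Tr),  Tr = T/Tc."""
    Tr = T / Tc
    f0 = 5.92714 - 6.09648 / Tr - 1.28862 * np.log(Tr) + 0.169347 * Tr**6
    f1 = 15.2518 - 15.6875 / Tr - 13.4721 * np.log(Tr) + 0.43577 * Tr**6
    return Pc * np.exp(f0 + omega * f1)

# Example: water (Tc = 647.1 K, Pc = 22.064 MPa, omega = 0.345). The
# correlation is least accurate for polar fluids like water, so expect
# roughly 10% deviation at the normal boiling point.
p_boil = lee_kesler_psat(373.15, 647.1, 22.064e6, 0.345)
```

By construction both f0 and f1 vanish at Tr = 1, so the correlation reproduces the critical pressure exactly at the critical temperature; a fuel-dependent correction term like the Letter's 'A' adjusts the behavior away from that anchor point.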
Fast and accurate predictions of covalent bonds in chemical space.
Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
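The paper's exact weighting scheme is not reproduced here, but the idea of combining two unbiased predictors by weighted averaging to reduce error variance can be sketched with standard inverse-variance weighting (all numbers below are synthetic stand-ins for the parameterized model and the numerical simulation).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
true_runup = 2.0   # metres, synthetic truth

# Two unbiased predictors with different error variances, standing in
# for the parameterized model and the numerical simulation.
param_pred = true_runup + rng.normal(0.0, 0.3, n)   # error variance 0.09
numer_pred = true_runup + rng.normal(0.0, 0.5, n)   # error variance 0.25

def assimilate(a, var_a, b, var_b):
    # Inverse-variance weighting: the minimum-variance unbiased linear
    # combination of two independent, unbiased predictions.
    w = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    return w * a + (1.0 - w) * b

combined = assimilate(param_pred, 0.09, numer_pred, 0.25)
rmse = lambda p: np.sqrt(np.mean((p - true_runup) ** 2))
```

The combined error variance, 1/(1/0.09 + 1/0.25) ≈ 0.066, is below that of either predictor alone, which is the effect the paper reports for its assimilated runup prediction.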
Numerical ability predicts mortgage default.
Gerardi, Kristopher; Goette, Lorenz; Meier, Stephan
2013-07-01
Unprecedented levels of US subprime mortgage defaults precipitated a severe global financial crisis in late 2008, plunging much of the industrialized world into a deep recession. However, the fundamental reasons for why US mortgages defaulted at such spectacular rates remain largely unknown. This paper presents empirical evidence showing that the ability to perform basic mathematical calculations is negatively associated with the propensity to default on one's mortgage. We measure several aspects of financial literacy and cognitive ability in a survey of subprime mortgage borrowers who took out loans in 2006 and 2007, and match them to objective, detailed administrative data on mortgage characteristics and payment histories. The relationship between numerical ability and mortgage default is robust to controlling for a broad set of sociodemographic variables, and is not driven by other aspects of cognitive ability. We find no support for the hypothesis that numerical ability impacts mortgage outcomes through the choice of the mortgage contract. Rather, our results suggest that individuals with limited numerical ability default on their mortgage due to behavior unrelated to the initial choice of their mortgage. PMID:23798401
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
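The two decision functions the study compares, sampling from the posterior versus taking its maximum, are easy to state concretely. The sketch below uses an invented discrete bimodal prior and a Gaussian likelihood purely for illustration; it is not the experimental model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A discrete bimodal prior over "number of objects" (illustrative),
# with peaks at 2 and 8.
values = np.arange(1, 11)
prior = np.array([1, 4, 1, 1, 1, 1, 1, 4, 1, 1], dtype=float)
prior /= prior.sum()

def posterior(noisy_obs, sigma=1.0):
    # Bayes' rule with a Gaussian likelihood around the noisy observation.
    like = np.exp(-0.5 * ((values - noisy_obs) / sigma) ** 2)
    post = prior * like
    return post / post.sum()

def estimate_map(post):
    return values[np.argmax(post)]      # take the maximum of the posterior

def estimate_sample(post):
    return rng.choice(values, p=post)   # draw a sample from the posterior
```

The paper's finding is that participants' responses fall between these two functions, and that such discrete (even quadrimodal) priors are learned within a few hundred trials.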
Towards numerical prediction of cavitation erosion.
Fivel, Marc; Franc, Jean-Pierre; Chandra Roy, Samir
2015-10-01
This paper is intended to provide a potential basis for a numerical prediction of cavitation erosion damage. The proposed method can be divided into two steps. The first step consists in determining the loading conditions due to cavitation bubble collapses. It is shown that individual pits observed on highly polished metallic samples exposed to cavitation for a relatively small time can be considered as the signature of bubble collapse. By combining pitting tests with an inverse finite-element modelling (FEM) of the material response to a representative impact load, loading conditions can be derived for each individual bubble collapse in terms of stress amplitude (in gigapascals) and radial extent (in micrometres). This step requires characterizing as accurately as possible the properties of the material exposed to cavitation. This characterization should include the effect of strain rate, which is known to be high in cavitation erosion (typically of the order of several thousand s⁻¹). Nanoindentation techniques as well as compressive tests at high strain rate using, for example, a split Hopkinson pressure bar test system may be used. The second step consists in developing an FEM approach to simulate the material response to the repetitive impact loads determined in step 1. This includes a detailed analysis of the hardening process (isotropic versus kinematic) in order to properly account for fatigue as well as the development of a suitable model of material damage and failure to account for mass loss. Although the whole method is not yet fully operational, promising results are presented that show that such a numerical method might be, in the long term, an alternative to correlative techniques used so far for cavitation erosion prediction. PMID:26442139
The development of accurate and efficient methods of numerical quadrature
NASA Technical Reports Server (NTRS)
Feagin, T.
1973-01-01
Some new methods for performing numerical quadrature of an integrable function over a finite interval are described. Each method provides a sequence of approximations of increasing order to the value of the integral. Each approximation makes use of all previously computed values of the integrand. The points at which new values of the integrand are computed are selected in such a way that the order of the approximation is maximized. The methods are compared with the quadrature methods of Clenshaw and Curtis, Gauss, Patterson, and Romberg using several examples.
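The paper's specific point-selection scheme is not reproduced here, but Romberg integration, one of the methods it compares against, illustrates the shared idea: each refinement reuses every previously computed integrand value, and extrapolation raises the order of the approximation at each level. The implementation below is a standard textbook sketch.

```python
import numpy as np

def romberg(f, a, b, levels=8):
    """Romberg integration: halve the step at each level, reusing all
    previous integrand values, then apply Richardson extrapolation to
    produce a sequence of approximations of increasing order."""
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2.0
        # Only the new midpoints are evaluated; old values are reused
        # through the previous trapezoid sum.
        new_x = a + h * np.arange(1, 2**i, 2)
        R[i, 0] = 0.5 * R[i - 1, 0] + h * np.sum(f(new_x))
        # Richardson extrapolation raises the order at each column.
        for j in range(1, i + 1):
            R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4**j - 1)
    return R[levels - 1, levels - 1]
```

Level i adds only 2^(i−1) new integrand evaluations, so the total cost after all levels is the same as a single fine trapezoid rule, while the extrapolated value is far more accurate.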
Accurate numerical solution of compressible, linear stability equations
NASA Technical Reports Server (NTRS)
Malik, M. R.; Chuang, S.; Hussaini, M. Y.
1982-01-01
The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.
The quiet revolution of numerical weather prediction
NASA Astrophysics Data System (ADS)
Bauer, Peter; Thorpe, Alan; Brunet, Gilbert
2015-09-01
Advances in numerical weather prediction represent a quiet revolution because they have resulted from a steady accumulation of scientific knowledge and technological advances over many years that, with only a few exceptions, have not been associated with the aura of fundamental physics breakthroughs. Nonetheless, the impact of numerical weather prediction is among the greatest of any area of physical science. As a computational problem, global weather prediction is comparable to the simulation of the human brain and of the evolution of the early Universe, and it is performed every day at major operational centres across the world.
The predictability problems in numerical weather and climate prediction
NASA Astrophysics Data System (ADS)
Mu, Mu; Wansuo, Duan; Jiacheng, Wang
2002-03-01
The uncertainties caused by errors in the initial states and the parameters of the numerical model are investigated. Three problems of predictability in numerical weather and climate prediction are proposed, which are related to the maximum predictable time, the maximum prediction error, and the maximum admissible errors of the initial values and the parameters in the model, respectively. The three problems are then formulated into nonlinear optimization problems, and effective approaches to deal with these nonlinear optimization problems are provided. The Lorenz model is employed to demonstrate how to use these ideas in dealing with the three problems.
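A crude illustration of the second problem (the maximum prediction error) on the Lorenz-63 system: instead of the paper's nonlinear optimization, the sketch below simply searches random initial perturbations of fixed norm for the one that maximizes the forecast error at the final time. All parameters are standard Lorenz-63 values; the Monte Carlo search is an invented stand-in for a proper optimizer.

```python
import numpy as np

def lorenz_rk4(state, dt, steps, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 system with classical RK4."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    for _ in range(steps):
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return state

rng = np.random.default_rng(1)
x0 = lorenz_rk4(np.array([1.0, 1.0, 1.0]), 0.01, 1000)  # spin up onto attractor
base = lorenz_rk4(x0, 0.01, 200)                        # unperturbed forecast

# Monte Carlo stand-in for the nonlinear optimization: over perturbations
# of fixed norm delta, find the one maximizing the final-time error.
delta = 1e-3
worst = 0.0
for _ in range(200):
    d = rng.standard_normal(3)
    d *= delta / np.linalg.norm(d)
    err = np.linalg.norm(lorenz_rk4(x0 + d, 0.01, 200) - base)
    worst = max(worst, err)
```

Because the system is chaotic, the worst-case error grows well beyond the initial perturbation norm; the paper's optimization-based formulation finds this worst direction systematically rather than by sampling.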
Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations
NASA Astrophysics Data System (ADS)
Bowman, J.; Jensen, S.; McDonald, Mark
2010-10-01
High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
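A minimal sketch of the time-sequence idea for the inverter stage only: hourly DC power is passed through a load-dependent efficiency curve and clipped at the inverter's AC rating, then summed into energy. The rating, efficiency coefficients, and DC profile below are all invented for illustration; a real model would be driven by site irradiance, temperature, soiling, and stowing as described above.

```python
import numpy as np

P_AC_RATED = 500.0   # inverter AC rating in kW (illustrative)

def inverter_ac_power(p_dc_kw):
    """AC output for a given DC input: a simple load-dependent efficiency
    curve (coefficients made up for illustration), with output clipped
    at the inverter's AC rating."""
    load = np.clip(p_dc_kw / P_AC_RATED, 1e-6, None)
    eta = 0.96 - 0.02 / load - 0.005 * load   # low at light load, flat near rating
    eta = np.clip(eta, 0.0, 1.0)
    return np.minimum(p_dc_kw * eta, P_AC_RATED)

# One clear day of hourly DC power from the array (kW, illustrative).
hours = np.arange(24)
p_dc = 600.0 * np.clip(np.sin((hours - 6) * np.pi / 12.0), 0.0, None)

p_ac = inverter_ac_power(p_dc)
day_energy_kwh = p_ac.sum()   # hourly steps, so kW summed over hours = kWh
```

Note the interaction the abstract highlights: with a DC array sized above the AC rating, midday output is clipped, so string arrangement and inverter selection trade peak capture against light-load efficiency.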
Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates
Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.
2013-03-07
In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates
Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.
2013-03-07
In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
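The ordinal-regression idea behind Basophile — fragment charge as an ordered outcome driven by fragment size and basic-residue content — can be sketched with a cumulative-logit model. The weights and thresholds below are invented for illustration; they are not Basophile's fitted coefficients:

```python
import math

def predict_fragment_charge(length, n_basic, precursor_charge):
    """Toy cumulative-logit ordinal model: P(charge >= k) = sigmoid(score - theta_k).
    Weights and thresholds are invented, not Basophile's trained values."""
    score = 0.02 * length + 1.5 * n_basic     # longer, more basic -> more charge
    thresholds = [-1.0, 2.0, 5.0]             # theta_k for charge >= 1, 2, 3
    charge = 0
    for theta in thresholds:
        if 1.0 / (1.0 + math.exp(-(score - theta))) > 0.5:
            charge += 1
    return min(charge, precursor_charge - 1)  # fragment retains less than precursor
```

Because the thresholds are ordered, a short fragment with no basic residues is capped at a low charge, so the model never emits the spurious high-charge fragments the Naive model predicts.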
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time-accurate, general-purpose adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction-correction method that is simple to implement and ensures the time accuracy of the grid. Time-accurate solutions of the 2-D Euler equations for an unsteady shock-vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
Passive samplers accurately predict PAH levels in resident crayfish.
Paulik, L Blair; Smith, Brian W; Bergmann, Alan J; Sower, Greg J; Forsberg, Norman D; Teeguarden, Justin G; Anderson, Kim A
2016-02-15
Contamination of resident aquatic organisms is a major concern for environmental risk assessors. However, collecting organisms to estimate risk is often prohibitively time and resource-intensive. Passive sampling accurately estimates resident organism contamination, and it saves time and resources. This study used low density polyethylene (LDPE) passive water samplers to predict polycyclic aromatic hydrocarbon (PAH) levels in signal crayfish, Pacifastacus leniusculus. Resident crayfish were collected at 5 sites within and outside of the Portland Harbor Superfund Megasite (PHSM) in the Willamette River in Portland, Oregon. LDPE deployment was spatially and temporally paired with crayfish collection. Crayfish visceral and tail tissue, as well as water-deployed LDPE, were extracted and analyzed for 62 PAHs using GC-MS/MS. Freely-dissolved concentrations (Cfree) of PAHs in water were calculated from concentrations in LDPE. Carcinogenic risks were estimated for all crayfish tissues, using benzo[a]pyrene equivalent concentrations (BaPeq). ∑PAH were 5-20 times higher in viscera than in tails, and ∑BaPeq were 6-70 times higher in viscera than in tails. Eating only tail tissue of crayfish would therefore significantly reduce carcinogenic risk compared to also eating viscera. Additionally, PAH levels in crayfish were compared to levels in crayfish collected 10 years earlier. PAH levels in crayfish were higher upriver of the PHSM and unchanged within the PHSM after the 10-year period. Finally, a linear regression model predicted levels of 34 PAHs in crayfish viscera with an associated R-squared value of 0.52 (and a correlation coefficient of 0.72), using only the Cfree PAHs in water. On average, the model predicted PAH concentrations in crayfish tissue within a factor of 2.4 ± 1.8 of measured concentrations. This affirms that passive water sampling accurately estimates PAH contamination in crayfish. Furthermore, the strong predictive ability of this simple model suggests
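The final predictive step is, in essence, a linear regression from freely-dissolved water concentrations to tissue concentrations; such regressions are commonly fitted on log-transformed values, which is the assumption made in this sketch (the study regressed 34 PAHs; the numbers below are synthetic):

```python
import math

def fit_loglog(cfree, tissue):
    """Ordinary least squares on log10-transformed concentrations.
    Returns (slope, intercept) of log10(tissue) = slope*log10(cfree) + intercept."""
    xs = [math.log10(c) for c in cfree]
    ys = [math.log10(t) for t in tissue]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def predict_tissue(cfree, slope, intercept):
    """Back-transform the regression to a concentration prediction."""
    return 10 ** (intercept + slope * math.log10(cfree))
```

The study's reported factor-of-2.4 average prediction error corresponds to the scatter of measured points around this fitted line in log space.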
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng
Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model and the Menter SST model. For the k-ω and SST models, the compressibility correction, pressure dilatation and low Reynolds number correction were considered, and the influence of these corrections on flow properties is discussed by comparison with results obtained without them. The emphasis is on the assessment and evaluation of the turbulence models in predicting heat transfer across a range of hypersonic flows, with comparison to experimental data. This will enable establishing a factor of safety for the design of thermal protection systems of hypersonic vehicles.
Accurate Prediction of Binding Thermodynamics for DNA on Surfaces
Vainrub, Arnold; Pettitt, B. Montgomery
2011-01-01
For DNA mounted on surfaces for microarrays, microbeads and nanoparticles, the nature of the random attachment of oligonucleotide probes to an amorphous surface gives rise to a locally inhomogeneous probe density. These fluctuations of the probe surface density are inherent to all common surface or bead platforms, regardless if they exploit either an attachment of pre-synthesized probes or probes synthesized in situ on the surface. Here, we demonstrate for the first time the crucial role of the probe surface density fluctuations in performance of DNA arrays. We account for the density fluctuations with a disordered two-dimensional surface model and derive the corresponding array hybridization isotherm that includes a counter-ion screened electrostatic repulsion between the assayed DNA and probe array. The calculated melting curves are in excellent agreement with published experimental results for arrays with both pre-synthesized and in-situ synthesized oligonucleotide probes. The approach developed allows one to accurately predict the melting curves of DNA arrays using only the known sequence dependent hybridization enthalpy and entropy in solution and the experimental macroscopic surface density of probes. This opens the way to high precision theoretical design and optimization of probes and primers in widely used DNA array-based high-throughput technologies for gene expression, genotyping, next-generation sequencing, and surface polymerase extension. PMID:21972932
Accurate indel prediction using paired-end short reads
2013-01-01
Background: One of the major open challenges in next generation sequencing (NGS) is the accurate identification of structural variants such as insertions and deletions (indels). Current methods for indel calling assign scores to different types of evidence or counter-evidence for the presence of an indel, such as the number of split read alignments spanning the boundaries of a deletion candidate or reads that map within a putative deletion. Candidates with a score above a manually defined threshold are then predicted to be true indels. As a consequence, structural variants detected in this manner contain many false positives. Results: Here, we present a machine learning based method which is able to discover and distinguish true from false indel candidates in order to reduce the false positive rate. Our method identifies indel candidates using a discriminative classifier based on features of split read alignment profiles and trained on true and false indel candidates that were validated by Sanger sequencing. We demonstrate the usefulness of our method with paired-end Illumina reads from 80 genomes of the first phase of the 1001 Genomes Project ( http://www.1001genomes.org) in Arabidopsis thaliana. Conclusion: In this work we show that indel classification is a necessary step to reduce the number of false positive candidates. We demonstrate that missing classification may lead to spurious biological interpretations. The software is available at: http://agkb.is.tuebingen.mpg.de/Forschung/SV-M/. PMID:23442375
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model's tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of a tuning exercise with a top-end global NWP model are presented.
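The EPPES loop — draw parameter values from a proposal distribution, score each ensemble member against verifying observations, feed the scores back into the proposal — can be sketched for a single-parameter toy model. All numerical choices (Gaussian likelihood variance, proposal floor, the linear "forecast model") are assumptions for illustration:

```python
import math
import random

def eppes_sketch(obs, simulate, mu=1.0, sigma=1.0,
                 n_ens=50, n_iter=30, obs_var=0.1, seed=1):
    """Toy EPPES-style loop for one tunable parameter:
    (i) draw an ensemble of parameter values from a Gaussian proposal,
    (ii) weight each member by a Gaussian likelihood against observations,
    (iii) feed the weighted moments back into the proposal."""
    rng = random.Random(seed)
    for _ in range(n_iter):
        thetas = [rng.gauss(mu, sigma) for _ in range(n_ens)]
        logw = []
        for th in thetas:
            err = sum((o - s) ** 2 for o, s in zip(obs, simulate(th)))
            logw.append(-0.5 * err / obs_var)
        m = max(logw)                              # stabilize the exponentials
        w = [math.exp(l - m) for l in logw]
        tot = sum(w)
        mu = sum(wi * th for wi, th in zip(w, thetas)) / tot
        var = sum(wi * (th - mu) ** 2 for wi, th in zip(w, thetas)) / tot
        sigma = max(math.sqrt(var), 0.1)           # floor keeps the proposal alive
    return mu, sigma

# toy "forecast model": y = a * x at fixed points, true parameter a = 2
xs = [0.1 * i for i in range(1, 11)]
obs = [2.0 * x for x in xs]
mu, sigma = eppes_sketch(obs, lambda a: [a * x for x in xs])
```

The proposal mean drifts toward the parameter value that best fits the verifying observations, mirroring how EPPES detects wrongly specified parameters without any extra forecast runs beyond the ensemble itself.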
IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, a need heightened by the quick evolution of space programs and by the necessity of adapting them to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance at the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, these can be consolidated through a specific analysis activity involving several techniques and implying additional effort and time. This empirical approach thus yields only approximate values, not necessarily accurate or consistent, inducing inaccuracy in the results and, consequently, difficulties in ranking the performance of multiple options, as well as an increase in processing time. This is a classical harsh fact of preliminary design system studies, insufficiently discussed to date. It appears therefore highly desirable to have, for all evaluation activities, a reliable, fast and easy-to-use method for predicting weight or mass fraction. Additionally, the method should allow a pre-selection of alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, with the objective of determining a parametric formulation of the mass fraction expressed from a limited number of parameters available at the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator
Numerical noise prediction in fluid machinery
NASA Astrophysics Data System (ADS)
Pantle, Iris; Magagnato, Franco; Gabi, Martin
2005-09-01
Numerical methods have become increasingly important in the design and optimization of fluid machinery. However, where noise emission is concerned, one can hardly find standardized prediction methods combining flow and acoustical optimization. Several numerical field methods for sound calculation have been developed; due to the complexity of the flows considered, approaches must be chosen that avoid exhaustive computing. In this contribution the noise of a simple propeller is investigated. The configurations of the calculations comply with an existing experimental setup chosen for evaluation. The in-house CFD solver SPARC used here contains an acoustic module based on the Ffowcs Williams-Hawkings acoustic analogy. From the flow results of the time-dependent Large Eddy Simulation, the time-dependent acoustic sources are extracted and passed to the acoustic module, where the relevant sound pressure levels are calculated. The difficulties that arise in proceeding from open to closed rotors and from gas to liquid are discussed.
A numerical method for predicting hypersonic flowfields
NASA Technical Reports Server (NTRS)
Maccormack, Robert W.; Candler, Graham V.
1989-01-01
The flow about a body traveling at hypersonic speed is energetic enough to cause the atmospheric gases to chemically react and reach states in thermal nonequilibrium. The prediction of hypersonic flowfields requires a numerical method capable of solving the conservation equations of fluid flow, the chemical rate equations for species formation and dissociation, and the energy transfer relations between translational and vibrational temperature states. Because the number of equations to be solved is large, the numerical method should also be as efficient as possible. The proposed paper presents a fully implicit method that fully couples the solution of the fluid flow equations with the gas physics and chemistry relations. The method flux-splits the inviscid flow terms, central-differences the viscous terms, preserves element conservation in the strong chemistry source terms, and solves the resulting block matrix equation by Gauss-Seidel line relaxation.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1988-01-01
This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with the classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and superior capability to capture the thermal stress waves induced due to boundary heating.
Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin
2015-01-01
Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale. PMID:26198229
The birth of numerical weather prediction
NASA Astrophysics Data System (ADS)
Wiin-Nielsen, A.
1991-08-01
The paper describes the major events leading gradually to operational, numerical, short-range predictions for the large-scale atmospheric flow. The theoretical foundation, starting with Rossby's studies of the linearized barotropic equation and ending a decade and a half later with the general formulation of the quasi-geostrophic, baroclinic model by Charney and Phillips, is described. The problems connected with the very long waves and the inconsistencies of the geostrophic approximation, which were major obstacles in the first experimental forecasts, are discussed. The resulting changes to divergent barotropic and baroclinic models and to the use of the balance equation are described. After the discussion of the theoretical foundation, the paper describes the major developments leading to the Meteorology Project at the Institute for Advanced Study under the leadership of John von Neumann and Jule Charney, followed by the establishment of the Joint Numerical Weather Prediction Unit in Suitland, Maryland. The interconnected developments in Europe, taking place more-or-less at the same time, are described by concentrating on the activities in Stockholm, where the barotropic model was used in many experiments leading also to operational forecasts. The further developments resulting in the use of the primitive equations and the formulation of medium-range forecasting models are not included in the paper.
Accurately Predicting Complex Reaction Kinetics from First Principles
NASA Astrophysics Data System (ADS)
Green, William
Many important systems contain a multitude of reactive chemical species, some of which react on a timescale faster than collisional thermalization, i.e. they never achieve a Boltzmann energy distribution. Usually it is impossible to fully elucidate the processes by experiments alone. Here we report recent progress toward predicting the time-evolving composition of these systems a priori: how unexpected reactions can be discovered on the computer, how reaction rates are computed from first principles, and how the many individual reactions are efficiently combined into a predictive simulation for the whole system. Some experimental tests of the a priori predictions are also presented.
Is Three-Dimensional Soft Tissue Prediction by Software Accurate?
Nam, Ki-Uk; Hong, Jongrak
2015-11-01
The authors assessed whether virtual surgery, performed with a soft tissue prediction program, could correctly simulate the actual surgical outcome, focusing on soft tissue movement. Preoperative and postoperative computed tomography (CT) data for 29 patients who had undergone orthognathic surgery were obtained and analyzed using the Simplant Pro software. The program made a predicted soft tissue image (A) based on presurgical CT data. After the operation, we obtained actual postoperative CT data, from which an actual soft tissue image (B) was generated. Finally, the 2 images (A and B) were superimposed and the differences between them analyzed. Results were grouped in 2 classes: absolute values and vector values. In the absolute values, the left mouth corner was the most significant error point (2.36 mm). The right mouth corner (2.28 mm), labrale inferius (2.08 mm), and the pogonion (2.03 mm) also had significant errors. In vector values, prediction of the right-left direction had a left-sided tendency, the superior-inferior a superior tendency, and the anterior-posterior an anterior tendency. As a result, with this program, the predicted positions of points tended to be located more to the left, anterior, and superior than in the actual outcome. There is a need to improve the prediction accuracy for soft tissue images. Such software is particularly valuable in predicting craniofacial soft tissue landmarks, such as the pronasale. With this software, landmark positions were most inaccurate in anterior-posterior predictions. PMID:26594988
Accurate perception of negative emotions predicts functional capacity in schizophrenia.
Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J
2014-04-30
Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions were not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration. PMID:24524947
Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques
Petersen, Richard C.
2014-01-01
Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis against other more practical and accurate fracture toughness results obtained by energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for the strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. To accentuate toughness differences, polymer matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms
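The energy methods named above reduce numerically to integrating the load/deflection curve; for example, work of fracture is the absorbed energy divided by the fracture surface area. A sketch using the trapezoidal rule, with a synthetic triangular curve and illustrative specimen geometry (not the paper's data):

```python
def trapezoid(xs, ys):
    """Area under a sampled curve by the trapezoidal rule."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

def work_of_fracture(deflection_mm, load_N, width_mm, thickness_mm):
    """WOF = absorbed energy / fracture surface area, returned in J/m^2."""
    energy_mJ = trapezoid(deflection_mm, load_N)   # N*mm = mJ
    area_mm2 = width_mm * thickness_mm             # nominal fracture cross-section
    return energy_mJ / area_mm2 * 1000.0           # mJ/mm^2 -> J/m^2
```

Resilience and strain energy release follow from the same integral taken over different portions of the curve, which is why load and deflection data alone suffice for these toughness measures.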
Towards Accurate Ab Initio Predictions of the Spectrum of Methane
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Kwak, Dochan (Technical Monitor)
2001-01-01
We have carried out extensive ab initio calculations of the electronic structure of methane, and these results are used to compute vibrational energy levels. We include basis set extrapolations, core-valence correlation, relativistic effects, and Born-Oppenheimer breakdown terms in our calculations. Our ab initio predictions of the lowest lying levels are superb.
Standardized EEG interpretation accurately predicts prognosis after cardiac arrest
Rossetti, Andrea O.; van Rootselaar, Anne-Fleur; Wesenberg Kjaer, Troels; Horn, Janneke; Ullén, Susann; Friberg, Hans; Nielsen, Niklas; Rosén, Ingmar; Åneman, Anders; Erlinge, David; Gasche, Yvan; Hassager, Christian; Hovdenes, Jan; Kjaergaard, Jesper; Kuiper, Michael; Pellis, Tommaso; Stammet, Pascal; Wanscher, Michael; Wetterslev, Jørn; Wise, Matt P.; Cronberg, Tobias
2016-01-01
Objective: To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. Methods: In this cohort study, 4 EEG specialists, blinded to outcome, evaluated prospectively recorded EEGs in the Target Temperature Management trial (TTM trial), which randomized patients to 33°C vs 36°C. Routine EEG was performed in patients still comatose after rewarming. EEGs were classified as highly malignant (suppression, suppression with periodic discharges, burst-suppression), malignant (periodic or rhythmic patterns, pathological or nonreactive background), or benign (absence of malignant features). Poor outcome was defined as a best Cerebral Performance Category score of 3–5 within 180 days. Results: Eight TTM sites randomized 202 patients. EEGs were recorded in 103 patients at a median of 77 hours after cardiac arrest; 37% had a highly malignant EEG and all had a poor outcome (specificity 100%, sensitivity 50%). Any malignant EEG feature had a low specificity for predicting poor prognosis (48%), but if 2 malignant EEG features were present, specificity increased to 96% (p < 0.001). Specificity and sensitivity were not significantly affected by targeted temperature or sedation. A benign EEG was found in 1% of the patients with a poor outcome. Conclusions: Highly malignant EEG after rewarming reliably predicted poor outcome in half of the patients without false predictions. An isolated finding of a single malignant feature did not predict poor outcome, whereas a benign EEG was highly predictive of a good outcome. PMID:26865516
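The specificity and sensitivity figures quoted above follow from simple confusion-matrix counts. The counts below are hypothetical, chosen only to reproduce the reported pattern (no false positives, half of poor outcomes detected).

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of poor-outcome patients flagged
    specificity = tn / (tn + fp)   # fraction of good-outcome patients not flagged
    return sensitivity, specificity

# Hypothetical counts consistent with the abstract's pattern:
# every highly malignant EEG corresponded to a poor outcome.
sens, spec = sens_spec(tp=38, fn=38, tn=27, fp=0)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# -> sensitivity=0.50, specificity=1.00
```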
PredictSNP: Robust and Accurate Consensus Classifier for Prediction of Disease-Related Mutations
Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D.; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri
2014-01-01
Single nucleotide variants represent a prevalent form of genetic variation. Mutations in coding regions are frequently associated with the development of various genetic diseases. Computational tools for predicting the effects of mutations on protein function are therefore very important for the analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement is hindered by large overlaps between training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we have constructed three independent datasets by removing all duplicities, inconsistencies and mutations previously used in the training of the evaluated tools. The benchmark dataset, containing over 43,000 mutations, was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best-performing tools were combined into a consensus classifier, PredictSNP, resulting in significantly improved prediction performance; at the same time, the consensus classifier returned results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp. PMID:24453961
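A consensus classifier of the kind described can be sketched as a majority vote over the individual tools' calls. This is a minimal, unweighted sketch with hypothetical labels; the published PredictSNP combines tool outputs with more care (e.g., accounting for per-tool performance), but the sketch shows why a consensus can answer even when some tools cannot.

```python
from collections import Counter

def consensus_predict(predictions):
    """Majority vote over per-tool calls ('deleterious'/'neutral').
    Tools that returned no result (None) are skipped, so the consensus
    can still answer when individual tools abstain."""
    votes = [p for p in predictions if p is not None]
    if not votes:
        return None
    return Counter(votes).most_common(1)[0][0]

# One mutation scored by six hypothetical tools; two abstain.
tool_outputs = ["deleterious", "deleterious", None, "neutral", "deleterious", None]
print(consensus_predict(tool_outputs))  # -> deleterious
```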
How Accurately Can We Predict Eclipses for Algol? (Poster abstract)
NASA Astrophysics Data System (ADS)
Turner, D.
2016-06-01
(Abstract only) beta Persei, or Algol, is a well-known eclipsing binary system consisting of a late B-type dwarf that is regularly eclipsed by a GK subgiant every 2.867 days. Eclipses, which last about 8 hours, are regular enough that predictions for times of minima are published in various places, Sky & Telescope magazine and The Observer's Handbook, for example. But the eclipse minimum lasts less than half an hour, whereas subtle errors in the star's current ephemeris can result in predictions that are off by a few hours or more. The Algol system is fairly complex: the Algol A and Algol B eclipsing pair is also orbited by Algol C with an orbital period of nearly 2 years. Added to that are complex long-term O-C variations with a periodicity of almost two centuries that, although suggested by Hoffmeister to be spurious, fit the type of light travel time variations expected for a fourth star belonging to the system. The AB sub-system also undergoes mass transfer events that add complexities to its O-C behavior. Is it actually possible to predict precise times of eclipse minima for Algol months in advance given such complications, or is it better to encourage ongoing observations of the star so that O-C variations can be tracked in real time?
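Eclipse predictions rest on a linear ephemeris, T_min = T0 + P*E, and the poster's point can be made with a small sketch: a tiny period error accumulates into hours of drift. The epoch and period below are illustrative values, not an adopted ephemeris for Algol.

```python
P = 2.867328          # orbital period in days (illustrative, approximate)
T0 = 2445641.5135     # reference epoch of minimum, HJD (illustrative)

def predicted_minimum(t0, period, cycle):
    """Time of eclipse minimum from a linear ephemeris T = T0 + P*E."""
    return t0 + period * cycle

# A period error of just 2e-4 d accumulates to hours after ~20 years:
cycles = 2500                      # about 19.6 years of eclipse cycles
drift_hours = 2e-4 * cycles * 24
print(f"{drift_hours:.1f} h")      # -> 12.0 h
```

Half a day of accumulated drift dwarfs the half-hour eclipse minimum, which is why ongoing O-C monitoring matters.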
High Order Schemes in Bats-R-US for Faster and More Accurate Predictions
NASA Astrophysics Data System (ADS)
Chen, Y.; Toth, G.; Gombosi, T. I.
2014-12-01
BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second-order accurate TVD schemes combined with block-based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented the fifth-order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. Now the high-order schemes have been extended to generalized coordinates, including spherical grids, and to non-uniform AMR grids with dynamic regridding. We present numerical tests that verify the preservation of the free-stream solution and high-order accuracy, as well as robust oscillation-free behavior near discontinuities. We apply the new high-order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second-order scheme with much less computational resources. This is especially important for space weather prediction, which requires faster than real time code execution.
Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting
Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.
2016-01-01
High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518
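The UID tagging strategy lends itself to a simple consensus-collapse sketch: reads sharing a unique molecular identifier descend from one transcript, so a per-position majority vote removes most PCR and sequencing errors. The reads below are hypothetical, and this toy version illustrates only the error-correction idea, not the full MAF bias model.

```python
from collections import Counter, defaultdict

def umi_consensus(tagged_reads):
    """Collapse (uid, sequence) reads into one consensus sequence per UID
    by per-position majority vote (simplified sketch of UID error correction)."""
    groups = defaultdict(list)
    for uid, seq in tagged_reads:
        groups[uid].append(seq)
    consensus = {}
    for uid, seqs in groups.items():
        consensus[uid] = "".join(
            Counter(col).most_common(1)[0][0] for col in zip(*seqs))
    return consensus

reads = [("UID1", "ACGT"), ("UID1", "ACGA"), ("UID1", "ACGT"),
         ("UID2", "TTGC")]
print(umi_consensus(reads))  # -> {'UID1': 'ACGT', 'UID2': 'TTGC'}
```

The single erroneous base in the second UID1 read is voted out, so the diversity estimate counts two molecules rather than three distinct sequences.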
NASA Technical Reports Server (NTRS)
VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.
2000-01-01
The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data, because measurements of the necessary detail are rare in high-speed machines. In this paper we compare measured tip clearance flow details (e.g., trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.
Accurate predictions for the production of vaporized water
Morin, E.; Montel, F.
1995-12-31
The production of water vaporized in the gas phase is controlled by the local conditions around the wellbore. The pressure gradient applied to the formation creates a sharp increase of the molar water content in the hydrocarbon phase approaching the well; this leads to a drop in the pore water saturation around the wellbore. The extent of the dehydrated zone which is formed is the key controlling the bottom-hole content of vaporized water. The maximum water content in the hydrocarbon phase at a given pressure, temperature and salinity is corrected by capillarity or adsorption phenomena depending on the actual water saturation. Describing the mass transfer of water between the hydrocarbon phases and the aqueous phase in the tubing gives a clear idea of vaporization effects on the formation of scales. Field examples are presented for gas fields with temperatures ranging between 140°C and 180°C, where water vaporization effects are significant. Conditions for salt plugging in the tubing are predicted.
Change in BMI Accurately Predicted by Social Exposure to Acquaintances
Oloritun, Rahman O.; Ouarda, Taha B. M. J.; Moturu, Sai; Madan, Anmol; Pentland, Alex (Sandy); Khayal, Inas
2013-01-01
Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, combining social interaction data automatically captured via mobile phones with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R2. This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, with personal health-related information to explain change in BMI. This is the first study taking into account both interactions at different levels of social closeness and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also of social interactions with the people we are exposed to, even people we may not consider close friends. PMID
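Model ranking by AIC and R² as described can be sketched as follows. The data and fitted values are synthetic stand-ins, not the study's; the AIC uses the standard Gaussian-error form.

```python
import numpy as np

def r2_and_aic(y, y_hat, k):
    """R^2 and Akaike Information Criterion for a least-squares fit
    with k estimated parameters (Gaussian-error form of AIC)."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    tss = float(np.sum((y - np.mean(y)) ** 2))
    r2 = 1.0 - rss / tss
    aic = n * np.log(rss / n) + 2 * k
    return r2, aic

# Toy data: a model recovering most of the variation in a response.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 2.0 * x + rng.normal(0, 0.1, 40)
y_hat = 2.0 * x                 # hypothetical fitted values
r2, aic = r2_and_aic(y, y_hat, k=2)
print(f"R2={r2:.2f}, AIC={aic:.1f}")
```

Among candidate models, one prefers the lowest AIC (which penalizes extra parameters) together with a high R².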
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
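The actuator disk concept mentioned above reduces a turbine to a momentum sink characterized by a single induction factor, which is why it cannot capture device-specific wake structure. A minimal sketch of the classical one-dimensional momentum theory (not the cross-flow parameterization the authors pursue) makes this concrete.

```python
def actuator_disk_cp(a):
    """Ideal power coefficient C_P = 4 a (1 - a)^2 from one-dimensional
    momentum theory, where a is the axial induction factor."""
    return 4.0 * a * (1.0 - a) ** 2

# Sweep induction factors to locate the optimum (the Betz limit, 16/27).
cp_best, a_best = max((actuator_disk_cp(i / 1000.0), i / 1000.0)
                      for i in range(500))
print(round(cp_best, 3), round(a_best, 3))  # -> 0.593 0.333
```

Everything about the device enters through `a` alone, so two turbines with very different near-wakes are indistinguishable to this model.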
Seth A Veitzer
2008-10-21
Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in an HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.
Numerical prediction of freezing fronts in cryosurgery: comparison with experimental results.
Fortin, André; Belhamadia, Youssef
2005-08-01
Recent developments in scientific computing now make it possible to consider realistic applications of numerical modelling in medicine. In this work, a numerical method is presented for the simulation of the phase change occurring in cryosurgery applications. The ultimate goal of these simulations is to accurately predict the freezing front position and the thermal history inside the ice ball, which is essential to determine whether cancerous cells have been completely destroyed. A semi-phase field formulation including blood flow considerations is employed for the simulations. Numerical results are enhanced by the introduction of an anisotropic remeshing strategy. The numerical procedure is validated by comparing the predictions of the model with experimental results. PMID:16298846
Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations
Bao, Weizhu (bao@math.nus.edu.sg); Yang, Li (yangli@nus.edu.sg)
2007-08-10
In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are: (i) the application of a time-splitting spectral discretization for the Schroedinger-type equation in KGS; (ii) the utilization of Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; and (iii) solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for the linear/nonlinear terms in the time discretization. The numerical methods are either explicit, or implicit but explicitly solvable; they are unconditionally stable, and of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as the dynamics of a 2D problem in KGS.
NUMERICAL MODELS FOR PREDICTING WATERSHED ACIDIFICATION
Three numerical models of watershed acidification, including the MAGIC II, ETD, and ILWAS models, are reviewed, and a comparative study is made of the specific process formulations that are incorporated in the models to represent hydrological, geochemical, and biogeochemical proc...
Sub-kilometer Numerical Weather Prediction in complex urban areas
NASA Astrophysics Data System (ADS)
Leroyer, S.; Bélair, S.; Husain, S.; Vionnet, V.
2013-12-01
A sub-kilometer atmospheric modeling system with grid spacings of 2.5 km, 1 km and 250 m, including urban processes, is currently being developed at the Meteorological Service of Canada (MSC) in order to provide more accurate weather forecasts at the city scale. Atmospheric lateral boundary conditions are provided by the 15-km Canadian Regional Deterministic Prediction System (RDPS). Surface physical processes are represented with the Town Energy Balance (TEB) model for built-up covers and with the Interactions between the Surface, Biosphere, and Atmosphere (ISBA) land surface model for natural covers. In this study, several research experiments over large metropolitan areas using observational networks at the urban scale are presented, with a special emphasis on the representation of local atmospheric circulations and their impact on extreme weather forecasting. First, numerical simulations are performed over the Vancouver metropolitan area during a summertime Intense Observing Period (IOP of 14-15 August 2008) of the Environmental Prediction in Canadian Cities (EPiCC) observational network. The influence of horizontal resolution on the fine-scale representation of the sea-breeze development over the city is highlighted (Leroyer et al., 2013). Then severe storm cases occurring in summertime within the Greater Toronto Area (GTA) are simulated. In view of supporting the 2015 Pan American and Parapan American Games to be held in the GTA, a dense observational network has recently been deployed over this region to support model evaluations at the urban and meso scales. In particular, simulations are conducted for the case of 8 July 2013, when exceptional rainfall was recorded. Leroyer, S., S. Bélair, J. Mailhot, S.Z. Husain, 2013: Sub-kilometer Numerical Weather Prediction in an Urban Coastal Area: A case study over the Vancouver Metropolitan Area, submitted to Journal of Applied Meteorology and Climatology.
NASA Astrophysics Data System (ADS)
Garrison, Stephen L.
2005-07-01
The combination of molecular simulations and potentials obtained from quantum chemistry is shown to provide reasonably accurate thermodynamic property predictions. Gibbs ensemble Monte Carlo simulations are used to understand the effects of small perturbations to various regions of the model Lennard-Jones 12-6 potential. However, when the phase behavior and second virial coefficient are scaled by the critical properties calculated for each potential, the results obey a corresponding-states relation, suggesting a non-uniqueness problem for interaction potentials fit to experimental phase behavior. Several variations of a procedure collectively referred to as quantum mechanical Hybrid Methods for Interaction Energies (HM-IE) are developed and used to accurately estimate interaction energies from CCSD(T) calculations with a large basis set in a computationally efficient manner for the neon-neon, acetylene-acetylene, and nitrogen-benzene systems. Using these results and methods, an ab initio, pairwise-additive, site-site potential for acetylene is determined and then improved using results from molecular simulations with this initial potential. The initial simulation results also indicate that only a limited range of energies is important for accurate phase behavior predictions. Second virial coefficients calculated from the improved potential indicate that one set of experimental data in the literature is likely erroneous. This prescription is then applied to methanethiol. Difficulties in modeling the effects of the lone pair electrons suggest that charges on the lone pair sites negatively impact the ability of the intermolecular potential to describe certain orientations, but that the lone pair sites may be necessary to reasonably duplicate the interaction energies for several orientations. Two possible methods for incorporating the effects of three-body interactions into simulations within the pairwise-additivity formulation are also developed. A low density
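Second virial coefficients of the kind used above to vet experimental data follow directly from a pair potential by quadrature. Here is a sketch for the Lennard-Jones 12-6 model in reduced units; the cutoff and grid are arbitrary numerical choices.

```python
import numpy as np

def b2_reduced(T_star):
    """Reduced second virial coefficient B2/sigma^3 for the Lennard-Jones
    12-6 potential: B2 = -2*pi * integral_0^inf (exp(-u/kT) - 1) r^2 dr."""
    r = np.linspace(1e-6, 30.0, 200001)        # reduced separation r/sigma
    dr = r[1] - r[0]
    u = 4.0 * (r**-12 - r**-6)                 # reduced pair energy u/epsilon
    integrand = (np.exp(-u / T_star) - 1.0) * r**2
    return -2.0 * np.pi * np.sum(integrand) * dr

# Sign change across the Boyle temperature (T* ~ 3.4 for LJ):
print(b2_reduced(1.0) < 0.0 < b2_reduced(10.0))  # -> True
```

Because B2(T) is an integral over the full potential, comparing computed and measured virial coefficients probes the potential outside the narrow energy range that fixes the phase envelope.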
Numerical prediction of turbulent oscillating flow and associated heat transfer
NASA Astrophysics Data System (ADS)
Koehler, W. J.; Patankar, S. V.; Ibele, W. E.
1991-08-01
A crucial point for the further development of engines is the optimization of their heat exchangers, which operate under oscillatory flow conditions. It has been found that the most important thermodynamic uncertainties in Stirling engine designs for space power are in the heat transfer between gas and metal in all engine components and in the pressure drop across the heat exchanger components. So far, performance codes cannot predict the power output of a Stirling engine reliably enough when used for a wide variety of engines. Thus, there is a strong need for better performance codes. However, a performance code is not concerned with the details of the flow; this information must be provided externally. While analytical relationships exist for laminar oscillating flow, there has been hardly any information about transitional and turbulent oscillating flow that could be introduced into the performance codes. In 1986, a survey by Seume and Simon revealed that most Stirling engine heat exchangers operate in the transitional and turbulent regime. Consequently, research has since focused on the unresolved issue of transitional and turbulent oscillating flow and heat transfer. Since 1988, the University of Minnesota oscillating flow facility has obtained experimental data on transitional and turbulent oscillating flow. However, since experiments in this field are extremely difficult, lengthy, and expensive, it is advantageous to numerically simulate the flow and heat transfer accurately from first principles. Work done at the University of Minnesota on the development of such a numerical simulation is summarized.
Numerical prediction of turbulent oscillating flow and associated heat transfer
NASA Technical Reports Server (NTRS)
Koehler, W. J.; Patankar, S. V.; Ibele, W. E.
1991-01-01
A crucial point for the further development of engines is the optimization of their heat exchangers, which operate under oscillatory flow conditions. It has been found that the most important thermodynamic uncertainties in Stirling engine designs for space power are in the heat transfer between gas and metal in all engine components and in the pressure drop across the heat exchanger components. So far, performance codes cannot predict the power output of a Stirling engine reliably enough when used for a wide variety of engines. Thus, there is a strong need for better performance codes. However, a performance code is not concerned with the details of the flow; this information must be provided externally. While analytical relationships exist for laminar oscillating flow, there has been hardly any information about transitional and turbulent oscillating flow that could be introduced into the performance codes. In 1986, a survey by Seume and Simon revealed that most Stirling engine heat exchangers operate in the transitional and turbulent regime. Consequently, research has since focused on the unresolved issue of transitional and turbulent oscillating flow and heat transfer. Since 1988, the University of Minnesota oscillating flow facility has obtained experimental data on transitional and turbulent oscillating flow. However, since experiments in this field are extremely difficult, lengthy, and expensive, it is advantageous to numerically simulate the flow and heat transfer accurately from first principles. Work done at the University of Minnesota on the development of such a numerical simulation is summarized.
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for such a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file defining the radiation source and the exposed person, in order to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through a dialogue method on an ordinary personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure. PMID:17510203
A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation
NASA Astrophysics Data System (ADS)
Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin
2016-07-01
In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model, a nonlinear singular ordinary differential equation on a semi-infinite interval. Using the quasilinearization method (QLM), the problem is converted into a sequence of linear ordinary differential equations whose solutions approach that of the original equation. For the first time, the rational Euler (RE) and FRE functions have been constructed from Euler polynomials. In addition, the equation is solved on the semi-infinite domain, without truncation to a finite domain, by taking the FRE as basis functions for the collocation method. This method reduces the solution of the problem to the solution of a system of algebraic equations. We demonstrate that the new algorithm is efficient for obtaining y'(0), y(x), and y'(x). Comparison with some numerical and analytical solutions shows that the present solution is highly accurate.
Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.
Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique
2013-06-01
The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; an accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. In consequence, this leads to concrete recommendations, whereby the obtained results are not discussed for their medical relevance but for the evaluation of their quality. This investigation might hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530
PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release
NASA Astrophysics Data System (ADS)
Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.
2016-09-01
The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
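For constant conditions, the analytic modal solution that PolyPole-1 builds on reduces, for an idealized spherical grain with zero boundary concentration, to the classical Booth-type series for fractional release. A minimal sketch in Python, assuming a dimensionless time tau = D*t/a^2; the PolyPole-1 polynomial corrective terms for time-varying conditions are not shown.

```python
import numpy as np

def fractional_release(tau, nmodes=2000):
    """Booth-type modal solution: fractional gas release from a sphere
    after dimensionless time tau = D*t/a**2, constant conditions.

        f(tau) = 1 - (6/pi**2) * sum_n exp(-(n*pi)**2 * tau) / n**2

    A sketch of the constant-conditions analytic solution only, not the
    PolyPole-1 algorithm itself.
    """
    n = np.arange(1, nmodes + 1)
    return 1.0 - (6.0 / np.pi**2) * np.sum(np.exp(-(n * np.pi)**2 * tau) / n**2)
```

The truncated series starts near 0 at tau = 0 and approaches 1 as the grain empties, as expected of a fractional release.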
Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods
NASA Astrophysics Data System (ADS)
Kozdon, J. E.; Wilcox, L.
2013-12-01
Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.
Numerical simulation for fan broadband noise prediction
NASA Astrophysics Data System (ADS)
Hase, Takaaki; Yamasaki, Nobuhiko; Ooishi, Tsutomu
2011-03-01
In order to elucidate the broadband noise of a fan, numerical simulations of a fan operating at two different rotational speeds are carried out using the three-dimensional unsteady Reynolds-averaged Navier-Stokes (URANS) equations. The computed results are compared with experimental data to assess their accuracy and are found to show good agreement. A method is proposed to evaluate the turbulent kinetic energy in the framework of the Spalart-Allmaras one-equation turbulence model. From the calculation results, the turbulent kinetic energy is visualized as the flow turbulence that generates the broadband noise, and the noise sources are identified.
NASA Astrophysics Data System (ADS)
McNamara, Roger P.; Eagle, C. D.
1992-08-01
Planetary Observer High Accuracy Orbit Prediction Program (POHOP), an existing numerical integrator, was modified with the solar and lunar formulae developed by T.C. Van Flandern and K.F. Pulkkinen to provide the accuracy required to evaluate long-term orbit characteristics of objects in the geosynchronous region. The orbit of a 1000 kg class spacecraft is numerically integrated over 50 years using both the original and the more accurate solar and lunar ephemerides. Results of this study demonstrate that, over the long term, for an object located in the geosynchronous region, the more accurate solar and lunar ephemerides yield predicted positions significantly different from those obtained with the current POHOP ephemeris.
Numerical Simulation of the 2004 Indian Ocean Tsunami: Accurate Flooding and drying in Banda Aceh
NASA Astrophysics Data System (ADS)
Cui, Haiyang; Pietrzak, Julie; Stelling, Guus; Androsov, Alexey; Harig, Sven
2010-05-01
The Indian Ocean Tsunami of December 26, 2004 was one of the largest tsunamis in recent times and led to widespread devastation and loss of life. One of the worst-hit regions was Banda Aceh, the capital of the Aceh province, located in the northern part of Sumatra, 150 km from the source of the earthquake. A German-Indonesian Tsunami Early Warning System (GITEWS) (www.gitews.de) is currently under active development, and the work presented here is carried out within the GITEWS framework. One of the aims of this project is the development of accurate models with which to simulate the propagation, flooding and drying, and run-up of a tsunami. In this context, TsunAWI, an explicit finite element model based on the P1NC-P1 element pair, has been developed by the Alfred Wegener Institute. However, the accurate numerical simulation of flooding and drying requires the conservation of mass and momentum, which is not achieved in the current version of TsunAWI. The P1NC-P1 element guarantees mass conservation in a global sense, yet as we show here it is important to guarantee mass conservation at the local level, that is, within each individual cell. Here an unstructured grid, finite volume ocean model is presented. It is derived from the P1NC-P1 element and is shown to be mass and momentum conserving. A number of simulations are then presented, including dam break problems with flooding over both wet and dry beds, and excellent agreement is found. Finally, we present simulations for Banda Aceh and compare the results to on-site survey data, as well as to results from the original TsunAWI code.
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms through a systematic study of numerical solutions to several nonlinear parabolic equations and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using linear, quadratic, and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at non-modest Reynolds numbers. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated, yielding a substantial reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
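Richardson extrapolation as used above can be sketched in a few lines: combine two approximations computed with step sizes h and h/2 for a method of known order p to cancel the leading error term. The trapezoidal-rule example below is purely illustrative and not the paper's finite element solver.

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n uniform intervals (order p = 2)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def richardson(coarse, fine, p=2):
    """Richardson extrapolation of approximations at steps h and h/2 for a
    method of order p: the O(h**p) error terms cancel, leaving a
    higher-order accurate estimate."""
    return (2**p * fine - coarse) / (2**p - 1)

# integrate exp(x) on [0, 1]; exact value is e - 1
t8 = trapezoid(np.exp, 0.0, 1.0, 8)
t16 = trapezoid(np.exp, 0.0, 1.0, 16)
r = richardson(t8, t16, p=2)
```

Comparing `r` against the exact value isolates the truncation error, exactly the role Richardson extrapolation plays in the error-norm study above.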
Energy expenditure during level human walking: seeking a simple and accurate predictive solution.
Ludlow, Lindsay W; Weyand, Peter G
2016-03-01
Accurate prediction of the metabolic energy that walking requires can inform numerous health, bodily status, and fitness outcomes. We adopted a two-step approach to identifying a concise, generalized equation for predicting level human walking metabolism. Using literature-aggregated values we compared 1) the predictive accuracy of three literature equations: American College of Sports Medicine (ACSM), Pandolf et al., and Height-Weight-Speed (HWS); and 2) the goodness-of-fit possible from one- vs. two-component descriptions of walking metabolism. Literature metabolic rate values (n = 127; speed range = 0.4 to 1.9 m/s) were aggregated from 25 subject populations (n = 5-42) whose means spanned a 1.8-fold range of heights and a 4.2-fold range of weights. Population-specific resting metabolic rates (V̇o2 rest) were determined using standardized equations. Our first finding was that the ACSM and Pandolf et al. equations underpredicted nearly all 127 literature-aggregated values. Consequently, their standard errors of estimate (SEE) were nearly four times greater than those of the HWS equation (4.51 and 4.39 vs. 1.13 ml O2·kg(-1)·min(-1), respectively). For our second comparison, empirical best-fit relationships for walking metabolism were derived from the data set in one- and two-component forms for three V̇o2-speed model types: linear (∝V(1.0)), exponential (∝V(2.0)), and exponential/height (∝V(2.0)/Ht). We found that the proportion of variance (R(2)) accounted for, when averaged across the three model types, was substantially lower for one- vs. two-component versions (0.63 ± 0.1 vs. 0.90 ± 0.03) and the predictive errors were nearly twice as great (SEE = 2.22 vs. 1.21 ml O2·kg(-1)·min(-1)). Our final analysis identified the following concise, generalized equation for predicting level human walking metabolism: V̇o2 total = V̇o2 rest + 3.85 + 5.97·V(2)/Ht (where V is measured in m/s, Ht in meters, and V̇o2 in ml O2·kg(-1)·min(-1)). PMID:26679617
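The concise predictive equation identified above can be coded directly. The sketch below assumes a user-supplied population-specific resting rate (V̇o2 rest), since the paper determines it from standardized equations not reproduced here.

```python
def walking_vo2(speed, height, vo2_rest):
    """Predicted total metabolic rate for level walking, in ml O2/kg/min,
    from the generalized equation reported above:

        VO2_total = VO2_rest + 3.85 + 5.97 * V**2 / Ht

    speed (V) in m/s, height (Ht) in m; vo2_rest is the population-specific
    resting metabolic rate in the same units.
    """
    return vo2_rest + 3.85 + 5.97 * speed**2 / height
```

For example, a 1.70 m walker at 1.3 m/s with a resting rate of 5.0 ml O2/kg/min is predicted to consume roughly 14.8 ml O2/kg/min in total.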
Numerical prediction of axial turbine stage aerodynamics
NASA Technical Reports Server (NTRS)
Mcconnaughey, H. V.; Griffin, L. W.
1990-01-01
A preliminary assessment is made of two NASA-developed unsteady turbine stage computer codes. The methodology and previous partial validation of the codes are briefly outlined. Application of these codes to a Space Shuttle main engine turbine for two sets of operating conditions is then described. Steady and unsteady, two- and three-dimensional results are presented, compared, and discussed. These results include time-mean and instantaneous airfoil pressure distributions and pressure fluctuations, streamlines on the airfoil surfaces and endwalls, and relative total pressure contours at different axial locations in the rotor passage. Although not available at the time of this writing, experimental data for one of the operating conditions simulated are forthcoming and will be used to assess the accuracy of both the unsteady and the steady predictions presented. Issues related to code usage and resource requirements of the two codes are also discussed.
Kanyanta, V; Ivankovic, A; Karac, A
2009-08-01
Fluid-structure interaction (FSI) numerical models are now widely used in predicting blood flow transients, because of the importance of the interaction between the flowing blood and the deforming arterial wall to blood flow behaviour. Unfortunately, most of these FSI models lack rigorous validation and, thus, cannot guarantee the accuracy of their predictions. This paper presents the comprehensive validation of a two-way coupled FSI numerical model, developed to predict flow transients in compliant conduits such as arteries. The model is validated using analytical solutions and experiments conducted on a polyurethane mock artery. Flow parameters such as pressure and axial stress (and precursor) wave speeds, wall deformations and oscillating frequency, fluid velocity and Poisson coupling effects were used as the basis of this validation. Results show very good comparison between numerical predictions, analytical solutions and experimental data; the agreement between the three approaches is generally over 95%. The model also accurately predicts Poisson coupling effects in unsteady flows through flexible pipes, which up to this stage have only been predicted analytically. Therefore, this numerical model can accurately predict flow transients in compliant vessels such as arteries. PMID:19482285
Spray combustion experiments and numerical predictions
NASA Technical Reports Server (NTRS)
Mularz, Edward J.; Bulzan, Daniel L.; Chen, Kuo-Huey
1993-01-01
The next generation of commercial aircraft will include turbofan engines with performance significantly better than those in the current fleet. Control of particulate and gaseous emissions will also be an integral part of the engine design criteria. These performance and emission requirements present a technical challenge for the combustor: control of the fuel and air mixing and control of the local stoichiometry will have to be maintained much more rigorously than with combustors in current production. A better understanding of the flow physics of liquid fuel spray combustion is necessary. This paper describes recent experiments on spray combustion where detailed measurements of the spray characteristics were made, including local drop-size distributions and velocities. Also, an advanced combustor CFD code has been under development and predictions from this code are compared with experimental results. Studies such as these will provide information to the advanced combustor designer on fuel spray quality and mixing effectiveness. Validation of new fast, robust, and efficient CFD codes will also enable the combustor designer to use them as additional design tools for optimization of combustor concepts for the next generation of aircraft engines.
Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows
Johnson, B M; Guan, X; Gammie, C F
2008-04-11
In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second-order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
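The core FARGO idea, advecting each row by its mean orbital motion via an exact integer cell shift plus an interpolation for the fractional remainder, can be sketched in 1D as follows. This toy version uses linear interpolation on a periodic array; the paper's scheme is a more careful, second-order construction that also preserves ∇ · B = 0 on a staggered mesh.

```python
import numpy as np

def orbital_advect(q, shift_cells):
    """Advect a periodic 1D profile by a (generally non-integer) number of
    cells: an exact integer circular shift followed by linear interpolation
    for the fractional remainder. Only the integer part is diffusion-free;
    the fractional part carries the usual interpolation error."""
    i = int(np.floor(shift_cells))
    f = shift_cells - i
    qi = np.roll(q, i)                            # integer part: exact
    return (1.0 - f) * qi + f * np.roll(qi, 1)    # fractional part

# shift a sine wave by 3.7 cells on a 256-cell periodic grid
n = 256
x = 2.0 * np.pi * np.arange(n) / n
out = orbital_advect(np.sin(x), 3.7)
```

Because the bulk of the shift is handled exactly, the effective Courant constraint applies only to the residual (peculiar) motion, which is the point of the FARGO substep.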
Meek, Garrett A; Levine, Benjamin G
2014-07-01
Spikes in the time-derivative coupling (TDC) near surface crossings make the accurate integration of the time-dependent Schrödinger equation in nonadiabatic molecular dynamics simulations a challenge. To address this issue, we present an approximation to the TDC based on a norm-preserving interpolation (NPI) of the adiabatic electronic wave functions within each time step. We apply NPI and two other schemes for computing the TDC in numerical simulations of the Landau-Zener model, comparing the simulated transfer probabilities to the exact solution. Though NPI does not require the analytical calculation of nonadiabatic coupling matrix elements, it consistently yields unsigned population transfer probability errors of ∼0.001, whereas analytical calculation of the TDC yields errors of 0.0-1.0 depending on the time step, the offset of the maximum in the TDC from the beginning of the time step, and the coupling strength. The approximation of Hammes-Schiffer and Tully yields errors intermediate between NPI and the analytical scheme. PMID:26279558
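For reference, the exact Landau-Zener transition probability against which such simulations are benchmarked can be computed directly. The function below is a sketch of the standard formula (in units where ħ = 1 by default), not of the NPI scheme itself; conventions for defining the sweep rate vary between authors.

```python
import math

def landau_zener_p(coupling, sweep_rate, hbar=1.0):
    """Landau-Zener probability of a diabatic (non-adiabatic) passage:

        P = exp(-2*pi*V**2 / (hbar * |d(E1 - E2)/dt|))

    where V is the constant diabatic coupling and sweep_rate is the rate of
    change of the diabatic energy gap at the crossing."""
    return math.exp(-2.0 * math.pi * coupling**2 / (hbar * abs(sweep_rate)))
```

As expected, the passage is fully diabatic (P = 1) for vanishing coupling and becomes adiabatic (P → 0) as the coupling grows or the sweep slows.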
NASA Astrophysics Data System (ADS)
Rey, M.; Nikitin, A. V.; Tyuterev, V.
2014-06-01
Knowledge of near-infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres and brown dwarfs, and for other astrophysical applications [1-3]. For example, atmospheric models have been developed to analyze exoplanets, creating the need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on having adequate and reliable molecular data in extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and line-by-line extraction is clearly not feasible in laboratory measurements. This large amount of data can thus be interpreted only through reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first is based on empirically fitted effective spectroscopic models. The other way of computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. These do not yet reach spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions, including resonance couplings, in a wide spectral range. The final aim of this work is to provide reliable predictions that are quantitatively accurate with respect to the precision of available observations and as complete as possible. All this requires extensive first-principles quantum mechanical calculations, essentially based on three necessary ingredients: (i) accurate intramolecular potential energy surface and dipole moment surface components, well defined over a large range of vibrational displacements, and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical
Numerical Prediction of Dust. Chapter 10
NASA Technical Reports Server (NTRS)
Benedetti, Angela; Baldasano, J. M.; Basart, S.; Benincasa, F.; Boucher, O.; Brooks, M.; Chen, J. P.; Colarco, P. R.; Gong, S.; Huneeus, N.; Jones, L; Lu, S.; Menut, L.; Mulcahy, J.; Nickovic, S.; Morcrette, J.-J.; Perez, C.; Reid, J. S.; Sekiyama, T. T.; Tanaka, T.; Terradellas, E.; Westphal, D. L.; Zhang, X.-Y.; Zhou, C.-H.
2013-01-01
Scientific observations and results are presented, along with numerous illustrations. This work has an interdisciplinary appeal and will engage scholars in geology, geography, chemistry, meteorology and physics, amongst others with an interest in the Earth system and environmental change.
Behavior Laws And Their Influences On Numerical Prediction
Lemoine, Xavier
2007-04-07
Many studies show that improving numerical forming predictions for rolled sheets requires increasingly complex behavior laws, in particular the combination of isotropic and kinematic hardening (mixed hardening) to account for the Bauschinger effect. The present work classifies steel grades with respect to the Bauschinger effect. For some forming cases, it also shows the influence of a mixed hardening law on the numerical prediction, in terms of deformation, thinning, residual stresses, and punch force.
TOPLHA: an accurate and efficient numerical tool for analysis and design of LH antennas
NASA Astrophysics Data System (ADS)
Milanesio, D.; Lancellotti, V.; Meneghini, O.; Maggiora, R.; Vecchi, G.; Bilato, R.
2007-09-01
Auxiliary ICRF heating systems in tokamaks often involve large complex antennas, made up of several conducting straps hosted in distinct cavities that open towards the plasma. The same holds especially true in the LH regime, wherein the antennas comprise arrays of many phased waveguides. Upon observing that the various cavities or waveguides couple to each other only through the EM fields existing over the plasma-facing apertures, we self-consistently formulated the EM problem via a convenient set of multiple coupled integral equations. Subsequent application of the Method of Moments yields a highly sparse algebraic system; formal inversion of the system matrix is therefore not particularly memory demanding, even though the number of unknowns may be quite large (typically 10^5 or so). The overall strategy has been implemented in an enhanced version of TOPICA (Torino Polytechnic Ion Cyclotron Antenna) and in a newly developed code named TOPLHA (Torino Polytechnic Lower Hybrid Antenna). Both are simulation and prediction tools for plasma-facing antennas that incorporate commercial-grade 3D graphic interfaces along with an accurate description of the plasma. In this work we present the newly proposed formulation along with examples of application to real-life large LH antenna systems.
TOPICA: an accurate and efficient numerical tool for analysis and design of ICRF antennas
NASA Astrophysics Data System (ADS)
Lancellotti, V.; Milanesio, D.; Maggiora, R.; Vecchi, G.; Kyrytsya, V.
2006-07-01
The demand for a predictive tool to help in designing ion-cyclotron radio frequency (ICRF) antenna systems for today's fusion experiments has driven the development of codes such as ICANT, RANT3D, and the early development of the TOPICA (TOrino Polytechnic Ion Cyclotron Antenna) code. This paper describes the substantive evolution of the TOPICA formulation and implementation that presently allows it to handle the actual geometry of ICRF antennas (with curved, solid straps, a general-shape housing, Faraday screen, etc) as well as an accurate plasma description, accounting for density and temperature profiles and finite Larmor radius effects. The antenna is assumed to be housed in a recess-like enclosure. Both goals have been attained by formally separating the problem into two parts: the vacuum region around the antenna and the plasma region inside the toroidal chamber. Field continuity and boundary conditions allow the formulation of a set of two coupled integral equations for the unknown equivalent (current) sources; the equations are then reduced to a linear system by a method of moments solution scheme employing 2D finite elements defined over a 3D non-planar surface triangular-cell mesh. In the vacuum region calculations are done in the spatial (configuration) domain, whereas in the plasma region a spectral (wavenumber) representation of fields and currents is adopted, thus permitting a description of the plasma by a surface impedance matrix. Owing to this approach, any plasma model can be used in principle, and at present the FELICE code has been employed. The natural outcomes of TOPICA are the induced currents on the conductors (antenna, housing, etc) and the electric field in front of the plasma, whence the antenna circuit parameters (impedance/scattering matrices), the radiated power and the fields (at locations other than the chamber aperture) are then obtained. An accurate model of the feeding coaxial lines is also included. The theoretical model and its TOPICA
Cas9-chromatin binding information enables more accurate CRISPR off-target prediction
Singh, Ritambhara; Kuscu, Cem; Quinlan, Aaron; Qi, Yanjun; Adli, Mazhar
2015-01-01
The CRISPR system has become a powerful biological tool with a wide range of applications. However, improving targeting specificity and accurately predicting potential off-targets remains a significant goal. Here, we introduce a web-based CRISPR/Cas9 Off-target Prediction and Identification Tool (CROP-IT) that performs improved off-target binding and cleavage site predictions. Unlike existing prediction programs that use DNA sequence information alone, CROP-IT integrates whole-genome-level biological information from existing Cas9 binding and cleavage data sets. Utilizing whole-genome chromatin state information from 125 human cell types further enhances its computational prediction power. Comparative analyses on experimentally validated datasets show that CROP-IT outperforms existing computational algorithms in predicting both Cas9 binding and cleavage sites. With a user-friendly web interface, CROP-IT outputs a scored and ranked list of potential off-targets that enables improved guide RNA design and more accurate prediction of Cas9 binding or cleavage sites. PMID:26032770
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is, in this case, an important parameter in order to fulfill the pharmacokinetics of medications or the time response of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782
NASA Astrophysics Data System (ADS)
Liu, Qianlong; Reifsnider, Kenneth
2012-11-01
The basis of dielectrophoresis (DEP) is the prediction of the force and torque on particles. The classical approach to this prediction is the effective moment method, which, however, is an approximate approach that assumes infinitesimal particles. It is therefore well known that, for finite-sized particles, the DEP approximation becomes inaccurate as the mutual field-particle-wall interactions become strong, a situation presently attracting extensive research for practically significant applications. In the present talk, we provide accurate calculations of the force and torque on the particles from first principles, by directly resolving the local geometry and properties and accurately accounting for the mutual interactions for finite-sized particles with both dielectric polarization and conduction in a sinusoidally steady-state electric field. Because the approach has a significant advantage over other numerical methods in efficiently simulating many closely packed particles, it provides an important, unique, and accurate technique for investigating complex DEP phenomena, for example heterogeneous mixtures containing particle chains, nanoparticle assembly, biological cells, non-spherical effects, etc. This study was supported by the Department of Energy under funding for an EFRC (the HeteroFoaM Center), grant no. DE-SC0001061.
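The classical effective-moment approximation that this work improves upon rests on the Clausius-Mossotti factor for a lossy dielectric sphere. A minimal sketch, assuming complex permittivities of the form eps* = eps - i*sigma/omega; the parameter values in the usage below are illustrative, not taken from the abstract.

```python
import numpy as np

def re_clausius_mossotti(eps_p, sig_p, eps_m, sig_m, omega):
    """Real part of the Clausius-Mossotti factor for the effective-moment
    DEP approximation:

        f_CM = (eps_p* - eps_m*) / (eps_p* + 2*eps_m*),
        eps*  = eps - 1j*sigma/omega

    The time-averaged DEP force on a small sphere of radius R is then
    <F> = 2*pi*eps_m*R**3 * Re[f_CM] * grad(|E_rms|**2), which is exactly
    the infinitesimal-particle limit the abstract goes beyond."""
    ep = eps_p - 1j * sig_p / omega
    em = eps_m - 1j * sig_m / omega
    return ((ep - em) / (ep + 2.0 * em)).real
```

For physical (positive) permittivities and conductivities, Re[f_CM] stays between -0.5 and 1, which bounds the crossover between negative and positive DEP.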
Operational Numerical Prediction of Thunderstorms: It's Just Around the Corner
NASA Astrophysics Data System (ADS)
Droegemeier, Kelvin K.
1996-05-01
The Center for Analysis and Prediction of Storms (CAPS), an NSF Science and Technology Center at the University of Oklahoma, is redefining the notion of local weather forecasts by developing techniques for the numerical prediction of individual spring and winter storms up to 6 hours in advance. In this presentation, I describe the two principal elements of the CAPS program: new techniques being developed to retrieve unobservable parameters from single-Doppler radar data and assimilate them into a forecast system, and a new multi-scale prediction model, the Advanced Regional Prediction System (ARPS), which has been designed specifically for massively parallel computers. Further, I present results from the spring 1995 operational evaluation of the ARPS over Central Oklahoma and discuss how this new technology is being used to help commercial airlines and the defense community utilize small-scale numerical weather forecasts in tactical operations.
Accurate rotor loads prediction using the FLAP (Force and Loads Analysis Program) dynamics code
Wright, A.D.; Thresher, R.W.
1987-10-01
Accurately predicting wind turbine blade loads and response is essential for estimating the fatigue life of wind turbines. There is a clear need in the wind turbine community for validated and user-friendly structural dynamics codes for predicting blade loads and response. At the Solar Energy Research Institute (SERI), a Force and Loads Analysis Program (FLAP) has been refined and validated and is ready for general use. Currently, FLAP is operational on an IBM-PC compatible computer and can be used to analyze both rigid- and teetering-hub configurations. The results of this paper show that FLAP can be used to accurately predict the deterministic loads for rigid-hub rotors. This paper compares analytical predictions to field test measurements for a three-bladed, upwind turbine with a rigid-hub configuration. The deterministic loads predicted by FLAP are compared with 10-min azimuth averages of blade root flapwise bending moments for different wind speeds. 6 refs., 12 figs., 3 tabs.
Accurate prediction of protein–protein interactions from sequence alignments using a Bayesian method
Burger, Lukas; van Nimwegen, Erik
2008-01-01
Accurate and large-scale prediction of protein–protein interactions directly from amino-acid sequences is one of the great challenges in computational biology. Here we present a new Bayesian network method that predicts interaction partners using only multiple alignments of amino-acid sequences of interacting protein domains, without tunable parameters, and without the need for any training examples. We first apply the method to bacterial two-component systems and comprehensively reconstruct two-component signaling networks across all sequenced bacteria. Comparisons of our predictions with known interactions show that our method infers interaction partners genome-wide with high accuracy. To demonstrate the general applicability of our method we show that it also accurately predicts interaction partners in a recent dataset of polyketide synthases. Analysis of the predicted genome-wide two-component signaling networks shows that cognates (interacting kinase/regulator pairs, which lie adjacent on the genome) and orphans (which lie isolated) form two relatively independent components of the signaling network in each genome. In addition, while most genes are predicted to have only a small number of interaction partners, we find that 10% of orphans form a separate class of 'hub' nodes that distribute and integrate signals to and from up to tens of different interaction partners. PMID:18277381
NASA Astrophysics Data System (ADS)
Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.
2007-09-01
BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The high predictive accuracy of our model is due to the accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of our calculated cross section data, used in our model, with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman-shifted wavelengths.
Towards Bridging the Gaps in Holistic Transition Prediction via Numerical Simulations
NASA Technical Reports Server (NTRS)
Choudhari, Meelan M.; Li, Fei; Duan, Lian; Chang, Chau-Lyan; Carpenter, Mark H.; Streett, Craig L.; Malik, Mujeeb R.
2013-01-01
The economic and environmental benefits of laminar flow technology via reduced fuel burn of subsonic and supersonic aircraft cannot be realized without minimizing the uncertainty in drag prediction in general and transition prediction in particular. Transition research under NASA's Aeronautical Sciences Project seeks to develop a validated set of variable fidelity prediction tools with known strengths and limitations, so as to enable "sufficiently" accurate transition prediction and practical transition control for future vehicle concepts. This paper provides a summary of selected research activities targeting the current gaps in high-fidelity transition prediction, specifically those related to the receptivity and laminar breakdown phases of crossflow induced transition in a subsonic swept-wing boundary layer. The results of direct numerical simulations are used to obtain an enhanced understanding of the laminar breakdown region as well as to validate reduced order prediction methods.
Accurate Prediction of Ligand Affinities for a Proton-Dependent Oligopeptide Transporter.
Samsudin, Firdaus; Parker, Joanne L; Sansom, Mark S P; Newstead, Simon; Fowler, Philip W
2016-02-18
Membrane transporters are critical modulators of drug pharmacokinetics, efficacy, and safety. One example is the proton-dependent oligopeptide transporter PepT1, also known as SLC15A1, which is responsible for the uptake of the β-lactam antibiotics and various peptide-based prodrugs. In this study, we modeled the binding of various peptides to a bacterial homolog, PepTSt, and evaluated a range of computational methods for predicting the free energy of binding. Our results show that a hybrid approach (endpoint methods to classify peptides into good and poor binders and a theoretically exact method for refinement) is able to accurately predict affinities, which we validated using proteoliposome transport assays. Applying the method to a homology model of PepT1 suggests that the approach requires a high-quality structure to be accurate. Our study provides a blueprint for extending these computational methodologies to other pharmaceutically important transporter families. PMID:27028887
Accurate Prediction of Ligand Affinities for a Proton-Dependent Oligopeptide Transporter
Samsudin, Firdaus; Parker, Joanne L.; Sansom, Mark S.P.; Newstead, Simon; Fowler, Philip W.
2016-01-01
Membrane transporters are critical modulators of drug pharmacokinetics, efficacy, and safety. One example is the proton-dependent oligopeptide transporter PepT1, also known as SLC15A1, which is responsible for the uptake of the β-lactam antibiotics and various peptide-based prodrugs. In this study, we modeled the binding of various peptides to a bacterial homolog, PepTSt, and evaluated a range of computational methods for predicting the free energy of binding. Our results show that a hybrid approach (endpoint methods to classify peptides into good and poor binders and a theoretically exact method for refinement) is able to accurately predict affinities, which we validated using proteoliposome transport assays. Applying the method to a homology model of PepT1 suggests that the approach requires a high-quality structure to be accurate. Our study provides a blueprint for extending these computational methodologies to other pharmaceutically important transporter families. PMID:27028887
A Single Linear Prediction Filter that Accurately Predicts the AL Index
NASA Astrophysics Data System (ADS)
McPherron, R. L.; Chu, X.
2015-12-01
The AL index is a measure of the strength of the westward electrojet flowing along the auroral oval. It has two components: one from the global DP-2 current system and a second from the DP-1 current that is more localized near midnight. It is generally believed that the index is a very poor measure of these currents because of its dependence on the distance of stations from the source of the two currents. In fact, over season and solar cycle, the coupling strength, defined as the steady-state ratio of the output AL to the input coupling function, varies by a factor of four. There are four factors that lead to this variation. First is the equinoctial effect that modulates coupling strength with peaks (strongest coupling) at the equinoxes. Second is the saturation of the polar cap potential, which decreases coupling strength as the strength of the driver increases. Since saturation occurs more frequently at solar maximum, we obtain the result that maximum coupling strength occurs at equinox at solar minimum. A third factor is ionospheric conductivity, with stronger coupling at summer solstice as compared to winter. The fourth factor is the definition of a solar wind coupling function appropriate to a given index. We have developed an optimum coupling function depending on solar wind speed, density, transverse magnetic field, and IMF clock angle which is better than previous functions. Using this we have determined the seasonal variation of coupling strength and developed an inverse function that modulates the optimum coupling function so that all seasonal variation is removed. In a similar manner we have determined the dependence of coupling strength on solar wind driver strength. The inverse of this function is used to scale a linear prediction filter, thus eliminating the dependence on driver strength. Our result is a single linear filter that is adjusted in a nonlinear manner by driver strength and an optimum coupling function that is seasonally modulated.
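The core of the approach above, convolving a solar-wind coupling function with a fixed impulse response to predict an index, can be sketched as a least-squares FIR filter fit. This is illustrative only: the function and variable names are ours, and the nonlinear driver-strength scaling and seasonal modulation described in the abstract are not reproduced.

```python
import numpy as np

def fit_prediction_filter(driver, response, n_taps):
    """Least-squares FIR filter mapping a solar-wind coupling
    function (driver) to a geomagnetic index (response)."""
    rows = len(driver) - n_taps + 1
    # Each row holds the most recent n_taps driver samples,
    # newest first, so a dot product is a causal convolution.
    X = np.array([driver[i:i + n_taps][::-1] for i in range(rows)])
    y = response[n_taps - 1:]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

def predict(driver, h):
    """Apply the filter to the driver (valid part only)."""
    return np.convolve(driver, h, mode="valid")
```

In the paper's scheme, a single such filter is reused for all conditions; the inverse seasonal and saturation functions rescale its input rather than refitting the taps.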
A review of the kinetic detail required for accurate predictions of normal shock waves
NASA Technical Reports Server (NTRS)
Muntz, E. P.; Erwin, Daniel A.; Pham-Van-diep, Gerald C.
1991-01-01
Several aspects of the kinetic models used in the collision phase of Monte Carlo direct simulations have been studied. Accurate molecular velocity distribution function predictions require a significantly increased number of computational cells in one maximum slope shock thickness, compared to predictions of macroscopic properties. The shape of the highly repulsive portion of the interatomic potential for argon is not well modeled by conventional interatomic potentials; this portion of the potential controls high Mach number shock thickness predictions, indicating that the specification of the energetic repulsive portion of interatomic or intermolecular potentials must be chosen with care for correct modeling of nonequilibrium flows at high temperatures. It has been shown for inverse power potentials that the assumption of variable hard sphere scattering provides accurate predictions of the macroscopic properties in shock waves, by comparison with simulations in which differential scattering is employed in the collision phase. On the other hand, velocity distribution functions are not well predicted by the variable hard sphere scattering model for softer potentials at higher Mach numbers.
Can phenological models predict tree phenology accurately under climate change conditions?
NASA Astrophysics Data System (ADS)
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2014-05-01
The onset of the growing season of trees has been globally earlier by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions in future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. An accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2016-10-01
The onset of the growing season of trees has been earlier by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species equatorward range limits leading to a delay or even impossibility to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for the model parameterization results in much more accurate prediction of the latter, with, however, a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore the strongest after 2050 in the southernmost regions. Our results point to the urgent need for massive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. PMID:27272707
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
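For reference, a classical third-order Runge-Kutta scheme of the kind the report analyzes can be written in a few lines. This is a generic textbook variant (Kutta's third-order method), not necessarily one of the five examples the study derives.

```python
def rk3_step(f, t, y, h):
    """One step of Kutta's third-order Runge-Kutta method
    for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h * (k1 + 4 * k2 + k3) / 6

def integrate(f, t0, y0, t1, n):
    """Integrate from t0 to t1 in n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y
```

The stiff-stability and memory trade-offs discussed in the report come from how the stage coefficients are chosen; all third-order explicit schemes share the update structure above.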
2015-01-01
Background Biclustering is a popular method for identifying under which experimental conditions biological signatures are co-expressed. However, the general biclustering problem is NP-hard, offering room to focus algorithms on specific biological tasks. We hypothesize that conditional co-regulation of genes is a key factor in determining cell phenotype and that accurately segregating conditions in biclusters will improve such predictions. Thus, we developed a bicluster sampled coherence metric (BSCM) for determining which conditions and signals should be included in a bicluster. Results Our BSCM calculates condition and cluster size specific p-values, and we incorporated these into the popular integrated biclustering algorithm cMonkey. We demonstrate that incorporation of our new algorithm significantly improves bicluster co-regulation scores (p-value = 0.009) and GO annotation scores (p-value = 0.004). Additionally, we used a bicluster-based signal to predict whether a given experimental condition will result in yeast peroxisome induction. Using the new algorithm, the classifier accuracy improves from 41.9% to 76.1% correct. Conclusions We demonstrate that the proposed BSCM helps determine which signals ought to be co-clustered, resulting in more accurately assigned bicluster membership. Furthermore, we show that BSCM can be extended to more accurately detect under which experimental conditions the genes are co-clustered. Features derived from this more accurate analysis of conditional regulation result in a dramatic improvement in the ability to predict a cellular phenotype in yeast. The latest cMonkey is available for download at https://github.com/baliga-lab/cmonkey2. The experimental data and source code featured in this paper are available at http://AitchisonLab.com/BSCM. BSCM has been incorporated in the official cMonkey release. PMID:25881257
Kieslich, Chris A.; Tamamis, Phanourios; Guzman, Yannis A.; Onel, Melis; Floudas, Christodoulos A.
2016-01-01
HIV-1 entry into host cells is mediated by interactions between the V3-loop of viral glycoprotein gp120 and chemokine receptor CCR5 or CXCR4, collectively known as HIV-1 coreceptors. Accurate genotypic prediction of coreceptor usage is of significant clinical interest and determination of the factors driving tropism has been the focus of extensive study. We have developed a method based on nonlinear support vector machines to elucidate the interacting residue pairs driving coreceptor usage and provide highly accurate coreceptor usage predictions. Our models utilize centroid-centroid interaction energies from computationally derived structures of the V3-loop:coreceptor complexes as primary features, while additional features based on established rules regarding V3-loop sequences are also investigated. We tested our method on 2455 V3-loop sequences of various lengths and subtypes, and produce a median area under the receiver operating characteristic curve of 0.977 based on 500 runs of 10-fold cross validation. Our study is the first to elucidate a small set of specific interacting residue pairs between the V3-loop and coreceptors capable of predicting coreceptor usage with high accuracy across major HIV-1 subtypes. The developed method has been implemented as a web tool named CRUSH, CoReceptor USage prediction for HIV-1, which is available at http://ares.tamu.edu/CRUSH/. PMID:26859389
Accurate similarity index based on activity and connectivity of node for link prediction
NASA Astrophysics Data System (ADS)
Li, Longjie; Qian, Lvjian; Wang, Xiaoping; Luo, Shishun; Chen, Xiaoyun
2015-05-01
Recent years have witnessed an increase in available network data; however, much of that data is incomplete. Link prediction, which can find the missing links of a network, plays an important role in the research and analysis of complex networks. Based on the assumption that two unconnected nodes which are highly similar are very likely to have an interaction, most of the existing algorithms solve the link prediction problem by computing nodes' similarities. The fundamental requirement of those algorithms is accurate and effective similarity indices. In this paper, we propose a new similarity index, namely similarity based on activity and connectivity (SAC), which performs link prediction more accurately. To compute the similarity between two nodes, this index employs the average activity of these two nodes in their common neighborhood and the connectivities between them and their common neighbors. The higher the average activity is and the stronger the connectivities are, the more similar the two nodes are. The proposed index not only commendably distinguishes the contributions of paths but also incorporates the influence of endpoints. Therefore, it can achieve a better predicting result. To verify the performance of SAC, we conduct experiments on 10 real-world networks. Experimental results demonstrate that SAC outperforms the compared baselines.
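The ingredients of such an index, common neighbours, endpoint activity, and connectivity, can be illustrated with a toy similarity score. This is a simplified stand-in of our own devising; the published SAC weighting is more refined.

```python
from math import log

def similarity(adj, x, y):
    """Toy common-neighbour similarity in the spirit of SAC:
    each shared neighbour z contributes the mean 'activity'
    (degree) of the endpoints x and y, damped by z's own
    degree (a crude proxy for connectivity).  `adj` maps each
    node to its set of neighbours."""
    common = adj[x] & adj[y]
    score = 0.0
    for z in common:
        activity = (len(adj[x]) + len(adj[y])) / 2
        score += activity / log(1 + len(adj[z]))
    return score
```

Node pairs with no common neighbours score zero, while pairs bridged by many low-degree neighbours score highest, the same qualitative behaviour as classic indices like Adamic-Adar.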
Doré, Bruce P; Meksin, Robert; Mather, Mara; Hirst, William; Ochsner, Kevin N
2016-06-01
In the aftermath of a national tragedy, important decisions are predicated on judgments of the emotional significance of the tragedy in the present and future. Research in affective forecasting has largely focused on ways in which people fail to make accurate predictions about the nature and duration of feelings experienced in the aftermath of an event. Here we ask a related but understudied question: can people forecast how they will feel in the future about a tragic event that has already occurred? We found that people were strikingly accurate when predicting how they would feel about the September 11 attacks over 1-, 2-, and 7-year prediction intervals. Although people slightly under- or overestimated their future feelings at times, they nonetheless showed high accuracy in forecasting (a) the overall intensity of their future negative emotion, and (b) the relative degree of different types of negative emotion (i.e., sadness, fear, or anger). Using a path model, we found that the relationship between forecasted and actual future emotion was partially mediated by current emotion and remembered emotion. These results extend theories of affective forecasting by showing that emotional responses to an event of ongoing national significance can be predicted with high accuracy, and by identifying current and remembered feelings as independent sources of this accuracy. PMID:27100309
NASA Astrophysics Data System (ADS)
Kuo, K. A.; Verbraken, H.; Degrande, G.; Lombaert, G.
2016-07-01
Along with the rapid expansion of urban rail networks comes the need for accurate predictions of railway induced vibration levels at grade and in buildings. Current computational methods for making predictions of railway induced ground vibration rely on simplifying modelling assumptions and require detailed parameter inputs, which lead to high levels of uncertainty. It is possible to mitigate against these issues using a combination of field measurements and state-of-the-art numerical methods, known as a hybrid model. In this paper, two hybrid models are developed, based on the use of separate source and propagation terms that are quantified using in situ measurements or modelling results. These models are implemented using term definitions proposed by the Federal Railroad Administration and assessed using the specific illustration of a surface railway. It is shown that the limitations of numerical and empirical methods can be addressed in a hybrid procedure without compromising prediction accuracy.
Towards more accurate wind and solar power prediction by improving NWP model physics
NASA Astrophysics Data System (ADS)
Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo
2014-05-01
The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market becomes possible and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during
AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)
A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...
Sengupta, Arkajyoti; Raghavachari, Krishnan
2014-10-14
Accurate modeling of the chemical reactions in many diverse areas such as combustion, photochemistry, or atmospheric chemistry strongly depends on the availability of thermochemical information of the radicals involved. However, accurate thermochemical investigations of radical systems using state-of-the-art composite methods have mostly been restricted to the study of hydrocarbon radicals of modest size. In an alternative approach, a systematic error-canceling thermochemical hierarchy of reaction schemes can be applied to yield accurate results for such systems. In this work, we have extended our connectivity-based hierarchy (CBH) method to the investigation of radical systems. We have calibrated our method using a test set of 30 medium sized radicals to evaluate their heats of formation. The CBH-rad30 test set contains radicals containing diverse functional groups as well as cyclic systems. We demonstrate that the sophisticated error-canceling isoatomic scheme (CBH-2) with modest levels of theory is adequate to provide heats of formation accurate to ∼1.5 kcal/mol. Finally, we predict heats of formation of 19 other large and medium sized radicals for which the accuracy of available heats of formation is less well-known. PMID:26588131
conSSert: Consensus SVM Model for Accurate Prediction of Ordered Secondary Structure.
Kieslich, Chris A; Smadbeck, James; Khoury, George A; Floudas, Christodoulos A
2016-03-28
Accurate prediction of protein secondary structure remains a crucial step in most approaches to the protein-folding problem, yet the prediction of ordered secondary structure, specifically beta-strands, remains a challenge. We developed a consensus secondary structure prediction method, conSSert, which is based on support vector machines (SVM) and provides exceptional accuracy for the prediction of beta-strands with QE accuracy of over 0.82 and a Q2-EH of 0.86. conSSert uses as input probabilities for the three types of secondary structure (helix, strand, and coil) that are predicted by four top performing methods: PSSpred, PSIPRED, SPINE-X, and RAPTOR. conSSert was trained/tested using 4261 protein chains from PDBSelect25, and 8632 chains from PISCES. Further validation was performed using targets from CASP9, CASP10, and CASP11. Our data suggest that poor performance in strand prediction is likely a result of training bias and not solely due to the nonlocal nature of beta-sheet contacts. conSSert is freely available for noncommercial use as a web service: http://ares.tamu.edu/conSSert/. PMID:26928531
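A minimal consensus step over per-residue three-state probabilities from several predictors might look as follows. This is only a voting-based sketch: conSSert itself trains an SVM on these inputs rather than averaging them.

```python
import numpy as np

def consensus(prob_sets):
    """Average per-residue 3-state probabilities (H, E, C)
    from several predictors and take the argmax per residue.
    `prob_sets` is a list of (n_residues, 3) arrays, one per
    component predictor."""
    avg = np.mean(prob_sets, axis=0)          # (n_residues, 3)
    states = "HEC"
    return "".join(states[i] for i in avg.argmax(axis=1))
```

Replacing the averaging with a trained classifier on the stacked probabilities is what distinguishes a learned consensus such as conSSert from plain soft voting.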
NASA Astrophysics Data System (ADS)
Bozinoski, Radoslav
Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Up until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers play in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 is presented and shows efficiencies of 90% and higher for processes of no less than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, the validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions
Planar Near-Field Phase Retrieval Using GPUs for Accurate THz Far-Field Prediction
NASA Astrophysics Data System (ADS)
Junkin, Gary
2013-04-01
With a view to using Phase Retrieval to accurately predict Terahertz antenna far-field from near-field intensity measurements, this paper reports on three fundamental advances that achieve very low algorithmic error penalties. The first is a new Gaussian beam analysis that provides accurate initial complex aperture estimates including defocus and astigmatic phase errors, based only on first and second moment calculations. The second is a powerful noise tolerant near-field Phase Retrieval algorithm that combines Anderson's Plane-to-Plane (PTP) with Fienup's Hybrid-Input-Output (HIO) and Successive Over-Relaxation (SOR) to achieve increased accuracy at reduced scan separations. The third advance employs teraflop Graphical Processing Units (GPUs) to achieve practically real time near-field phase retrieval and to obtain the optimum aperture constraint without any a priori information.
Danshita, Ippei; Polkovnikov, Anatoli
2010-09-01
We study the quantum dynamics of supercurrents of one-dimensional Bose gases in a ring optical lattice to verify instanton methods applied to coherent macroscopic quantum tunneling (MQT). We directly simulate the real-time quantum dynamics of supercurrents, where a coherent oscillation between two macroscopically distinct current states occurs due to MQT. The tunneling rate extracted from the coherent oscillation is compared with that given by the instanton method. We find that the instanton method is quantitatively accurate when the effective Planck's constant is sufficiently small. We also find phase slips associated with the oscillations.
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model, which might include perturbing forces such as the gravitational effect from multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present such a method for a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes
Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.
2004-12-01
We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
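The combination of intergenic distance with comparative-genomic evidence can be sketched as a naive-Bayes log-odds score over independent features. The likelihood functions below are caller-supplied stand-ins; the published method estimates these distributions from each genome itself rather than taking them as given.

```python
import math

def operon_log_odds(features, likelihoods, prior=0.5):
    """Naive-Bayes combination of independent evidence that two
    adjacent genes share an operon.  `features` maps feature
    names to observed values; `likelihoods` maps each feature
    name to a pair of functions returning
    P(value | same operon) and P(value | operon boundary).
    Returns a log-odds score; > 0 favours same-operon."""
    score = math.log(prior / (1 - prior))
    for name, value in features.items():
        p_operon, p_boundary = likelihoods[name]
        score += math.log(p_operon(value) / p_boundary(value))
    return score
```

Short intergenic distances raise the score, long ones lower it, and additional features (conserved adjacency, codon usage, and so on) simply contribute further log-likelihood-ratio terms.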
Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O. Anatole; Müller, Klaus -Robert; Tkatchenko, Alexandre
2015-06-04
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
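The Bag of Bonds vectorization can be sketched as follows: for every atom pair, compute the Coulomb-matrix entry Z_i Z_j / |r_i − r_j|, group the entries into "bags" by element pair, sort each bag, and zero-pad to a fixed per-bag length so that all molecules share one vector layout. The bag sizes and the water-like example below are illustrative choices, not the paper's dataset settings:

```python
import itertools
import numpy as np

def bag_of_bonds(charges, coords, bag_sizes):
    """Build a Bag-of-Bonds vector.

    charges   : list of nuclear charges Z_i
    coords    : list of 3D positions (same units throughout)
    bag_sizes : dict mapping a sorted element-pair tuple to the fixed
                length of that bag (chosen per dataset so every
                molecule fits).
    """
    bags = {key: [] for key in bag_sizes}
    for i, j in itertools.combinations(range(len(charges)), 2):
        key = tuple(sorted((charges[i], charges[j])))
        d = np.linalg.norm(np.asarray(coords[i], float) -
                           np.asarray(coords[j], float))
        bags[key].append(charges[i] * charges[j] / d)
    vec = []
    for key in sorted(bag_sizes):           # fixed bag order
        entries = sorted(bags[key], reverse=True)
        entries += [0.0] * (bag_sizes[key] - len(entries))
        vec.extend(entries)
    return np.array(vec)
```

The resulting fixed-length vectors can be fed to any standard regressor (e.g. kernel ridge regression) to learn energies across chemical space.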
NASA Astrophysics Data System (ADS)
Delahaye, Thibault; Rey, Michael; Tyuterev, Vladimir; Nikitin, Andrei V.; Szalay, Peter
2015-06-01
Hydrocarbons such as ethylene (C_2H_4) and methane (CH_4) are of considerable interest for the modeling of planetary atmospheres and other astrophysical applications. Knowledge of rovibrational transitions of hydrocarbons is of primary importance in many fields but remains a formidable challenge for theory and spectral analysis. Essentially two theoretical approaches for the computation and prediction of spectra exist. The first is based on empirically-fitted effective spectroscopic models. Several databases aim at collecting the corresponding data, but the information about the C_2H_4 spectrum present in these databases remains limited, with only some spectral ranges around 1000, 3000 and 6000 cm⁻¹ available. Another way of computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. Although they do not yet reach spectroscopic accuracy, they can provide reliable predictions that are quantitatively accurate with respect to the precision of available observations and as complete as possible. This requires extensive first-principles quantum mechanical calculations essentially based on two necessary ingredients: (i) accurate intramolecular potential energy surface and dipole moment surface components, and (ii) efficient computational methods to achieve good numerical convergence. We report predictions of vibrational and rovibrational energy levels of C_2H_4 using our new ground state potential energy surface obtained from extended ab initio calculations. Additionally, we introduce line position and line intensity predictions based on a new dipole moment surface for ethylene. These results are compared with previous works on ethylene and its isotopologues.
2014-01-01
Predicting the binding affinities of large sets of diverse molecules against a range of macromolecular targets is an extremely challenging task. The scoring functions that attempt such computational prediction are essential for exploiting and analyzing the outputs of docking, which is in turn an important tool in problems such as structure-based drug design. Classical scoring functions assume a predetermined theory-inspired functional form for the relationship between the variables that describe an experimentally determined or modeled structure of a protein–ligand complex and its binding affinity. The inherent problem of this approach lies in the difficulty of explicitly modeling the various contributions of intermolecular interactions to binding affinity. New scoring functions based on machine-learning regression models, which are able to exploit effectively much larger amounts of experimental data and circumvent the need for a predetermined functional form, have already been shown to outperform a broad range of state-of-the-art scoring functions in a widely used benchmark. Here, we investigate the impact of the chemical description of the complex on the predictive power of the resulting scoring function using a systematic battery of numerical experiments. These experiments resulted in the most accurate scoring function to date on the benchmark. Strikingly, we also found that a more precise chemical description of the protein–ligand complex does not generally lead to a more accurate prediction of binding affinity. We discuss four factors that may contribute to this result: modeling assumptions, codependence of representation and regression, data restricted to the bound state, and conformational heterogeneity in data. PMID:24528282
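A common chemical description in this family of machine-learning scoring functions counts protein–ligand element-pair contacts within a distance cutoff (the descriptor behind RF-Score-type approaches); a regression model such as a random forest is then trained on these counts. A minimal sketch of the featurization, with a truncated element set and an illustrative cutoff:

```python
import numpy as np

def contact_features(prot_elems, prot_xyz, lig_elems, lig_xyz, cutoff=12.0):
    """Count protein-ligand element-pair contacts within `cutoff` angstroms.

    Returns a flattened (n_elements x n_elements) count matrix; rows index
    protein elements, columns index ligand elements."""
    elements = ["C", "N", "O", "S"]  # truncated set, for illustration only
    counts = np.zeros((len(elements), len(elements)), dtype=int)
    for pe, p in zip(prot_elems, prot_xyz):
        for le, l in zip(lig_elems, lig_xyz):
            if pe in elements and le in elements:
                d = np.linalg.norm(np.asarray(p, float) - np.asarray(l, float))
                if d <= cutoff:
                    counts[elements.index(pe), elements.index(le)] += 1
    return counts.ravel()
```

Each complex in a training set becomes one such count vector paired with its measured binding affinity, which is exactly the setting where the representation/regression codependence discussed above arises.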
NASA Technical Reports Server (NTRS)
Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris
2011-01-01
A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared against and validated with a variety of experimental data sets, such as UH-60A data, DNW test data and HART II test data.
NASA Astrophysics Data System (ADS)
Jiang, Yongfei; Zhang, Jun; Zhao, Wanhua
2015-05-01
Hemodynamics altered by stent implantation is well known to be closely related to in-stent restenosis. Computational fluid dynamics (CFD) methods have been used to investigate the hemodynamics in stented arteries in detail and to help analyze the performance of stents. In this study, blood models with Newtonian or non-Newtonian properties were numerically investigated for their hemodynamics under steady or pulsatile inlet conditions, employing finite-volume-based CFD. The results showed that the blood model with non-Newtonian property decreased the area of low wall shear stress (WSS) compared with the Newtonian blood model, and that the magnitude of WSS varied with the magnitude and waveform of the inlet velocity. The study indicates that the inlet conditions and blood models are both important for accurately predicting the hemodynamics. This will be beneficial for estimating the performance of stents and will also help clinicians select the proper stents for their patients.
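A widely used non-Newtonian blood model in such CFD studies is the Carreau model, in which the effective viscosity decreases with shear rate γ̇ between a zero-shear and an infinite-shear limit. The parameter values below are commonly quoted for blood but are given here only as illustrative assumptions, not the values used in this study:

```python
def carreau_viscosity(gamma_dot, mu0=0.056, mu_inf=0.00345,
                      lam=3.313, n=0.3568):
    """Carreau effective viscosity (Pa*s) at shear rate gamma_dot (1/s).

    mu0     : zero-shear viscosity
    mu_inf  : infinite-shear viscosity
    lam     : relaxation time (s)
    n       : power-law index (n < 1 gives shear-thinning behavior)
    """
    return mu_inf + (mu0 - mu_inf) * (
        1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)
```

In a finite-volume solver this function replaces the constant Newtonian viscosity cell by cell, which is why the predicted low-WSS regions differ between the two blood models.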
SIFTER search: a web server for accurate phylogeny-based protein function prediction.
Sahraeian, Sayed M; Luo, Kevin R; Brenner, Steven E
2015-07-01
We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded. PMID:25979264
Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias
2015-01-01
Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions. PMID:25794106
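NDVI itself is a simple band ratio computed from red and near-infrared surface reflectance; a minimal sketch (the band inputs are generic reflectances, not tied to any specific satellite product):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Ranges from -1 to 1; dense green vegetation typically scores high,
    bare soil and water score low."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)
```

Time series of such values over a region, together with meteorological covariates, form the predictor set for a frequency model like the one described above.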
Ihm, Yungok; Cooper, Valentino R; Gallego, Nidia C; Contescu, Cristian I; Morris, James R
2014-01-01
We demonstrate a successful, efficient framework for predicting gas adsorption properties in real materials based on first-principles calculations, with a specific comparison of experiment and theory for methane adsorption in activated carbons. These carbon materials have different pore size distributions, leading to a variety of uptake characteristics. Utilizing these distributions, we accurately predict experimental uptakes and heats of adsorption without empirical potentials or lengthy simulations. We demonstrate that materials with smaller pores have higher heats of adsorption, leading to a higher gas density in these pores. This pore-size dependence must be accounted for, in order to predict and understand the adsorption behavior. The theoretical approach combines: (1) ab initio calculations with a van der Waals density functional to determine adsorbent-adsorbate interactions, and (2) a thermodynamic method that predicts equilibrium adsorption densities by directly incorporating the calculated potential energy surface in a slit pore model. The predicted uptake at P=20 bar and T=298 K is in excellent agreement for all five activated carbon materials used. This approach uses only the pore-size distribution as an input, with no fitting parameters or empirical adsorbent-adsorbate interactions, and thus can be easily applied to other adsorbent-adsorbate combinations.
NASA Astrophysics Data System (ADS)
Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat
2015-05-01
Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance, aimed at improving reliability and decreasing machine breakdowns and maintenance costs. Bearings are among the most important components in industry that need to be monitored, and their RUL should be predicted. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL by Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of a Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. The WD is used only in the training phase, to fit the measurements and to avoid areas of fluctuation in the time domain. The SFAM training process takes as input the fitted measurements at the present and previous inspection time points, whereas the SFAM testing process uses the real measurements at the present and previous inspections. Thanks to its fuzzy learning process, SFAM has a strong ability and good performance in learning nonlinear time series. As output, seven classes are defined: a healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can also be applied to the prognostics of various other mechanical assets.
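The Weibull-fitting step can be sketched with median-rank regression: plot ln(−ln(1−F)) against ln t for the empirical CDF of the data and recover the shape k and scale λ from the regression line. The data in the test are synthetic, not bearing measurements:

```python
import numpy as np

def fit_weibull(times):
    """Fit a 2-parameter Weibull distribution by regression on the
    linearized CDF:  ln(-ln(1 - F)) = k*ln(t) - k*ln(lam).

    Returns (k, lam): shape and scale parameters."""
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.5) / n   # median-rank plotting positions
    y = np.log(-np.log(1.0 - F))
    k, intercept = np.polyfit(np.log(t), y, 1)
    lam = np.exp(-intercept / k)
    return k, lam
```

In a prognostics pipeline like the one above, the fitted curve replaces the raw, fluctuating condition measurements during training, giving the network a smooth degradation trend to learn from.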
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.
2006-01-01
Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.
Differential-equation-based representation of truncation errors for accurate numerical simulation
NASA Astrophysics Data System (ADS)
MacKinnon, Robert J.; Johnson, Richard W.
1991-09-01
High-order compact finite difference schemes for 2D convection-diffusion-type differential equations with constant and variable convection coefficients are derived. The governing equations are employed to represent leading truncation terms, including cross-derivatives, making the overall O(h⁴) schemes conform to a 3×3 stencil. It is shown that the two-dimensional constant-coefficient scheme collapses to the optimal scheme for the one-dimensional case, wherein the finite difference equation yields nodally exact results. The two-dimensional schemes are tested against standard model problems, including a Navier-Stokes application. Results show that the two schemes are generally more accurate, on comparable grids, than O(h²) centered differencing and commonly used O(h) and O(h³) upwinding schemes.
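The compact-scheme idea can be illustrated on the 1D constant-coefficient model problem u″ = p u′: since exact solutions satisfy u‴ = p²u′ and u⁗ = p³u′, the leading truncation terms of the central differences can be folded back through the governing equation, yielding an O(h⁴) scheme on the same 3-point stencil with the modified coefficient p_eff = p(1 − (ph)²/12). The value of p and the grid sizes below are illustrative, not taken from the paper:

```python
import numpy as np

def solve_conv_diff(n, p, compact=True):
    """Solve u'' = p u' on [0,1] with u(0)=0, u(1)=1 on a 3-point stencil.

    compact=True applies the O(h^4) correction p_eff = p*(1-(p*h)^2/12),
    obtained by substituting u''' = p^2 u' and u'''' = p^3 u' into the
    truncation terms of the central-difference operators.
    Returns the max-norm error against the exact solution."""
    h = 1.0 / n
    p_eff = p * (1.0 - (p * h) ** 2 / 12.0) if compact else p
    m = n - 1                               # interior unknowns
    lo = 1.0 / h**2 + p_eff / (2 * h)       # coefficient of u_{i-1}
    di = -2.0 / h**2                        # coefficient of u_i
    up = 1.0 / h**2 - p_eff / (2 * h)       # coefficient of u_{i+1}
    A = np.zeros((m, m))
    b = np.zeros(m)
    for i in range(m):
        A[i, i] = di
        if i > 0:
            A[i, i - 1] = lo
        if i < m - 1:
            A[i, i + 1] = up
    b[-1] -= up * 1.0                       # boundary value u(1) = 1
    u = np.linalg.solve(A, b)
    x = np.linspace(h, 1 - h, m)
    exact = (np.exp(p * x) - 1.0) / (np.exp(p) - 1.0)
    return np.max(np.abs(u - exact))
```

On the same grid the corrected scheme is markedly more accurate than plain central differencing, and halving h shrinks its error by roughly 2⁴, confirming fourth-order behavior.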
Towards more accurate numerical modeling of impedance based high frequency harmonic vibration
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2014-03-01
The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.
Geng, Hao; Jiang, Fan; Wu, Yun-Dong
2016-05-19
Cyclic peptides (CPs) are promising candidates for drugs, chemical biology tools, and self-assembling nanomaterials. However, the development of reliable and accurate computational methods for their structure prediction has been challenging. Here, 20 all-trans CPs of 5-12 residues selected from the Cambridge Structure Database have been simulated using replica-exchange molecular dynamics with four different force fields. Our recently developed residue-specific force fields RSFF1 and RSFF2 can correctly identify the crystal-like conformations of more than half of the CPs as the most populated conformation. RSFF2 performs the best: it consistently predicts the crystal structures of 17 out of 20 CPs with RMSD < 1.1 Å. We also compared the backbone (ϕ, ψ) sampling of residues in CPs with those in short linear peptides and in globular proteins. In general, unlike linear peptides, CPs have local conformational free energies and entropies quite similar to globular proteins. PMID:27128113
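The RMSD comparison against crystal structures relies on optimal superposition, typically computed with the Kabsch algorithm: center both conformations on their centroids, find the best rotation by SVD, then take the root-mean-square deviation of the aligned coordinates. A minimal sketch:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two conformations (N x 3 arrays of matching atoms)
    after optimal superposition via the Kabsch algorithm."""
    P = P - P.mean(axis=0)                 # remove translation
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)      # covariance SVD
    d = np.sign(np.linalg.det(V @ Wt))     # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = V @ D @ Wt                         # optimal rotation
    diff = P @ R - Q
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
```

A rotated and translated copy of a structure gives RMSD ≈ 0, while any genuine conformational difference survives the alignment.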
Predicting ICME Magnetic Fields with a Numerical Flux Rope Model
NASA Astrophysics Data System (ADS)
Manchester, W.; van der Holst, B.; Sokolov, I.
2014-12-01
Coronal mass ejections (CMEs) are a dramatic manifestation of solar activity that release vast amounts of plasma into the heliosphere; they affect the interplanetary medium and planetary atmospheres and are the major driver of space weather. CMEs occur with the formation and expulsion of large-scale flux ropes from the solar corona, which are routinely observed in interplanetary space. Simulating and predicting the structure and dynamics of these ICME magnetic fields is essential to the progress of heliospheric science and space weather prediction. We combine observations of CME events made with different observing techniques to develop a numerical model capable of predicting the magnetic field of interplanetary coronal mass ejections (ICMEs). Photospheric magnetic field measurements from SOHO/MDI and SDO/HMI are used to specify a coronal magnetic flux rope that drives the CMEs. We examine halo CME events that produced clearly observed magnetic clouds at Earth and present our model predictions of these events, with an emphasis placed on the z component of the magnetic field. Comparison of the MHD model predictions with coronagraph observations and in-situ data allows us to robustly determine the parameters that define the initial state of the driving flux rope, thus providing a predictive model.
Accurate prediction of helix interactions and residue contacts in membrane proteins.
Hönigschmid, Peter; Frishman, Dmitrij
2016-04-01
Accurate prediction of intra-molecular interactions from amino acid sequence is an important prerequisite for obtaining high-quality protein models. Over recent years, remarkable progress in this area has been achieved through the application of novel co-variation algorithms, which eliminate transitive evolutionary connections between residues. In this work we present a new contact prediction method for α-helical transmembrane proteins, MemConP, in which evolutionary couplings are combined with a machine learning approach. MemConP achieves a substantially improved accuracy (precision: 56.0%, recall: 17.5%, MCC: 0.288) compared to the use of either machine learning or co-evolution methods alone. The method also achieves 91.4% precision, 42.1% recall and an MCC of 0.490 in predicting helix-helix interactions based on predicted contacts. The approach was trained and rigorously benchmarked by cross-validation and independent testing on up-to-date non-redundant datasets of 90 and 30 experimental three-dimensional structures, respectively. MemConP is a standalone tool that can be downloaded together with the associated training data from http://webclu.bio.wzw.tum.de/MemConP. PMID:26851352
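The reported metrics follow the standard confusion-matrix definitions; for reference, a short sketch computing precision, recall, and the Matthews correlation coefficient from TP/FP/TN/FN counts:

```python
import math

def precision_recall_mcc(tp, fp, tn, fn):
    """Precision, recall, and Matthews correlation coefficient (MCC)
    from confusion-matrix counts. MCC ranges from -1 to 1, with 1 for
    perfect prediction and 0 for random guessing."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom
    return precision, recall, mcc
```

MCC is the headline metric here because contact maps are heavily imbalanced (far more non-contacts than contacts), a setting where precision alone can be misleading.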
Base-resolution methylation patterns accurately predict transcription factor bindings in vivo
Xu, Tianlei; Li, Ben; Zhao, Meng; Szulwach, Keith E.; Street, R. Craig; Lin, Li; Yao, Bing; Zhang, Feiran; Jin, Peng; Wu, Hao; Qin, Zhaohui S.
2015-01-01
Detecting in vivo transcription factor (TF) binding is important for understanding gene regulatory circuitries. ChIP-seq is a powerful technique to empirically define TF binding in vivo. However, the multitude of distinct TFs makes genome-wide profiling for them all labor-intensive and costly. Algorithms for in silico prediction of TF binding have been developed, based mostly on histone modification or DNase I hypersensitivity data in conjunction with DNA motif and other genomic features. However, technical limitations of these methods prevent them from being applied broadly, especially in clinical settings. We conducted a comprehensive survey involving multiple cell lines, TFs, and methylation types and found that there are intimate relationships between TF binding and methylation level changes around the binding sites. Exploiting the connection between DNA methylation and TF binding, we proposed a novel supervised learning approach to predict TF–DNA interaction using data from base-resolution whole-genome methylation sequencing experiments. We devised beta-binomial models to characterize methylation data around TF binding sites and the background. Along with other static genomic features, we adopted a random forest framework to predict TF–DNA interaction. After conducting comprehensive tests, we saw that the proposed method accurately predicts TF binding and performs favorably versus competing methods. PMID:25722376
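A beta-binomial model for methylation counts (k methylated reads out of n total at a site) can be sketched through its log-pmf, written with log-gamma functions for numerical stability. The shape parameters α and β below are generic, not fitted values from the paper:

```python
from math import lgamma, exp

def log_betabinom_pmf(k, n, a, b):
    """log P(K = k) for the beta-binomial distribution:
    log C(n, k) + log B(k + a, n - k + b) - log B(a, b),
    where B is the Beta function, computed via lgamma."""
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

    def log_beta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)

    return log_choose + log_beta(k + a, n - k + b) - log_beta(a, b)
```

Fitting one (α, β) pair to windows around binding sites and another to background windows gives the class-conditional likelihoods that, together with other genomic features, feed the random forest classifier described above.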
NMRDSP: an accurate prediction of protein shape strings from NMR chemical shifts and sequence data.
Mao, Wusong; Cong, Peisheng; Wang, Zhiheng; Lu, Longjian; Zhu, Zhongliang; Li, Tonghua
2013-01-01
A shape string is a structural sequence and an extremely important representation of protein backbone conformations. Nuclear magnetic resonance chemical shifts correlate strongly with the local protein structure and are exploited to predict protein structures in conjunction with computational approaches. Here we demonstrate a novel approach, NMRDSP, which can accurately predict the protein shape string based on nuclear magnetic resonance chemical shifts and structural profiles obtained from sequence data. NMRDSP uses six chemical shifts (HA, H, N, CA, CB and C) and eight elements of structure profiles as features, a non-redundant set (1,003 entries) as the training set, and a conditional random field as the classification algorithm. For an independent testing set (203 entries), we achieved an accuracy of 75.8% for S8 (the eight-state accuracy) and 87.8% for S3 (the three-state accuracy). This is higher than using only chemical shifts or only sequence data, and confirms that the chemical shift and the structure profile are significant features for shape string prediction and that their combination prominently improves the accuracy of the predictor. We have constructed the NMRDSP web server and believe it could be employed to provide a solid platform to predict other protein structures and functions. The NMRDSP web server is freely available at http://cal.tongji.edu.cn/NMRDSP/index.jsp. PMID:24376713
Kottmann, Jakob S; Höfener, Sebastian; Bischoff, Florian A
2015-12-21
In the present work, we report an efficient implementation of configuration interaction singles (CIS) excitation energies and oscillator strengths using the multi-resolution analysis (MRA) framework to address the basis-set convergence of excited-state computations. In MRA (ground-state) orbitals, excited states are constructed adaptively, guaranteeing an overall precision. Thus not only valence states but, in particular, also low-lying Rydberg states can be computed with consistent quality at the basis-set limit a priori, without special treatments, as demonstrated using a small test set of organic molecules, basis sets, and states. We find that the new implementation of MRA-CIS excitation energy calculations is competitive with conventional LCAO calculations when the basis-set limit of medium-sized molecules is sought, which requires large, diffuse basis sets. This becomes particularly important if calculations of molecular electronic absorption spectra that are accurate with respect to basis-set incompleteness are required, in which both valence and Rydberg excitations can contribute to the molecule's UV/VIS fingerprint. PMID:25913482
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
NASA Astrophysics Data System (ADS)
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the large number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the use of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of model bias and the reduction of error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work stems from the solid mathematical background adopted, making use of information geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
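A common way Kalman filters are used in forecast post-processing is to track a slowly varying systematic bias in the model output. The following is a minimal scalar sketch of that idea, not the specific filter variants studied in this work; the noise variances and the synthetic error series are illustrative assumptions.

```python
import numpy as np

def kalman_bias_filter(forecast_errors, q=0.001, r=1.0):
    """Sequentially estimate a systematic forecast bias.

    forecast_errors : sequence of (forecast - observation) samples
    q : assumed process-noise variance (how fast the bias may drift)
    r : assumed observation-noise variance
    """
    bias, p = 0.0, 1.0            # initial bias estimate and its variance
    history = []
    for err in forecast_errors:
        p = p + q                 # predict: bias persists, uncertainty grows
        k = p / (p + r)           # Kalman gain
        bias = bias + k * (err - bias)  # update with the new error sample
        p = (1.0 - k) * p
        history.append(bias)
    return np.array(history)

# Synthetic demo: a model with a constant +2.0 K warm bias plus noise.
rng = np.random.default_rng(0)
errors = 2.0 + rng.normal(0.0, 1.0, 500)
est = kalman_bias_filter(errors)
print(est[-1])   # settles near the true bias of 2.0
```

Subtracting the running bias estimate from each new forecast is the elementary form of the model-bias elimination mentioned above.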
Maturity of Operational Numerical Weather Prediction: Medium Range.
NASA Astrophysics Data System (ADS)
Kalnay, Eugenia; Lord, Stephen J.; McPherson, Ronald D.
1998-12-01
In 1939 Rossby demonstrated the usefulness of the linearized perturbation of the equations of motion for weather prediction and thus made possible the first successful numerical forecasts of the weather by Charney et al. In 1951 Charney wrote a paper on the science of numerical weather prediction (NWP), in which he predicted with remarkable vision how NWP would evolve until the present. In the 1960s Lorenz discovered that the chaotic nature of the atmosphere imposes a finite limit of about two weeks on weather predictability. At the time this fundamental discovery was "only of academic interest" and not really relevant to operational weather forecasting, since the accuracy of even a 2-day forecast was then rather poor. Since then, however, computer-based forecasts have improved so much that Lorenz's limit of predictability is starting to become attainable in practice, especially with ensemble forecasting, and the predictability of longer-lasting phenomena such as El Niño is beginning to be successfully exploited. The skill of operational weather forecasts has at least doubled over the last two decades. This improvement has taken place relatively steadily, driven by a large number of scientific and computational developments, especially in the area of NWP. It has taken place in all the operational NWP centers, as friendly competition and information sharing make scientific improvements take place faster than they would in a single center. Because the improvements have occurred steadily, rather than suddenly, the overall increase in forecast skill due to NWP has not been clearly recognized by the media and the public despite the impact that improved forecasts have on the national economy and on the lives of every American. In this paper the authors review several measures of operational forecast skill that quantify improvements in NWP at the National Centers for Environmental Prediction (NCEP, formerly the National Meteorological Center) of the National Weather Service.
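One standard measure of forecast skill of the kind reviewed above is the anomaly correlation coefficient (ACC), which compares forecast and verifying anomalies relative to climatology. The synthetic fields below are assumptions for illustration, not NCEP data:

```python
import numpy as np

def anomaly_correlation(forecast, verification, climatology):
    """Anomaly correlation coefficient (ACC), a standard NWP skill score.
    Values near 1 indicate a skillful forecast; about 0.6 is often taken
    as the threshold of useful medium-range skill."""
    fa = forecast - climatology        # forecast anomaly
    va = verification - climatology    # verifying (analysis) anomaly
    return np.sum(fa * va) / np.sqrt(np.sum(fa**2) * np.sum(va**2))

rng = np.random.default_rng(1)
clim = np.full(100, 280.0)                  # climatological field (K)
truth = clim + rng.normal(0, 5, 100)        # verifying analysis
good = truth + rng.normal(0, 1, 100)        # forecast with small errors
poor = clim + rng.normal(0, 5, 100)         # no skill beyond climatology
print(anomaly_correlation(good, truth, clim))
print(anomaly_correlation(poor, truth, clim))
```

The skillful forecast scores close to 1, while the climatology-plus-noise forecast scores near 0, which is what makes the ACC useful for tracking long-term improvements in skill.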
The use of experimental bending tests for a more accurate numerical description of the TBC damage process
NASA Astrophysics Data System (ADS)
Sadowski, T.; Golewski, P.
2016-04-01
Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30,000 rpm), which causes tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending under various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. These results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied to the TBC layer, which allows elements to be removed once a failure criterion is reached. Surface-based cohesive behavior was used to model the delamination that may occur at the boundary between the bond coat and the top coat.
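For a rectangular beam such as the 50×10×2 mm samples above, the peak stress in three-point bending follows the classical beam formula σ = 3FL/(2bh²). The sketch below uses an assumed support span and load purely for illustration; neither value is stated in the abstract.

```python
def three_point_bending_stress(force, span, width, thickness):
    """Maximum tensile stress at the outer fibre of a rectangular beam
    in three-point bending: sigma = 3 F L / (2 b h^2)."""
    return 3.0 * force * span / (2.0 * width * thickness**2)

# Sample cross-section from the abstract (10 x 2 mm); the 40 mm span
# and 100 N load are hypothetical values for illustration only.
F = 100.0    # N
L = 40e-3    # m (assumed support spacing)
b = 10e-3    # m (sample width)
h = 2e-3     # m (sample thickness)
sigma = three_point_bending_stress(F, L, b, h)
print(f"{sigma / 1e6:.0f} MPa")  # 150 MPa
```

Comparing such analytically estimated outer-fibre stresses with the loads at which cracks appear under the microscope is one simple way to cross-check a calibrated finite element model.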
Theoretical and numerical predictions of hypervelocity impact-generated plasma
Li, Jianqiao; Song, Weidong; Ning, Jianguo
2014-08-15
Hypervelocity impact-generated plasmas (HVIGPs) in a thermodynamic non-equilibrium state were theoretically analyzed, and a physical model was presented to explore the relationship between the plasma ionization degree and the internal energy of the system through a group of equations comprising a chemical reaction equilibrium equation, a chemical reaction rate equation, and an energy conservation equation. A series of AUTODYN 3D (a widely used dynamic numerical simulation software developed by Century Dynamics, Inc.) simulations of hypervelocity impacts of an Al projectile on targets at different incident angles were performed. The internal energy and material density obtained from the numerical simulations were then used to calculate the ionization degree and the electron temperature. Based on a self-developed 2D smoothed particle hydrodynamics (SPH) code and the theoretical model, the plasmas generated by 6 hypervelocity impacts were directly simulated and their total charges calculated. The numerical results are in good agreement with the experimental results as well as with empirical formulas, demonstrating that the theoretical model is supported by the AUTODYN 3D and self-developed 2D SPH simulations and is applicable to predicting HVIGPs. The study is of significance for astrophysics research and spacecraft safety.
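The equilibrium ionization degree of such a plasma is commonly estimated from a Saha-type relation linking temperature, density, and ionization energy. The following is a generic single-ionization sketch, not the paper's coupled equation system; the aluminium vapour density used is an assumed illustrative value.

```python
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
ME = 9.1093837015e-31  # electron mass, kg
H  = 6.62607015e-34    # Planck constant, J s
EV = 1.602176634e-19   # electron volt, J

def saha_ionization_degree(T, n_total, chi_ev, g_ratio=1.0):
    """Single-ionization degree alpha from the Saha equation.
    With n_e = n_i = alpha*n and n_0 = (1-alpha)*n, the Saha relation
    alpha^2 n / (1 - alpha) = S(T) is solved as a quadratic in alpha."""
    S = 2.0 * g_ratio * (2.0 * math.pi * ME * KB * T / H**2) ** 1.5 \
        * math.exp(-chi_ev * EV / (KB * T))
    return (-S + math.sqrt(S * S + 4.0 * n_total * S)) / (2.0 * n_total)

# Aluminium vapour (first ionization energy ~5.99 eV) at an assumed
# post-impact number density of 1e26 m^-3; values are illustrative.
a_cold = saha_ionization_degree(5e3, 1e26, 5.99)
a_hot = saha_ionization_degree(2e4, 1e26, 5.99)
print(a_cold, a_hot)   # ionization degree rises steeply with temperature
```

The steep temperature dependence is why the internal energy extracted from the hydrocode is the key input for estimating the total charge of the impact plasma.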
Evaluating the Impact of Aerosols on Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Freitas, Saulo; Silva, Arlindo; Benedetti, Angela; Grell, Georg; WGNE Members; Zarzur, Mauricio
2015-04-01
The Working Group on Numerical Experimentation (WMO, http://www.wmo.int/pages/about/sec/rescrosscut/resdept_wgne.html) has organized an exercise to evaluate the impact of aerosols on NWP. This exercise involves regional and global models currently used for weather forecasting by operational centers worldwide and aims at addressing the following questions: a) How important are aerosols for predicting the physical system (NWP, seasonal, climate) as distinct from predicting the aerosols themselves? b) How important is atmospheric model quality for air quality forecasting? c) What are the current capabilities of NWP models to simulate aerosol impacts on weather prediction? Toward this goal we have selected 3 strong or persistent aerosol pollution events worldwide that could be fairly represented in current NWP models and that allowed for an evaluation of the aerosol impact on weather prediction. The selected events include a strong dust storm that blew off the coast of Libya and over the Mediterranean, an extremely severe episode of air pollution in Beijing and surrounding areas, and an extreme case of biomass burning smoke in Brazil. The experimental design calls for simulations with and without explicitly accounting for aerosol feedbacks in the cloud and radiation parameterizations. In this presentation we summarize the results of this study, focusing on the evaluation of model performance in terms of its ability to faithfully simulate aerosol optical depth, and on the assessment of the aerosol impact on predictions of near-surface wind, temperature, humidity, rainfall and the surface energy budget.
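Evaluating a model's ability to simulate aerosol optical depth (AOD) typically comes down to a few standard verification statistics against observations. A minimal sketch, with made-up AOD values standing in for real forecast/observation pairs:

```python
import numpy as np

def verify(forecast, observed):
    """Standard verification statistics (bias, RMSE, correlation) used
    to assess AOD forecasts against observations such as sun-photometer
    retrievals. Inputs are matched forecast/observation pairs."""
    fc = np.asarray(forecast, dtype=float)
    ob = np.asarray(observed, dtype=float)
    diff = fc - ob
    return {"bias": diff.mean(),
            "rmse": np.sqrt((diff**2).mean()),
            "corr": np.corrcoef(fc, ob)[0, 1]}

# Illustrative AOD pairs (hypothetical numbers, not study data).
obs = np.array([0.12, 0.35, 0.80, 1.40, 0.55])
fc  = np.array([0.10, 0.40, 0.70, 1.60, 0.50])
stats = verify(fc, obs)
print(stats)
```

The same three numbers computed for near-surface wind, temperature, or rainfall give the with-aerosol versus without-aerosol comparison described in the experimental design.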
An Improved Numerical Integration Method for Springback Predictions
NASA Astrophysics Data System (ADS)
Ibrahim, R.; Smith, L. M.; Golovashchenko, Sergey F.
2011-08-01
In this investigation, the focus is on the springback of steel sheets in V-die air bending. A numerical integration algorithm presented rigorously in [1] for predicting springback in air bending was fully replicated and confirmed. Alterations and extensions to the algorithm are proposed here. The altered approach used to solve the moment equation numerically resulted in springback values much closer to the trend shown by the experimental data. Although the investigation was extended to use a more realistic work-hardening model, the differences between the springback values obtained with the two hardening models were almost negligible. The algorithm was also extended to apply to thin sheets down to 0.8 mm. Results show that this extension is possible, as verified by FEA and other published experiments on TRIP steel sheets.
Numerical geology: Predicting depositional and diagenetic facies from wireline logs using core data
Altunbay, M.; Barr, D.C.; Kennaird, A.F.; Manning, D.K.
1994-12-31
To exploit a reservoir, the geological model must accurately define the depositional environment and the effects of diagenesis on the pore network. Current methods for establishing the geological model of a field usually require subjective, qualitative interpretation of geological and petrophysical data. A method, Numerical Geology, has been developed that greatly reduces the subjectivity in geological modeling efforts. This method also allows geological attributes to be quantified and predicted. Numerical Geology involves the integration of petrophysical, petrological and geological data with wireline log responses. The geology of "hydraulic or flow units" (intervals with similar hydraulic characteristics) is described using conventional sedimentology, petrography and core analysis data. These data are translated into a matrix of geological indices classified according to the hydraulic unit profile of the section. Hydraulic units are then predicted for uncored sections based on their unique log signatures, which are obtained from cored sections. By combining the predicted hydraulic unit profile with the matrix of geological indices for each flow unit, profiles of geological attributes are derived. The prediction reliability of hydraulic units is calculated based on the uniqueness of the log signatures for each flow unit. Therefore, a confidence level can be assigned to estimated profiles of geological attributes. This eliminates much of the subjectivity from future geological interpretations and predictions.
A factored implicit scheme for numerical weather prediction
NASA Technical Reports Server (NTRS)
Augenbaum, J. M.; Cohn, S. E.; Isaacson, E.; Dee, D. P.; Marchesin, D.
1985-01-01
An implicit method is proposed to factor the nonlinear partial differential equations governing fast and slow modes of dynamic motion in numerical weather prediction schemes. The method permits separate factorization of the slow and fast modes of the implicit operator. A simple two-dimensional version of the system of three-dimensional equations governing atmospheric dynamics over shallow water was analyzed to assess the accuracy of the proposed method. It is shown that the method has a small error which is comparable to other discretization errors in the overall scheme.
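The factorization idea can be illustrated on a toy linear system: replacing the single implicit solve for the combined operator with two simpler sequential solves introduces only a second-order splitting error. The random matrices below are an illustration of the principle, not the paper's shallow-water discretization.

```python
import numpy as np

# Toy linear system du/dt = (A_fast + A_slow) u. A backward-Euler step
# requires inverting (I - dt*(A_f + A_s)); the factored scheme inverts
# the two simpler operators in sequence, since
#   (I - dt*A_f)(I - dt*A_s) = I - dt*(A_f + A_s) + dt^2 * A_f @ A_s,
# i.e. the schemes agree up to an O(dt^2) splitting error.
rng = np.random.default_rng(2)
n = 4
A_f = -10.0 * np.eye(n)                      # stiff "fast" modes
A_s = 0.1 * rng.standard_normal((n, n))      # gentle "slow" modes
I = np.eye(n)
dt = 0.01
u0 = rng.standard_normal(n)

# Unfactored implicit step: solve the full coupled system once.
u_full = np.linalg.solve(I - dt * (A_f + A_s), u0)

# Factored implicit step: two cheaper solves, one per mode family.
u_fact = np.linalg.solve(I - dt * A_f,
                         np.linalg.solve(I - dt * A_s, u0))

err = np.max(np.abs(u_full - u_fact))
print(err)   # small: comparable to other O(dt^2) discretization errors
```

The payoff in a real model is that each factor (fast gravity-wave modes, slow advective modes) can be inverted with a method suited to its own structure.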
Intermolecular potentials and the accurate prediction of the thermodynamic properties of water.
Shvab, I; Sadus, Richard J
2013-11-21
The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm³ for a wide range of temperatures (298-650 K) and pressures (0.1-700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K. PMID:24320337
Intermolecular potentials and the accurate prediction of the thermodynamic properties of water
NASA Astrophysics Data System (ADS)
Shvab, I.; Sadus, Richard J.
2013-11-01
The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm³ for a wide range of temperatures (298-650 K) and pressures (0.1-700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.
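One of the listed properties, the isothermal compressibility, is typically obtained in NPT molecular dynamics from volume fluctuations via κ_T = ⟨δV²⟩/(k_B T ⟨V⟩). A minimal sketch using a synthetic volume series with assumed water-like numbers (not data from the paper):

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def isothermal_compressibility(volumes_m3, T):
    """NPT fluctuation formula: kappa_T = <dV^2> / (kB * T * <V>).
    In a real study the volume series would come from an MD trajectory
    (e.g. SPC/E water in an NPT ensemble); here it is synthetic."""
    v = np.asarray(volumes_m3)
    return v.var() / (KB * T * v.mean())

# Synthetic volume trajectory constructed to reproduce a water-like
# kappa_T of ~4.5e-10 1/Pa at 298 K for a ~(2.5 nm)^3 box.
T = 298.0
v_mean = 1.56e-26                         # m^3, assumed box volume
target_kappa = 4.5e-10                    # 1/Pa, assumed target value
sigma = np.sqrt(target_kappa * KB * T * v_mean)
rng = np.random.default_rng(3)
vols = rng.normal(v_mean, sigma, 200_000)
kappa = isothermal_compressibility(vols, T)
print(kappa)   # recovers roughly the target compressibility
```

The same fluctuation-formula pattern (with energy or enthalpy fluctuations) yields the heat capacities and thermal expansion coefficient listed in the abstract.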
Toward an Accurate Prediction of the Arrival Time of Geomagnetic-Effective Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Shi, T.; Wang, Y.; Wan, L.; Cheng, X.; Ding, M.; Zhang, J.
2015-12-01
Accurately predicting the arrival of coronal mass ejections (CMEs) at the Earth based on remote images is of critical significance for the study of space weather. Here we make a statistical study of 21 Earth-directed CMEs, specifically exploring the relationship between CME initial speeds and transit times. The initial speed of a CME is obtained by fitting the CME with the Graduated Cylindrical Shell model and is thus free of projection effects. We then use the drag force model to fit the transit time versus the initial speed. By adopting different drag regimes, i.e., the viscous, aerodynamic, and hybrid regimes, we get similar results, with the hybrid model giving the smallest mean estimation error, 12.9 hr. CMEs with a propagation angle (the angle between the propagation direction and the Sun-Earth line) larger than their half-angular widths arrive at the Earth with an angular deviation caused by factors other than the radial solar wind drag, and the drag force model cannot be reliably applied to such events. If we exclude these events from the sample, the prediction accuracy improves, i.e., the estimation error reduces to 6.8 hr. This work suggests that it is viable to predict the arrival time of CMEs at the Earth based on initial parameters with fairly good accuracy, and thus provides a method of forecasting space weather 1-5 days after the occurrence of a CME.
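A common form of the drag force model integrates dv/dt = -γ(v - w)|v - w| from the initial observation radius to 1 AU, with w the solar wind speed. The sketch below uses assumed values for γ, w, and the starting radius (they are not the fitted parameters of this study):

```python
def cme_transit_time(v0_kms, w_kms=400.0, gamma=2e-8, r0_rsun=20.0):
    """Integrate the drag-based model dv/dt = -gamma*(v-w)*|v-w| from
    r0 to 1 AU with explicit Euler steps; returns transit time in hours.
    gamma (1/km), w, and r0 are illustrative assumed values."""
    AU_KM, RSUN_KM = 1.495978707e8, 6.957e5
    r, v, t = r0_rsun * RSUN_KM, v0_kms, 0.0
    dt = 600.0  # time step, s
    while r < AU_KM:
        v += -gamma * (v - w_kms) * abs(v - w_kms) * dt  # drag deceleration
        r += v * dt
        t += dt
    return t / 3600.0

t_slow = cme_transit_time(500.0)
t_fast = cme_transit_time(2000.0)
print(t_slow, t_fast)   # faster CMEs arrive sooner, both within ~1-5 days
```

Because the drag decelerates fast CMEs toward the ambient wind speed, the transit time saturates at high initial speeds, which is the qualitative shape of the speed-transit-time relation fitted in the paper.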
Towards first-principles based prediction of highly accurate electrochemical Pourbaix diagrams
NASA Astrophysics Data System (ADS)
Zeng, Zhenhua; Chan, Maria; Greeley, Jeff
2015-03-01
Electrochemical Pourbaix diagrams lie at the heart of aqueous electrochemical processes and are central to the identification of stable phases of metals for processes ranging from electrocatalysis to corrosion. Even though standard DFT calculations are potentially powerful tools for the prediction of such Pourbaix diagrams, inherent errors in the description of strongly correlated transition metal (hydr)oxides, together with the neglect of weak van der Waals (vdW) interactions, have limited the reliability of the predictions for even the simplest bulk systems; corresponding predictions for more complex alloy or surface structures are even more challenging. Through the introduction of a Hubbard U correction, the employment of a state-of-the-art van der Waals functional, and the use of pure water as a reference state for the calculations, these errors are systematically corrected. The strong performance is illustrated on a series of bulk transition metal (Mn, Fe, Co and Ni) hydroxides, oxyhydroxides, and binary and ternary oxides, where the corresponding thermodynamics of oxidation and reduction can be accurately described with standard errors of less than 0.04 eV in comparison with experiment.
Intermolecular potentials and the accurate prediction of the thermodynamic properties of water
Shvab, I.; Sadus, Richard J.
2013-11-21
The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm³ for a wide range of temperatures (298–650 K) and pressures (0.1–700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.
Direct Pressure Monitoring Accurately Predicts Pulmonary Vein Occlusion During Cryoballoon Ablation
Kosmidou, Ioanna; Wooden, Shannnon; Jones, Brian; Deering, Thomas; Wickliffe, Andrew; Dan, Dan
2013-01-01
Cryoballoon ablation (CBA) is an established therapy for atrial fibrillation (AF). Pulmonary vein (PV) occlusion is essential for achieving antral contact and PV isolation and is typically assessed by contrast injection. We present a novel method of direct pressure monitoring for assessment of PV occlusion. Transcatheter pressure is monitored during balloon advancement to the PV antrum. Pressure is recorded via a single pressure transducer connected to the inner lumen of the cryoballoon. Pressure curve characteristics are used to assess occlusion in conjunction with fluoroscopic or intracardiac echocardiography (ICE) guidance. PV occlusion is confirmed when loss of typical left atrial (LA) pressure waveform is observed with recordings of PA pressure characteristics (no A wave and rapid V wave upstroke). Complete pulmonary vein occlusion as assessed with this technique has been confirmed with concurrent contrast utilization during the initial testing of the technique and has been shown to be highly accurate and readily reproducible. We evaluated the efficacy of this novel technique in 35 patients. A total of 128 veins were assessed for occlusion with the cryoballoon utilizing the pressure monitoring technique; occlusive pressure was demonstrated in 113 veins with resultant successful pulmonary vein isolation in 111 veins (98.2%). Occlusion was confirmed with subsequent contrast injection during the initial ten procedures, after which contrast utilization was rapidly reduced or eliminated given the highly accurate identification of occlusive pressure waveform with limited initial training. Verification of PV occlusive pressure during CBA is a novel approach to assessing effective PV occlusion and it accurately predicts electrical isolation. Utilization of this method results in significant decrease in fluoroscopy time and volume of contrast. PMID:23485956
A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows
NASA Astrophysics Data System (ADS)
Bijleveld, H. A.; Veldman, A. E. P.
2014-12-01
A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software, as it is very accurate and computationally inexpensive. This study shows results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive, thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method to rotating flows. The demonstrated capabilities indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.
NASA Astrophysics Data System (ADS)
Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.
2016-07-01
In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. Parameters concerning the location of the observation points (magnetometers) are therefore studied to this end. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study enable the magnetometers to be placed close to the EUT, thus achieving a high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as the EUT.
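The core of a distance power-law approach is fitting B(r) = A/rⁿ to near-field measurements and extrapolating outward. A minimal sketch with synthetic dipole-like data (the exponent, amplitude, and noise level are assumptions, not values from the paper):

```python
import numpy as np

def fit_falloff(r, b):
    """Fit B(r) = A / r^n by linear least squares in log-log space and
    return (A, n). For an ideal magnetic dipole n is 3; higher-order
    multipole contributions make the fall-off steeper."""
    slope, intercept = np.polyfit(np.log(r), np.log(b), 1)
    return np.exp(intercept), -slope

# Synthetic near-field measurements of a dipole-like source (n = 3)
# with 1% measurement noise; distances in metres, field in nT.
rng = np.random.default_rng(4)
r = np.array([0.3, 0.4, 0.5, 0.7, 1.0])
b = 50.0 / r**3 * (1.0 + rng.normal(0.0, 0.01, r.size))
A, n = fit_falloff(r, b)
print(n)             # recovered exponent, close to 3
print(A / 2.0**3)    # extrapolated magnetic signature at r = 2 m
```

Once (A, n) are known, the magnetic signature at any compliance distance follows directly, which is what allows the magnetometers to sit close to the EUT where the SNR is high.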
Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain
Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul
2010-05-14
Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.
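A solar reflectance metric is an irradiance-weighted average of the spectral reflectance, R = ∫ρ(λ)I(λ)dλ / ∫I(λ)dλ, so two metrics differ exactly when their irradiance spectra weight the bands differently. The toy two-band spectra below are assumptions to illustrate the mechanism, not the E891 or AM1GH spectra themselves:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (written out for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def solar_reflectance(wavelength_nm, rho, irradiance):
    """Irradiance-weighted average of spectral reflectance:
    R = integral(rho * I) / integral(I) over wavelength."""
    return (_trapz(rho * irradiance, wavelength_nm)
            / _trapz(irradiance, wavelength_nm))

# A "cool colored" surface: dark in the visible, reflective in the NIR.
wl = np.linspace(400.0, 2500.0, 2101)
rho = np.where(wl < 700.0, 0.1, 0.8)
low_nir  = np.where(wl < 700.0, 1.0, 0.5)   # spectrum with less NIR weight
high_nir = np.where(wl < 700.0, 1.0, 0.9)   # spectrum with more NIR weight
r_low = solar_reflectance(wl, rho, low_nir)
r_high = solar_reflectance(wl, rho, high_nir)
print(r_low, r_high)
```

The NIR-rich spectrum reports a higher reflectance for the same surface, i.e. a lower apparent heat gain, which is the direction of the bias the abstract attributes to the beam-normal E891 metric.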
Measuring solar reflectance - Part I: Defining a metric that accurately predicts solar heat gain
Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul
2010-09-15
Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland US latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool roof net energy savings by as much as 23%. We define clear sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.
Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru
2016-01-01
Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Identification of PPIs is therefore critical for deciphering molecular mechanisms and for providing further insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs found by experimental approaches cover only a small fraction of whole PPI networks; moreover, those approaches have inherent disadvantages, such as being time-consuming and expensive, and having a high false positive rate. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixture of physicochemical and evolutionary-based feature extraction method for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method consist mainly in introducing an effective feature extraction method that can capture discriminative features from evolutionary-based information and physicochemical characteristics, after which a powerful and robust DVM classifier is employed. To the best of our knowledge, this is the first time the DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to traditional experimental methods for future proteomics research. PMID:27571061
Dal Moro, F; Abate, A; Lanckriet, G R G; Arandjelovic, G; Gasparella, P; Bassi, P; Mancini, M; Pagano, F
2006-01-01
The objective of this study was to optimally predict the spontaneous passage of ureteral stones in patients with renal colic by applying, for the first time, support vector machines (SVM), an instance of kernel methods, for classification. After reviewing the results found in the literature, we compared the performances obtained with logistic regression (LR) and accurately trained artificial neural networks (ANN) to those obtained with SVM, namely the standard SVM and the linear programming SVM (LP-SVM); the latter techniques show improved performance. Moreover, we rank the prediction factors according to their importance using Fisher scores and the LP-SVM feature weights. A data set of 1163 patients affected by renal colic was analyzed and restricted to single out a statistically coherent subset of 402 patients. Nine clinical factors are used as inputs for the classification algorithms to predict one binary output. The algorithms are cross-validated by training and testing on randomly selected train- and test-set partitions of the data and reporting the average performance on the test sets. The SVM-based approaches obtained a sensitivity of 84.5% and a specificity of 86.9%. The feature ranking based on LP-SVM gives the highest importance to stone size, stone position and symptom duration before check-up. We propose a statistically correct way of employing LR, ANN and SVM for the prediction of spontaneous passage of ureteral stones in patients with renal colic. SVM outperformed ANN, as well as LR. This study will soon be translated into a practical software toolbox for actual clinical usage. PMID:16374437
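At its core, a linear SVM finds a hyperplane separating the two outcome classes with maximum margin. The following is a minimal sub-gradient (Pegasos-style) sketch on synthetic data, not the solvers or the clinical dataset used in the study; the two "clinical" features are hypothetical stand-ins.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM trained by sub-gradient descent on the
    regularized hinge loss. y must be in {-1, +1}; returns (w, b)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1.0                 # samples violating the margin
        if viol.any():
            w -= lr * (lam * w - (y[viol][:, None] * X[viol]).mean(axis=0))
            b -= lr * (-y[viol].mean())
        else:
            w -= lr * lam * w                # only the regularizer remains
    return w, b

# Synthetic binary outcome driven by two features (think "stone size"
# and "symptom duration", purely illustrative).
rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, (400, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0.0, 1, -1)
w, b = train_linear_svm(X, y)
acc = float(np.mean(np.sign(X @ w + b) == y))
print(acc)   # high accuracy on this linearly separable toy problem
```

As in the study's LP-SVM feature ranking, the magnitudes of the learned weights w indicate which inputs drive the decision most strongly.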
Parameterization of mires in a numerical weather prediction model
NASA Astrophysics Data System (ADS)
Yurova, Alla; Tolstykh, Mikhail; Nilsson, Mats; Sirin, Andrey
2014-11-01
Mires (peat-accumulating wetlands) occupy 8.1% of Russian territory and are especially numerous in the western Siberian Lowlands, where they can significantly modify atmospheric heat and water balances. They also influence air temperatures and humidity in the boundary layers closest to the earth's surface. The purpose of our study was to incorporate the influence of mires into the SL-AV numerical weather prediction model, which is used operationally in the Hydrometeorological Center of Russia. This was done by adjusting the multilayer soil component (by modifying the peat thermal conductivity in the heat diffusion equation and reformulating the lower boundary condition for the Richards equation), and by reformulating both the evapotranspiration and the runoff from mires. When evaporation from mires was incorporated into the SL-AV model, the latent heat flux in the areas dominated by mires increased strongly, resulting in surface cooling and hence reductions in the sensible heat flux and outgoing terrestrial long-wave radiation. The results presented show that including mires significantly decreased the bias and RMSE of predictions of temperature and relative humidity 2 m above the ground for lead times of 12, 36, and 60 h from 00 h Coordinated Universal Time (evening conditions), but did not eliminate the bias in forecasts for lead times of 24, 48, and 72 h (morning conditions) in Siberia. Different parameterizations of mire evapotranspiration are also compared.
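The soil-component adjustment centres on the heat diffusion equation with a layer-dependent (e.g. lowered for peat) thermal conductivity. A minimal explicit finite-difference step for such a layered column, with hypothetical variable names and nothing to do with the actual SL-AV code, could look like:

```python
def step_heat(T, k, dz, dt, rho_c):
    """One explicit finite-difference step of the 1-D heat diffusion
    equation rho_c * dT/dt = d/dz(k dT/dz) on a layered soil column.
    k[i] is the thermal conductivity of layer i (reduced for peat),
    rho_c[i] its volumetric heat capacity. Boundary values held fixed."""
    Tn = T[:]
    for i in range(1, len(T) - 1):
        flux_up = k[i] * (T[i + 1] - T[i]) / dz      # heat flux from below
        flux_dn = k[i - 1] * (T[i] - T[i - 1]) / dz  # heat flux to above
        Tn[i] = T[i] + dt * (flux_up - flux_dn) / (dz * rho_c[i])
    return Tn
```

Lowering `k` in the peat layers slows the downward heat penetration, which is one of the effects the parameterization captures.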
Impact of Quikscat Data on Numerical Weather Prediction
NASA Technical Reports Server (NTRS)
Atlas, Robert
2002-01-01
One of the important applications of satellite surface wind observations is to increase the accuracy of weather analyses and forecasts. Satellite surface wind data can improve numerical weather prediction (NWP) model forecasts by contributing to improved analyses of the surface wind field and air-sea fluxes. Through the data assimilation process, these data can also improve atmospheric mass and motion fields in the free atmosphere above the surface. The SeaWinds scatterometer on the QuikSCAT satellite was launched in July 1999 and represented a dramatic departure in design from the other scatterometer instruments launched during the past decade (ERS-1,2 and NSCAT). The NASA Data Assimilation Office (DAO) was the first data assimilation center to assimilate QuikSCAT SeaWinds data and evaluate their impact on numerical weather prediction. Following the launch of QuikSCAT, a detailed evaluation of the initial surface wind data sets was performed as part of a collaborative project between the Environmental Modeling Center of NCEP, NESDIS, and the DAO. More recently, the impact of QuikSCAT data was evaluated in detailed experiments using the NCEP operational data assimilation system. As a result of the beneficial impact obtained, NCEP began operational utilization of QuikSCAT data. Results from these experiments as well as recent DAO assimilation experiments showing the impact of QuikSCAT data on stratospheric analyses and forecasts will be presented at the meeting.
Bangalore, Sai Santosh; Wang, Jelai; Allison, David B.
2009-01-01
In the fields of genomics and high dimensional biology (HDB), massive multiple testing prompts the use of extremely small significance levels. Because tail areas of statistical distributions are needed for hypothesis testing, the accuracy of these areas is important to confidently make scientific judgments. Previous work on accuracy was primarily focused on evaluating professionally written statistical software, like SAS, on the Statistical Reference Datasets (StRD) provided by the National Institute of Standards and Technology (NIST) and on the accuracy of tail areas in statistical distributions. The goal of this paper is to provide guidance to investigators who are developing their own custom scientific software built upon numerical libraries written by others. Specifically, we evaluate the accuracy of small tail areas from cumulative distribution functions (CDF) of the Chi-square and t-distribution by comparing several open-source, free, or commercially licensed numerical libraries in Java, C, and R to widely accepted standards of comparison like ELV and DCDFLIB. In our evaluation, the C libraries and R functions are consistently accurate up to six significant digits. Amongst the evaluated Java libraries, Colt is most accurate. These languages and libraries are popular choices among programmers developing scientific software, so the results herein can be useful to programmers in choosing libraries for CDF accuracy.  PMID:20161126
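One practical way to spot-check a library's tail-area accuracy, in the spirit of the comparison above, is against a distribution with a closed-form tail. For example, the chi-square distribution with 2 degrees of freedom has upper tail exactly exp(-x/2). The helper below is a generic illustration, not code from the paper:

```python
import math

def chi2_sf_2dof(x):
    """Upper-tail area (survival function) of the chi-square distribution
    with 2 degrees of freedom, which has the closed form exp(-x/2).
    Closed forms like this make handy reference values when checking a
    numerical library's CDF at extremely small significance levels."""
    return math.exp(-x / 2.0)
```

Comparing a library's reported tail area at, say, x ≈ 46.05 (true tail 1e-10) against this closed form reveals how many significant digits survive far out in the tail.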
An operational phenological model for numerical pollen prediction
NASA Astrophysics Data System (ADS)
Scheifinger, Helfried
2010-05-01
The general prevalence of seasonal allergic rhinitis is estimated to be about 15% in Europe, and still increasing. Pre-emptive measures require both the reliable assessment of the production and release of various pollen species and the forecasting of their atmospheric dispersion. For this purpose, numerical pollen prediction schemes are being developed by a number of European weather services in order to supplement and improve the qualitative pollen prediction systems with state-of-the-art instruments. Pollen emission is spatially and temporally highly variable throughout the vegetation period and not directly observed, which precludes a straightforward application of dispersion models to simulate pollen transport. Even the beginning and end of flowering, which indicate the time period of potential pollen emission, are not (yet) available in real time. One way to create a proxy for the beginning, the course, and the end of the pollen emission is to simulate it as a function of real-time temperature observations. In this work, the European phenological data set of the COST725 initiative forms the basis for modelling the beginning of flowering of 15 species, some of which emit allergenic pollen. In order to keep the problem as simple as possible for the sake of spatial interpolation, a 3-parameter temperature-sum model was implemented in a real-time operational procedure, which calculates the spatial distribution of the entry dates for the current day and 24, 48 and 72 hours in advance. As a stand-alone phenological model, and combined with back trajectories, it is intended to support the qualitative pollen prediction scheme at the Austrian national weather service. In addition, it is planned to incorporate it into a numerical pollen dispersion model. More details, open questions and first results of the operational phenological model will be discussed and presented.
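A temperature-sum model of the kind described is typically parameterized by a starting day, a base temperature, and a critical heat sum: degree-days above the base are accumulated from the starting day, and flowering begins when the sum crosses the threshold. The sketch below assumes that common three-parameter form; the exact formulation and parameter values of the COST725-based model may differ:

```python
def flowering_onset(daily_temps, t0, t_base, f_crit):
    """Three-parameter temperature-sum phenology model: starting at day
    index t0, accumulate growing-degree days max(0, T - t_base); the
    predicted entry date is the first day the sum reaches f_crit.
    Returns None if the threshold is never reached."""
    s = 0.0
    for day in range(t0, len(daily_temps)):
        s += max(0.0, daily_temps[day] - t_base)
        if s >= f_crit:
            return day
    return None
```

Run against forecast rather than observed temperatures, the same function yields the 24-, 48- and 72-hour-ahead entry dates.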
ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers
Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.
2009-02-01
A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina
Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish
2016-01-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
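The core of the linear-nonlinear model is a projection of the stimulus vector onto the electrical receptive field followed by a saturating nonlinearity. The sketch below uses a logistic nonlinearity for illustration; the paper estimates both the subspace (via principal components analysis) and the nonlinearity from data, so the specific sigmoid and parameters here are assumptions:

```python
import math

def spike_probability(stimulus, erf, bias=0.0, gain=1.0):
    """Linear-nonlinear sketch: project the multi-electrode stimulus
    vector onto the electrical receptive field (a vector of per-electrode
    weights, e.g. a leading principal component), then pass the linear
    drive through a logistic nonlinearity to get a spiking probability."""
    drive = sum(s * w for s, w in zip(stimulus, erf))
    return 1.0 / (1.0 + math.exp(-(gain * drive + bias)))
```

Because evaluation is just a dot product and one sigmoid, the model is cheap enough for the real-time, closed-loop stimulation strategies mentioned in the abstract.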
Can CO2 assimilation in maize leaves be predicted accurately from chlorophyll fluorescence analysis?
Edwards, G E; Baker, N R
1993-08-01
Analysis is made of the energetics of CO2 fixation, the photochemical quantum requirement per CO2 fixed, and sinks for utilising reductive power in the C4 plant maize. CO2 assimilation is the primary sink for energy derived from photochemistry, whereas photorespiration and nitrogen assimilation are relatively small sinks, particularly in developed leaves. Measurements of O2 exchange by mass spectrometry and CO2 exchange by infrared gas analysis under varying levels of CO2 indicate that there is a very close relationship between the true rate of O2 evolution from PS II and the net rate of CO2 fixation. Consideration is given to measurements of the quantum yields of PS II (φPSII) from fluorescence analysis and of CO2 assimilation (φCO2) in maize over a wide range of conditions. The φPSII/φCO2 ratio was found to remain reasonably constant (ca. 12) over a range of physiological conditions in developed leaves, with varying temperature, CO2 concentrations, light intensities (from 5% to 100% of full sunlight), and following photoinhibition under high light and low temperature. A simple model for predicting CO2 assimilation from fluorescence parameters is presented and evaluated. It is concluded that under a wide range of conditions fluorescence parameters can be used to accurately and rapidly predict CO2 assimilation rates in maize. PMID:24317706
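The near-constant φPSII/φCO2 ratio of about 12 suggests a one-line predictor: divide the fluorescence-derived PS II quantum yield by the ratio to estimate the quantum yield of CO2 fixation, then multiply by the absorbed photon flux. This is a reading of the abstract's simple model, not the authors' published equations, and the function name and units are assumed:

```python
def co2_assimilation(phi_psii, absorbed_ppfd, ratio=12.0):
    """Estimate net CO2 assimilation (umol CO2 m^-2 s^-1) from the PS II
    quantum yield measured by fluorescence, using the empirically
    near-constant ratio phi_PSII / phi_CO2 ~ 12 reported for developed
    maize leaves. absorbed_ppfd is absorbed photon flux (umol m^-2 s^-1)."""
    phi_co2 = phi_psii / ratio
    return phi_co2 * absorbed_ppfd
```

For example, φPSII = 0.6 at an absorbed flux of 1200 μmol m⁻² s⁻¹ predicts an assimilation rate of about 60 μmol CO2 m⁻² s⁻¹ under these assumptions.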
Young, Jonathan; Modat, Marc; Cardoso, Manuel J.; Mendelson, Alex; Cash, Dave; Ourselin, Sebastien
2013-01-01
Accurately identifying the patients that have mild cognitive impairment (MCI) who will go on to develop Alzheimer's disease (AD) will become essential as new treatments will require identification of AD patients at earlier stages in the disease process. Most previous work in this area has centred around the same automated techniques used to diagnose AD patients from healthy controls, by coupling high dimensional brain image data or other relevant biomarker data to modern machine learning techniques. Such studies can now distinguish between AD patients and controls as accurately as an experienced clinician. Models trained on patients with AD and control subjects can also distinguish between MCI patients that will convert to AD within a given timeframe (MCI-c) and those that remain stable (MCI-s), although differences between these groups are smaller and thus the corresponding accuracy is lower. The most common type of classifier used in these studies is the support vector machine, which gives categorical class decisions. In this paper, we introduce Gaussian process (GP) classification to the problem. This fully Bayesian method produces naturally probabilistic predictions, which we show correlate well with the actual chances of converting to AD within 3 years in a population of 96 MCI-s and 47 MCI-c subjects. Furthermore, we show that GPs can integrate multimodal data (in this study volumetric MRI, FDG-PET, cerebrospinal fluid, and APOE genotype) with the classification process through the use of a mixed kernel. The GP approach aids combination of different data sources by learning parameters automatically from training data via type-II maximum likelihood, which we compare to a more conventional method based on cross validation and an SVM classifier. When the resulting probabilities from the GP are dichotomised to produce a binary classification, the results for predicting MCI conversion based on the combination of all three types of data show a balanced accuracy
Accurate and Robust Genomic Prediction of Celiac Disease Using Statistical Learning
Abraham, Gad; Tye-Din, Jason A.; Bhalala, Oneil G.; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael
2014-01-01
Practical application of genomic-based risk stratification to clinical diagnosis is appealing yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity replicated both in cross-validation within each cohort (AUC of 0.87–0.89) and in independent replication across cohorts (AUC of 0.86–0.9), despite differences in ethnicity. The models explained 30–35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher risk for CD can identify those that would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases. PMID:24550740
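Tuning the threshold cut-off trades negative predictive value against the size of the low-risk group. A small helper makes the mechanics concrete; the function name and toy inputs are illustrative, not the study's pipeline:

```python
def npv_at_threshold(scores, labels, threshold):
    """Negative predictive value of a risk score at a given cut-off:
    among individuals called low-risk (score < threshold), the fraction
    who truly do not have the disease (label 0). Returns None when no
    one falls below the threshold."""
    low = [(s, y) for s, y in zip(scores, labels) if s < threshold]
    if not low:
        return None
    return sum(1 for _, y in low if y == 0) / len(low)
```

Sweeping `threshold` over the score range reproduces the kind of operating-point adjustment described in the abstract: a lower cut-off raises NPV toward the reported ≥99.6% at the cost of calling fewer people low-risk.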
NASA Astrophysics Data System (ADS)
Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid
2016-07-01
We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].
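The principal part of the system is advanced with an explicit upwind finite-volume method. As a much-simplified relative of that idea (scalar advection rather than the coupled path-conservative Saint-Venant-Hirano system, which needs the full matrix-vector machinery of the paper), a first-order upwind update looks like:

```python
def upwind_step(q, speed, dx, dt):
    """First-order explicit finite-volume upwind update for the scalar
    advection equation q_t + a q_x = 0 with a = speed > 0, the basic
    building block generalized by path-conservative schemes. Periodic
    boundaries via Python's negative indexing."""
    c = speed * dt / dx  # CFL number; stability requires c <= 1
    return [q[i] - c * (q[i] - q[i - 1]) for i in range(len(q))]
```

With c = 1 the update shifts the profile exactly one cell per step, and for c < 1 it is conservative but numerically diffusive, which is why the paper extends its scheme to second-order accuracy.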
NASA Astrophysics Data System (ADS)
Jiang, Shidong; Luo, Li-Shi
2016-07-01
The integral equation for the flow velocity u(x;k) in the steady Couette flow derived from the linearized Bhatnagar-Gross-Krook-Welander kinetic equation is studied in detail both theoretically and numerically in a wide range of the Knudsen number k between 0.003 and 100.0. First, it is shown that the integral equation is a Fredholm equation of the second kind in which the norm of the compact integral operator is less than 1 on Lp for any 1 ≤ p ≤ ∞ and thus there exists a unique solution to the integral equation via the Neumann series. Second, it is shown that the solution is logarithmically singular at the endpoints. More precisely, if x = 0 is an endpoint, then the solution can be expanded as a double power series of the form ∑_{n=0}^{∞} ∑_{m=0}^{∞} c_{n,m} x^n (x ln x)^m about x = 0 on a small interval x ∈ (0, a) for some a > 0. And third, a high-order adaptive numerical algorithm is designed to compute the solution numerically to high precision. The solutions for the flow velocity u(x;k), the stress P_{xy}(k), and the half-channel mass flow rate Q(k) are obtained in a wide range of the Knudsen number 0.003 ≤ k ≤ 100.0; these solutions are accurate to at least twelve significant digits, and thus they can be used as benchmark solutions.
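The existence argument above is constructive: because the operator norm is below 1, the Neumann series u = f + Kf + K²f + … converges, and fixed-point iteration u ← f + Ku realizes it numerically. A minimal sketch on a quadrature-discretized kernel (a generic illustration, not the paper's high-order adaptive algorithm):

```python
def neumann_solve(kernel, f, n_iter=200):
    """Solve the discretized Fredholm equation of the second kind
    u = f + K u by summing the Neumann series via the fixed-point
    iteration u_{m+1} = f + K u_m, which converges when ||K|| < 1.
    kernel is an n-by-n matrix (quadrature weights folded in),
    f a length-n right-hand-side vector."""
    n = len(f)
    u = f[:]
    for _ in range(n_iter):
        Ku = [sum(kernel[i][j] * u[j] for j in range(n)) for i in range(n)]
        u = [f[i] + Ku[i] for i in range(n)]
    return u
```

The geometric convergence rate equals ||K||, so the iteration slows as the norm approaches 1; the paper's adaptive high-order discretization is what pushes the final answers to twelve or more significant digits.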
Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay
2012-07-01
Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes such as hypofractionated breast therapy have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against doses measured by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS XiO® treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between the absorbed doses measured by TLD and those calculated by the TPS (p > 0.05, 1-tailed). Dose agreement within 2.21% was found. The deviations from the calculated absorbed doses were overall larger (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool to assess surface dose. Our studies have shown that radiation treatment accuracy, expressed as a comparison between calculated doses (by TPS) and measured doses (by TLD dosimetry), can be accurately predicted for tangential treatment of the chest wall after mastectomy.
Numerical Prediction of SERN Performance using WIND code
NASA Technical Reports Server (NTRS)
Engblom, W. A.
2003-01-01
Computational results are presented for the performance and flow behavior of single-expansion ramp nozzles (SERNs) during overexpanded operation and transonic flight. Three-dimensional Reynolds-Averaged Navier-Stokes (RANS) results are obtained for two vehicle configurations, including the NASP Model 5B and the ISTAR RBCC (a variant of the X-43B), using the WIND code. Numerical predictions for nozzle integrated forces and pitch moments are directly compared to experimental data for the NASP Model 5B, and adequate-to-excellent agreement is found. The sensitivity of SERN performance and separation phenomena to freestream static pressure and Mach number is demonstrated via a matrix of cases for both vehicles. 3-D separation regions are shown to be induced by either lateral (e.g., sidewall) shocks or vertical (e.g., cowl trailing edge) shocks. Finally, the implications of this work for future preliminary design efforts involving SERNs are discussed.
More accurate predictions with transonic Navier-Stokes methods through improved turbulence modeling
NASA Technical Reports Server (NTRS)
Johnson, Dennis A.
1989-01-01
Significant improvements in predictive accuracy for off-design conditions are achievable through better turbulence modeling, without necessarily adding any significant complication to the numerics. One well-established fact about turbulence is that it is slow to respond to changes in the mean strain field. The 'equilibrium' algebraic turbulence models make no attempt to model this characteristic, and as a consequence these turbulence models exaggerate the turbulent boundary layer's ability to produce turbulent Reynolds shear stresses in regions of adverse pressure gradient. Consequently, too little momentum loss within the boundary layer is predicted in the region of the shock wave and along the aft part of the airfoil where the surface pressure undergoes further increases. Recently, a 'nonequilibrium' algebraic turbulence model was formulated which attempts to capture this important characteristic of turbulence. This 'nonequilibrium' algebraic model employs an ordinary differential equation to model the slow response of the turbulence to changes in local flow conditions. In its original form, there was some question as to whether this 'nonequilibrium' model performed as well as the 'equilibrium' models for weak interaction cases. However, the turbulence model has since been further improved, and it now appears to perform at least as well as the 'equilibrium' models for weak interaction cases while representing a very significant improvement for strong interaction cases. The performance of this turbulence model relative to popular 'equilibrium' models is illustrated for three airfoil test cases of the 1987 AIAA Viscous Transonic Airfoil Workshop, Reno, Nevada. A form of this 'nonequilibrium' turbulence model is currently being applied to wing flows, for which similar improvements in predictive accuracy are being realized.
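The essence of the lag ODE mentioned above is a relaxation of the modeled turbulence quantity toward its local equilibrium value over a finite time scale, so the turbulence does not respond instantly to the mean strain field. The sketch below shows that idea in its simplest generic form (a linear relaxation with an exact exponential update); the actual model's ODE, variables, and time scale differ:

```python
import math

def relax_turbulence(nu_t, nu_eq, dt, tau):
    """One step of a generic lag ODE of the kind used by 'nonequilibrium'
    algebraic models: d(nu_t)/dt = (nu_eq - nu_t) / tau, so the modeled
    eddy viscosity nu_t approaches the local 'equilibrium' value nu_eq
    only over the relaxation time tau. Exact exponential update,
    unconditionally stable for any dt > 0."""
    decay = math.exp(-dt / tau)
    return nu_eq + (nu_t - nu_eq) * decay
```

When `nu_eq` drops abruptly, as across a shock, `nu_t` lags behind for a time of order `tau`, which is precisely the behavior the 'equilibrium' models miss.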
Fromer, Menachem; Yanover, Chen
2009-05-15
The task of engineering a protein to assume a target three-dimensional structure is known as protein design. Computational search algorithms are devised to predict a minimal energy amino acid sequence for a particular structure. In practice, however, an ensemble of low-energy sequences is often sought. Primarily, this is performed because an individual predicted low-energy sequence may not necessarily fold to the target structure because of both inaccuracies in modeling protein energetics and the nonoptimal nature of search algorithms employed. Additionally, some low-energy sequences may be overly stable and thus lack the dynamic flexibility required for biological functionality. Furthermore, the investigation of low-energy sequence ensembles will provide crucial insights into the pseudo-physical energy force fields that have been derived to describe structural energetics for protein design. Significantly, numerous studies have predicted low-energy sequences, which were subsequently synthesized and demonstrated to fold to desired structures. However, the characterization of the sequence space defined by such energy functions as compatible with a target structure has not been performed in full detail. This issue is critical for protein design scientists to successfully continue using these force fields at an ever-increasing pace and scale. In this paper, we present a conceptually novel algorithm that rapidly predicts the set of lowest energy sequences for a given structure. Based on the theory of probabilistic graphical models, it performs efficient inspection and partitioning of the near-optimal sequence space, without making any assumptions of positional independence. We benchmark its performance on a diverse set of relevant protein design examples and show that it consistently yields sequences of lower energy than those derived from state-of-the-art techniques. Thus, we find that previously presented search techniques do not fully depict the low-energy space as
Laser Hardening Prediction Tool Based On a Solid State Transformations Numerical Model
Martinez, S.; Ukar, E.; Lamikiz, A.
2011-01-17
This paper presents a tool to predict the hardened layer in selective laser hardening processes, where the laser beam heats the part locally while the bulk acts as a heat sink. The tool used to predict the temperature field in the workpiece accurately is a numerical model that combines a three-dimensional transient numerical solution for heating in which it is possible to introduce different laser sources. The solid-state transformations were modeled using a kinetic model based on the Johnson-Mehl-Avrami equation. Based on this equation, an experimental adjustment of the transformation parameters was carried out to obtain the continuous-heating-transformation (CHT) diagrams. With the temperature field and the CHT diagrams, the model predicts the percentage of base material converted into austenite. These two parameters are used as a first step to estimate the depth of the hardened layer in the part. The model has been adjusted and validated with experimental data for DIN 1.2379, a cold work tool steel typically used in the mold and die making industry. This steel presents solid-state diffusive transformations at relatively low temperature. These transformations must be considered in order to achieve good accuracy in the temperature field prediction during the heating phase. For model validation, the surface temperature measured by pyrometry and the thermal field, as well as the hardened layer obtained from a metallographic study, were compared with the model data, showing good agreement.
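The Johnson-Mehl-Avrami kinetics referenced above take the standard form X(t) = 1 - exp(-k tⁿ) for the transformed phase fraction at constant temperature. A one-liner makes this concrete; the rate constant k and exponent n here are placeholders for the experimentally fitted CHT parameters:

```python
import math

def jma_fraction(t, k, n):
    """Johnson-Mehl-Avrami transformed phase fraction at time t under
    isothermal conditions: X(t) = 1 - exp(-k * t**n). In the laser
    hardening model, k and n are fitted from continuous-heating
    experiments, and X is the fraction of base material austenitized."""
    return 1.0 - math.exp(-k * t ** n)
```

Evaluating X along the computed temperature history (with temperature-dependent k) is what links the thermal solution to the predicted hardened-layer depth.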
Wong, Florence; O’Leary, Jacqueline G; Reddy, K Rajender; Patton, Heather; Kamath, Patrick S; Fallon, Michael B; Garcia-Tsao, Guadalupe; Subramanian, Ram M.; Malik, Raza; Maliakkal, Benedict; Thacker, Leroy R; Bajaj, Jasmohan S
2015-01-01
Background & Aims: A consensus conference proposed that cirrhosis-associated acute kidney injury (AKI) be defined as an increase in serum creatinine by >50% from the stable baseline value in <6 months or by ≥0.3 mg/dL in <48 hrs. We prospectively evaluated the ability of these criteria to predict mortality within 30 days among hospitalized patients with cirrhosis and infection. Methods: 337 patients with cirrhosis who were admitted with an infection or developed one in hospital (56% men; 56±10 y old; model for end-stage liver disease [MELD] score, 20±8) were followed. We compared data on 30-day mortality, hospital length of stay, and organ failure between patients with and without AKI. Results: 166 (49%) developed AKI during hospitalization, based on the consensus criteria. Patients who developed AKI had higher admission Child-Pugh scores (11.0±2.1 vs 9.6±2.1; P<.0001) and MELD scores (23±8 vs 17±7; P<.0001), and lower mean arterial pressure (81±16 mmHg vs 85±15 mmHg; P<.01), than those who did not. Also higher among patients with AKI were mortality in ≤30 days (34% vs 7%), intensive care unit transfer (46% vs 20%), ventilation requirement (27% vs 6%), and shock (31% vs 8%); AKI patients also had longer hospital stays (17.8±19.8 days vs 13.3±31.8 days) (all P<.001). 56% of AKI episodes were transient, 28% persistent, and 16% resulted in dialysis. Mortality was 80% among those without renal recovery, higher compared to partial (40%) or complete recovery (15%), or AKI-free patients (7%; P<.0001). Conclusions: 30-day mortality is 10-fold higher among infected hospitalized cirrhotic patients with irreversible AKI than among those without AKI. The consensus definition of AKI accurately predicts 30-day mortality, length of hospital stay, and organ failure. PMID:23999172
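The consensus definition is a simple pair of thresholds and translates directly into code. The sketch below is an illustration of the stated criteria only (creatinine values in mg/dL; function and argument names are invented), not clinical software:

```python
def is_aki(creat_now, baseline_6mo, creat_48h_ago):
    """Consensus definition of cirrhosis-associated AKI: serum creatinine
    increased by >50% from the stable baseline within 6 months, OR by
    >=0.3 mg/dL within 48 hours. All values in mg/dL."""
    rose_50_pct = creat_now > 1.5 * baseline_6mo
    rose_abs = (creat_now - creat_48h_ago) >= 0.3
    return rose_50_pct or rose_abs
```

The two arms matter clinically: a patient at 1.2 mg/dL is flagged if they were at 0.8 mg/dL two days earlier (absolute arm) even though they have not exceeded 150% of a 1.0 mg/dL baseline.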
Numerical predictions of hemodynamics following surgeries in cerebral aneurysms
NASA Astrophysics Data System (ADS)
Rayz, Vitaliy; Lawton, Michael; Boussel, Loic; Leach, Joseph; Acevedo, Gabriel; Halbach, Van; Saloner, David
2014-11-01
Large cerebral aneurysms present a danger of rupture or brain compression. In some cases, clinicians may attempt to change the pathological hemodynamics in order to inhibit disease progression. This can be achieved by changing the vascular geometry with an open surgery or by deploying a stent-like flow diverter device. Patient-specific CFD models can help evaluate treatment options by predicting flow regions that are likely to become occupied by thrombus (clot) following the procedure. In this study, alternative flow scenarios were modeled for several patients who underwent surgical treatment. Patient-specific geometries and flow boundary conditions were obtained from magnetic resonance angiography and velocimetry data. The Navier-Stokes equations were solved with the finite-volume solver Fluent. A porous media approach was used to model flow-diverter devices. The advection-diffusion equation was solved in order to simulate contrast agent transport, and the results were used to evaluate flow residence time changes. Thrombus layering was predicted in regions characterized by reduced velocities and shear stresses as well as increased flow residence time. The simulations indicated surgical options that could result in occlusion of vital arteries with thrombus. Numerical results were compared to experimental and clinical MRI data. The results demonstrate that image-based CFD models may help improve the outcome of surgeries in cerebral aneurysms. The authors acknowledge support from grant R01HL115267.
Numerical prediction of rail roughness growth on tangent railway tracks
NASA Astrophysics Data System (ADS)
Nielsen, J. C. O.
2003-10-01
Growth of railhead roughness (irregularities, waviness) is predicted through numerical simulation of dynamic train-track interaction on tangent track. The hypothesis is that wear is caused by longitudinal slip due to driven wheelsets, and that wear is proportional to the longitudinal frictional power in the contact patch. Starting from an initial roughness spectrum corresponding to a new or a recently ground rail, an initial roughness profile is determined. Wheel-rail contact forces, creepages and wear for one wheelset passage are calculated as functions of location along a discretely supported track model. The calculated wear is scaled by a chosen number of wheelset passages and then added to the initial roughness profile. Field observations of rail corrugation on a Dutch track are used to validate the simulation model. Results from the simulations predict a large roughness growth rate for wavelengths around 30-40 mm. The large growth in this wavelength interval is explained by a low track receptance near the sleepers around the pinned-pinned resonance frequency, in combination with a large number of driven passenger wheelset passages at uniform speed. The agreement between simulations and field measurements is good with respect to the dominant roughness wavelength and annual wear rate. Remedies for reducing roughness growth are discussed.
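The wear feedback loop described above can be sketched in a few lines. The wear coefficient, frictional-power distribution, and roughness profile below are illustrative assumptions, not values from the paper; a real simulation computes the frictional power from the train-track dynamics at each passage.

```python
import numpy as np

def update_roughness(z, friction_power, k_wear, n_passages):
    """One pass of the feedback loop: wear depth at each rail location is
    taken proportional to the longitudinal frictional power dissipated
    there, scaled by the number of wheelset passages, and removed from
    the profile."""
    wear = k_wear * friction_power * n_passages   # material removed [m]
    return z - wear

# Illustrative 1 m rail section with a 35 mm wavelength roughness component
x = np.linspace(0.0, 1.0, 1000)                   # position along rail [m]
z0 = 1.0e-6 * np.sin(2.0 * np.pi * x / 0.035)     # initial roughness [m]

# Hypothetical nonnegative frictional-power distribution in phase with
# the roughness (the instability mechanism under study)
P_f = 5.0 * (1.0 + np.sin(2.0 * np.pi * x / 0.035))  # [W], illustrative

z1 = update_roughness(z0, P_f, k_wear=1.0e-10, n_passages=1.0e5)
```

In the actual procedure this update is iterated, with the contact forces and creepages recomputed on the worn profile at each step.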
Numerical prediction on the dispersion of pollutant particles
NASA Astrophysics Data System (ADS)
Osman, Kahar; Ali, Zairi; Ubaidullah, S.; Zahid, M. N.
2012-06-01
The increasing concern over air pollution has led people around the world to seek more efficient ways to control the problem. Air dispersion modeling is proven to be one of the alternatives that provide economical ways to control the growing threat of air pollution. The objective of this research is to develop a practical numerical algorithm to predict the dispersion of pollutant particles around a specific source of emission. The source selected was a rubber wood manufacturing plant. A Gaussian plume model was used as the air dispersion model due to its simplicity and generic applicability. Results of this study show that ground-level concentrations of the pollutant particles reached approximately 90 μg/m3, in agreement with results from other software. This value surpasses the limit of 50 μg/m3 stipulated by the National Ambient Air Quality Standard (NAAQS) and the Recommended Malaysian Guidelines (RMG) set by the Environment Department of Malaysia. The results also show higher pollutant particle concentrations during dry seasons than during rainy seasons. In general, the developed algorithm is proven able to predict particle distribution around an emission source with acceptable accuracy.
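The Gaussian plume calculation underlying such an algorithm can be sketched as follows. The dispersion-coefficient fits are rough Briggs-type rural curves for neutral (Pasquill class D) conditions, and all source parameters are illustrative, not those of the rubber wood plant study.

```python
import numpy as np

def gaussian_plume_ground(Q, u, x, y, H):
    """Ground-level (z = 0) concentration [g/m^3] downwind of a continuous
    point source, from the Gaussian plume equation with total reflection
    at the ground.  Q: emission rate [g/s], u: wind speed [m/s],
    x: downwind distance [m], y: crosswind offset [m], H: effective
    release height [m]."""
    # Briggs-type rural fits for neutral stability; an operational study
    # would select dispersion curves by stability class.
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    coeff = Q / (np.pi * u * sigma_y * sigma_z)  # factor 2 from reflection absorbed here
    return (coeff
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

# Illustrative source: 10 g/s release at 20 m effective height, 3 m/s wind
c_centerline = gaussian_plume_ground(Q=10.0, u=3.0, x=500.0, y=0.0, H=20.0)
```

As expected, the concentration falls off with crosswind offset and with effective release height, which is what makes the model useful for quick ground-level screening.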
How accurately can we predict the melting points of drug-like compounds?
Tetko, Igor V; Sushko, Yurii; Novotarskyi, Sergii; Patiny, Luc; Kondratov, Ivan; Petrenko, Alexander E; Charochkina, Larisa; Asiri, Abdullah M
2014-12-22
This article contributes a highly accurate model for predicting the melting points (MPs) of medicinal chemistry compounds. The model was developed using the largest published data set, comprising more than 47k compounds. The distributions of MPs in drug-like and drug lead sets showed that >90% of molecules melt within [50,250]°C. The final model achieved an RMSE of less than 33 °C for molecules from this temperature interval, which is the most important for medicinal chemistry users. This performance was achieved using a consensus model that was significantly more accurate than the individual models. We found that compounds with reactive and unstable groups were overrepresented among the outlying compounds. These compounds could decompose during storage or measurement, thus introducing experimental errors. While filtering the data by removing outliers generally increased the accuracy of the individual models, it did not significantly affect the results of the consensus models. None of the three distance-to-model metrics analyzed allowed us to flag molecules whose MP values fell outside the applicability domain of the model. We believe that this negative result and the public availability of the data from this article will encourage future studies to develop better approaches to defining the applicability domain of models. The final model, MP data, and identified reactive groups are available online at http://ochem.eu/article/55638. PMID:25489863
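The benefit of the consensus step can be illustrated on synthetic data: by convexity of the L2 norm, averaging the predictions of several models can never give a worse RMSE than the models' average RMSE, and with roughly independent errors it does substantially better. The data and error levels below are invented for illustration and are not the article's data set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" melting points and three individual models whose
# predictions carry independent errors (all values illustrative)
y_true = rng.uniform(50.0, 250.0, size=1000)                    # deg C
models = [y_true + rng.normal(0.0, 40.0, size=1000) for _ in range(3)]

def rmse(y_pred, y):
    return float(np.sqrt(np.mean((y_pred - y) ** 2)))

individual = [rmse(m, y_true) for m in models]
consensus = rmse(np.mean(models, axis=0), y_true)  # average the predictions
```

With independent errors the consensus RMSE is roughly 1/sqrt(3) of the individual RMSE, mirroring why the article's consensus model outperforms its members.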
Imbalanced land surface water budgets in a numerical weather prediction system
NASA Astrophysics Data System (ADS)
Kauffeldt, Anna; Halldin, Sven; Pappenberger, Florian; Wetterhall, Fredrik; Xu, Chong-Yu; Cloke, Hannah L.
2015-06-01
There has been a significant increase in the skill and resolution of numerical weather prediction (NWP) models in recent decades, extending the time scales of useful weather predictions. The land surface models (LSMs) of NWP systems are often employed in hydrological applications, which raises the question of how hydrologically representative LSMs really are. In this paper, precipitation (P), evaporation (E), and runoff (R) from the European Centre for Medium-Range Weather Forecasts (ECMWF) global models were evaluated against observational products. The forecasts differ substantially from observed data for key hydrological variables. In addition, imbalanced surface water budgets, mostly caused by data assimilation, were found on both global (P-E) and basin scales (P-E-R), with the latter being more important. Modeled surface fluxes should be used with care in hydrological applications, and further improvement of LSMs in terms of process descriptions, resolution, and estimation of uncertainties is needed to accurately describe land surface water budgets.
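The basin-scale check described above amounts to a simple residual diagnostic on the long-term water budget. A minimal sketch, with illustrative annual totals rather than values from the paper:

```python
def water_budget_residual(P, E, R, dS=0.0):
    """Long-term basin water budget residual, in the same units as the
    inputs (e.g. mm/yr).  For a closed budget, P - E - R - dS should be
    near zero; a persistent nonzero residual of the kind reported for
    NWP land surface models typically points to data assimilation
    increments adding or removing water."""
    return P - E - R - dS

# Illustrative basin-average annual totals (mm/yr, not real model output)
residual = water_budget_residual(P=850.0, E=520.0, R=280.0)
ratio = residual / 850.0   # imbalance as a fraction of precipitation
```

Expressing the residual as a fraction of precipitation makes imbalances comparable across basins of very different wetness.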
Accurate prediction of V1 location from cortical folds in a surface coordinate system
Hinds, Oliver P.; Rajendran, Niranjini; Polimeni, Jonathan R.; Augustinack, Jean C.; Wiggins, Graham; Wald, Lawrence L.; Rosas, H. Diana; Potthast, Andreas; Schwartz, Eric L.; Fischl, Bruce
2008-01-01
Previous studies demonstrated substantial variability of the location of primary visual cortex (V1) in stereotaxic coordinates when linear volume-based registration is used to match volumetric image intensities (Amunts et al., 2000). However, other qualitative reports of V1 location (Smith, 1904; Stensaas et al., 1974; Rademacher et al., 1993) suggested a consistent relationship between V1 and the surrounding cortical folds. Here, the relationship between folds and the location of V1 is quantified using surface-based analysis to generate a probabilistic atlas of human V1. High-resolution (about 200 μm) magnetic resonance imaging (MRI) at 7 T of ex vivo human cerebral hemispheres allowed identification of the full area via the stria of Gennari: a myeloarchitectonic feature specific to V1. Separate, whole-brain scans were acquired using MRI at 1.5 T to allow segmentation and mesh reconstruction of the cortical gray matter. For each individual, V1 was manually identified in the high-resolution volume and projected onto the cortical surface. Surface-based intersubject registration (Fischl et al., 1999b) was performed to align the primary cortical folds of individual hemispheres to those of a reference template representing the average folding pattern. An atlas of V1 location was constructed by computing the probability of V1 inclusion for each cortical location in the template space. This probabilistic atlas of V1 exhibits low prediction error compared to previous V1 probabilistic atlases built in volumetric coordinates. The increased predictability observed under surface-based registration suggests that the location of V1 is more accurately predicted by the cortical folds than by the shape of the brain embedded in the volume of the skull. In addition, the high quality of this atlas provides direct evidence that surface-based intersubject registration methods are superior to volume-based methods at superimposing functional areas of cortex, and therefore are better
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present, the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod-with-insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of the metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to those of the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions, respectively. The simplified model could be produced in <1 h compared to >3 h for the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may provide a simulation approach with improved clinical utility; however, further validity testing across a range of therapeutic footwear types is required. PMID:26708965
Unilateral Prostate Cancer Cannot be Accurately Predicted in Low-Risk Patients
Isbarn, Hendrik; Karakiewicz, Pierre I.; Vogel, Susanne
2010-07-01
Purpose: Hemiablative therapy (HAT) is increasing in popularity for treatment of patients with low-risk prostate cancer (PCa). The validity of this therapeutic modality, which exclusively treats PCa within a single prostate lobe, rests on accurate staging. We tested the accuracy of unilaterally unremarkable biopsy findings in cases of low-risk PCa patients who are potential candidates for HAT. Methods and Materials: The study population consisted of 243 men with clinical stage ≤T2a, a prostate-specific antigen (PSA) concentration of <10 ng/ml, a biopsy-proven Gleason sum of ≤6, and a maximum of 2 ipsilateral positive biopsy results out of 10 or more cores. All men underwent a radical prostatectomy, and pathology stage was used as the gold standard. Univariable and multivariable logistic regression models were tested for significant predictors of unilateral, organ-confined PCa. These predictors consisted of PSA, %fPSA (defined as the quotient of free [uncomplexed] PSA divided by the total PSA), clinical stage (T2a vs. T1c), gland volume, and number of positive biopsy cores (2 vs. 1). Results: Despite unilateral stage at biopsy, bilateral or even non-organ-confined PCa was reported in 64% of all patients. In multivariable analyses, no variable could clearly and independently predict the presence of unilateral PCa. This was reflected in an overall accuracy of 58% (95% confidence interval, 50.6-65.8%). Conclusions: Two-thirds of patients with unilateral low-risk PCa, confirmed by clinical stage and biopsy findings, have bilateral or non-organ-confined PCa at radical prostatectomy. This alarming finding questions the safety and validity of HAT.
A numerical procedure for predicting creep and delayed failures in laminated composites
NASA Technical Reports Server (NTRS)
Dillard, D. A.; Brinson, H. F.
1983-01-01
A numerical procedure is described for predicting the viscoelastic response of general laminates. A nonlinear compliance model is used to predict the creep response of the individual laminae. A biaxial delayed-failure model predicts ply failure. The numerical procedure, based on lamination theory, steps incrementally through time to predict creep compliance and delayed failures in laminates. Numerical stability problems and experimental verification are discussed. Although the program has been quite successful in predicting creep of general laminates, the assumptions associated with lamination theory have resulted in erroneous bounds on the predicted material response. Delayed-failure predictions have been conservative. Several improvements are suggested to increase the accuracy of the procedure.
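The incremental time-marching idea can be sketched with a Findley-type power-law compliance, one common choice for nonlinear viscoelastic laminae. The coefficients below are illustrative, and the paper's actual compliance model is not reproduced here.

```python
import numpy as np

def creep_strain(stress, t, D0=1.0e-6, D1=2.0e-8, n=0.25):
    """Creep strain under constant stress for a Findley-type power-law
    compliance D(t) = D0 + D1 * t**n (coefficients illustrative; units
    1/MPa and hours).  Lamination-theory codes of the kind described
    apply such a law ply by ply while stepping incrementally through
    time."""
    return stress * (D0 + D1 * t**n)

# Incremental march through time for a single ply at 40 MPa
times = np.linspace(0.0, 1000.0, 101)                     # hours
strain = np.array([creep_strain(40.0, t) for t in times])

# Creep compliance at each step (strain / stress) grows monotonically
compliance = strain / 40.0
```

In the full procedure the ply stresses are redistributed at each time step as compliances evolve, which is where the lamination-theory assumptions enter.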
NASA Astrophysics Data System (ADS)
Wagenbrenner, Natalie S.; Forthofer, Jason M.; Lamb, Brian K.; Shannon, Kyle S.; Butler, Bret W.
2016-04-01
Wind predictions in complex terrain are important for a number of applications. Dynamic downscaling of numerical weather prediction (NWP) model winds with a high-resolution wind model is one way to obtain a wind forecast that accounts for local terrain effects, such as wind speed-up over ridges, flow channeling in valleys, flow separation around terrain obstacles, and flows induced by local surface heating and cooling. In this paper we investigate the ability of a mass-consistent wind model for downscaling near-surface wind predictions from four NWP models in complex terrain. Model predictions are compared with surface observations from a tall, isolated mountain. Downscaling improved near-surface wind forecasts under high-wind (near-neutral atmospheric stability) conditions. Results were mixed during upslope and downslope (non-neutral atmospheric stability) flow periods, although wind direction predictions generally improved with downscaling. This work constitutes evaluation of a diagnostic wind model at unprecedented high spatial resolution in terrain with topographical ruggedness approaching that of typical landscapes in the western US susceptible to wildland fire.
Numerical simulation and prediction of coastal ocean circulation
Chen, P.
1992-01-01
Numerical simulation and prediction of coastal ocean circulation have been conducted in three cases. 1. A process-oriented modeling study is conducted to study the interaction of a western boundary current (WBC) with coastal water, and its responses to upstream topographic irregularities. It is hypothesized that the interaction of propagating WBC frontal waves and topographic Rossby waves is responsible for upstream variability. 2. A simulation of meanders and eddies in the Norwegian Coastal Current (NCC) for February and March of 1988 is conducted with a newly developed nested dynamic interactive model. The model employs a coarse-grid, large domain to account for non-local forcing and a fine-grid nested domain to resolve meanders and eddies. The model is forced by wind stresses, heat fluxes and atmospheric pressure corresponding to February/March of 1988, and accounts for river/fjord discharges, open ocean inflow and outflow, and M2 tides. The simulation reproduced fairly well the observed circulation, tides, and salinity features in the North Sea, Norwegian Trench and NCC region in the large domain, and fairly realistic meanders and eddies in the NCC in the nested region. 3. A methodology for practical coastal ocean hindcast/forecast is developed, taking advantage of the disparate time scales of the various forcings and considering wind to be the dominant factor affecting density fluctuation on time scales of 1 to 10 days. The density field obtained from a prognostic simulation is analyzed by the empirical orthogonal function (EOF) method and correlated with the wind; this information is then used to drive a circulation model which excludes the density calculation. The method is applied to hindcast the circulation in the New York Bight for the spring and summer seasons of 1988. The hindcast fields compare favorably with the results obtained from the prognostic circulation model.
Numerical Prediction of Laminar Instability Noise for NACA 0012 Aerofoil
NASA Astrophysics Data System (ADS)
De Gennaro, Michele; Hueppe, Andreas; Kuehnelt, Helmut; Kaltenbacher, Manfred
2011-09-01
Aerofoil self-generated noise is recognized to be of fundamental importance in applied aeroacoustics, and the use of computational methods to assess the acoustic behaviour of airframe components engages an ever larger community of engineers and scientists. Several noise generation mechanisms exist, mainly related to the physical development of turbulence in the boundary layer. They can be classified into three main categories: turbulent boundary layer-trailing edge (TBL-TE) noise, laminar boundary layer-vortex shedding (LBL-VS) noise, and separation-stall (S-S) noise. TBL-TE noise is mainly related to the noise generated by turbulent eddies which develop in the boundary layer, and usually exhibits a broadband spectrum. LBL-VS noise is related to laminar instabilities that can occur within the boundary layer, which are responsible for a very late transition and generate a typical peaked tonal noise, while S-S noise mainly results from the development of large vortices after the separation point. In this paper we propose a numerical analysis targeted at simulating the LBL-VS noise mechanism on a NACA 0012 aerofoil, tested at a Reynolds number of 1.1 million and a Mach number of 0.2. The aerodynamic simulation is performed with a 2D transient RANS approach using the k-ω transitional turbulence model, while the acoustic computations are performed with the Ffowcs Williams-Hawkings (FW-H) acoustic analogy and with a Finite Element (FE) approach solving Lighthill's wave equation. Computed noise spectra are compared with experimental data published by NASA, showing good agreement for both the peak location and the predicted noise level.
Numerical Weather Predictions Evaluation Using Spatial Verification Methods
NASA Astrophysics Data System (ADS)
Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.
2014-12-01
During recent years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To address those limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - the Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programmes "Competitiveness and Entrepreneurship" and "Regions in Transition" (OPC II, NSRF 2007-2013).
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Johnson, Wayne; vanDam, C. P.; Chao, David D.; Cortes, Regina; Yee, Karen
1999-01-01
Accurate, reliable and robust numerical predictions of wind turbine rotor power remain a challenge to the wind energy industry. The literature reports various methods that compare predictions to experiments. The methods vary from Blade Element Momentum (BEM) theory and Vortex Lattice (VL) methods to variants of Reynolds-averaged Navier-Stokes (RaNS). The BEM and VL methods consistently show discrepancies in predicting rotor power at higher wind speeds, mainly due to inadequacies of inboard stall and stall-delay models. The RaNS methodologies show promise in predicting blade stall. However, inaccurate rotor vortex wake convection, boundary layer turbulence modeling and grid resolution have limited their accuracy. In addition, the inherently unsteady stalled flow conditions become computationally expensive for even the best-endowed research labs. Although numerical power predictions have been compared to experiment, the availability of good wind turbine data sufficient for code validation remains limited. This paper presents experimental data extracted from the IEA Annex XIV download site for the NREL Combined Experiment phase II and phase IV rotor. In addition, the comparisons will show data that has been further reduced into steady-wind and zero-yaw conditions suitable for comparison to "steady wind" rotor power predictions. In summary, the paper will present and discuss the capabilities and limitations of the three numerical methods and make available a database of experimental data suitable to help other numerical-methods practitioners validate their own work.
Margot Gerritsen
2008-10-31
Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. An understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline-based solver for gas-injection processes that is computationally very attractive: compared to the traditional Eulerian solvers in use by industry, it computes solutions orders of magnitude faster with comparable accuracy, provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework, allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall-clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We introduced several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational effort where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to the pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids
NASA Astrophysics Data System (ADS)
Rong, Y. M.; Chang, Y.; Huang, Y.; Zhang, G. J.; Shao, X. Y.
2015-12-01
There is little published research on the prediction of bead geometry for laser brazing with a crimping butt joint. This paper addresses accurate prediction of the bead profile by developing a generalized regression neural network (GRNN) algorithm. First, the GRNN model was developed and trained so as to reduce the prediction error that may be influenced by the sample size. The prediction accuracy was then demonstrated by comparison with other published results and with a back-propagation artificial neural network (BPNN) algorithm. Finally, the reliability and stability of the GRNN model were discussed in terms of average relative error (ARE), mean square error (MSE) and root mean square error (RMSE); the maximum ARE and MSE were 6.94% and 0.0303, clearly less than those (14.28% and 0.0832) obtained with BPNN. The prediction accuracy was thus improved by at least a factor of 2, and the stability was also much increased.
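A GRNN is essentially Nadaraya-Watson kernel regression with a single smoothing parameter, which is why it trains quickly on small samples. A minimal sketch on toy data (not the paper's bead-geometry data set):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized regression neural network: each prediction is a
    Gaussian-kernel-weighted average of the training targets; sigma is
    the single smoothing parameter that GRNN training tunes."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in np.atleast_2d(np.asarray(X_query, dtype=float)):
        d2 = np.sum((X_train - x) ** 2, axis=1)        # pattern layer: squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian kernel weights
        preds.append(np.sum(w * y_train) / np.sum(w))  # summation / output layer
    return np.array(preds)

# Toy data: one process parameter mapped to one bead-profile response
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 1.0, 4.0, 9.0]
pred = grnn_predict(X, y, [[1.0], [2.0]], sigma=0.3)
```

With a small sigma the network nearly interpolates the training points; tuning sigma (e.g. by cross-validation) trades this off against smoothness.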
On the assimilation of satellite sounder data in cloudy skies in numerical weather prediction models
NASA Astrophysics Data System (ADS)
Li, Jun; Wang, Pei; Han, Hyojin; Li, Jinlong; Zheng, Jing
2016-04-01
Satellite measurements are an important source of global observations in support of numerical weather prediction (NWP). The assimilation of satellite radiances under clear skies has greatly improved NWP forecast scores. However, the application of radiances in cloudy skies remains a significant challenge. In order to better assimilate radiances in cloudy skies, it is very important to detect clear fields-of-view (FOVs) accurately and to assimilate cloudy radiances appropriately. Research progress on both clear-FOV detection methodologies and cloudy-radiance assimilation techniques is reviewed in this paper. An overview of the approaches being implemented at operational centers and studied by the satellite data assimilation research community is presented. Challenges and future directions for satellite sounder radiance assimilation in cloudy skies in NWP models are also discussed.
Numerical Prediction Of Elastic Springback In An Automotive Complex Structural Part
NASA Astrophysics Data System (ADS)
Fratini, Livan; Ingarao, Giuseppe; Micari, Fabrizio; Lo Franco, Andrea
2007-05-01
The routing and production of 3D complex parts for automotive applications are characterized by springback phenomena affecting the final geometry of the components after both the stamping and the trimming operations. FE analyses have to assure effectiveness and consistency in order to be utilized as a design tool coupled to proper compensating techniques, allowing the desired geometry to be obtained at the end of the production sequence. In the present paper the full routing of a DP 600 steel automotive structural part is considered, and the springback phenomena occurring after forming and trimming are investigated through FE analyses utilizing two different commercial codes. Although finite element analysis is successful in simulating industrial sheet forming operations, accurate and reliable numerical prediction of springback has not been widely demonstrated. In this paper the influence of the main numerical parameters has been considered, i.e., the type of shell element utilized and the number of integration points through the thickness, with the aim of improving the effectiveness and reliability of the numerical results. The obtained results have been compared with experimental evidence derived from CMM acquisitions.
Braun, Tatjana; Koehler Leman, Julia; Lange, Oliver F.
2015-01-01
Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in the form of intra-protein residue-residue contacts. Following this seminal result, much effort has been put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained from contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top-ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs. PMID:26713437
Accurate microRNA target prediction correlates with protein repression levels
Maragkakis, Manolis; Alexiou, Panagiotis; Papadopoulos, Giorgio L; Reczko, Martin; Dalamagas, Theodore; Giannopoulos, George; Goumas, George; Koukis, Evangelos; Kourtis, Kornilios; Simossis, Victor A; Sethupathy, Praveen; Vergoulis, Thanasis; Koziris, Nectarios; Sellis, Timos; Tsanakas, Panagiotis; Hatzigeorgiou, Artemis G
2009-01-01
Background MicroRNAs are small endogenously expressed non-coding RNA molecules that regulate target gene expression through translation repression or messenger RNA degradation. MicroRNA regulation is performed through pairing of the microRNA to sites in the messenger RNA of protein-coding genes. Since experimental identification of miRNA target genes poses difficulties, computational microRNA target prediction is one of the key means of deciphering the role of microRNAs in development and disease. Results DIANA-microT 3.0 is an algorithm for microRNA target prediction which is based on several parameters calculated individually for each microRNA, and combines conserved and non-conserved microRNA recognition elements into a final prediction score which correlates with protein production fold change. Specifically, for each predicted interaction the program reports a signal-to-noise ratio and a precision score which can be used as an indication of the false positive rate of the prediction. Conclusion Recently, several computational target prediction programs were benchmarked based on a set of microRNA target genes identified by the pSILAC method. In this assessment DIANA-microT 3.0 was found to achieve the highest precision among the most widely used microRNA target prediction programs, reaching approximately 66%. The DIANA-microT 3.0 prediction results are available online in a user-friendly web server. PMID:19765283
Draxl, C.; Churchfield, M.; Mirocha, J.; Lee, S.; Lundquist, J.; Michalakes, J.; Moriarty, P.; Purkayastha, A.; Sprague, M.; Vanderwende, B.
2014-06-01
Wind plant aerodynamics are influenced by a combination of microscale and mesoscale phenomena. Incorporating mesoscale atmospheric forcing (e.g., diurnal cycles and frontal passages) into wind plant simulations can lead to a more accurate representation of microscale flows, aerodynamics, and wind turbine/plant performance. Our goal is to couple a numerical weather prediction model that can represent mesoscale flow [specifically the Weather Research and Forecasting model] with a microscale large-eddy simulation (LES) model (OpenFOAM) that can predict microscale turbulence and wake losses.
A machine learning approach to the accurate prediction of multi-leaf collimator positional errors
NASA Astrophysics Data System (ADS)
Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; Park, Jong In; Choi, Yunseok; Ye, Sung-Joon
2016-03-01
Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volumetric histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be delivered.
A machine learning approach to the accurate prediction of multi-leaf collimator positional errors.
Carlson, Joel N K; Park, Jong Min; Park, So-Yeon; Park, Jong In; Choi, Yunseok; Ye, Sung-Joon
2016-03-21
Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volumetric histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be delivered.
Grassi, Lorenzo; Väänänen, Sami P; Ristinmaa, Matti; Jurvelin, Jukka S; Isaksson, Hanna
2016-03-21
Subject-specific finite element models have been proposed as a tool to improve fracture risk assessment in individuals. A thorough laboratory validation against experimental data is required before introducing such models in clinical practice. Results from digital image correlation can provide full-field strain distribution over the specimen surface during in vitro tests, instead of at a few pre-defined locations as with strain gauges. The aim of this study was to validate finite element models of human femora against experimental data from three cadaver femora, both in terms of femoral strength and of the full-field strain distribution collected with digital image correlation. The results showed a high accuracy between predicted and measured principal strains (R^2 = 0.93, RMSE = 10%, 1600 validated data points per specimen). Femoral strength was predicted using a rate-dependent material model with specific strain limit values for yield and failure. This provided an accurate prediction (<2% error) for two out of three specimens. In the third specimen, an accidental change in the boundary conditions occurred during the experiment, which compromised the femoral strength validation. The achieved strain accuracy was comparable to that obtained in state-of-the-art studies which validated their prediction accuracy against 10-16 strain gauge measurements. Fracture force was accurately predicted, with the predicted failure location being very close to the experimental fracture rim. Despite the low sample size and the single loading condition tested, the present combined numerical-experimental method showed that finite element models can predict femoral strength by providing a thorough description of the local bone mechanical response. PMID:26944687
Zimmermann, Olav; Hansmann, Ulrich H E
2008-09-01
Constraint generation for 3D structure prediction and structure-based database searches benefit from fine-grained prediction of local structure. In this work, we present LOCUSTRA, a novel scheme for the multiclass prediction of local structure that uses two layers of support vector machines (SVMs). Using a 16-letter structural alphabet from de Brevern et al. (Proteins: Struct., Funct., Bioinf. 2000, 41, 271-287), we assess its prediction ability for an independent test set of 222 proteins and compare our method to three-class secondary structure prediction and direct prediction of dihedral angles. The prediction accuracy is Q16=61.0% for the 16 classes of the structural alphabet and Q3=79.2% for a simple mapping to the three secondary structure classes helix, sheet, and coil. We achieve a mean phi (psi) error of 24.74 degrees (38.35 degrees) and a median RMSDA (root-mean-square deviation of the (dihedral) angles) per protein chain of 52.1 degrees. These results compare favorably with related approaches. The LOCUSTRA web server is freely available to researchers at http://www.fz-juelich.de/nic/cbb/service/service.php. PMID:18763837
A numerical prediction of the precipitation and hydrology of California
Kim, J.; Miller, N.; Soong, S.T.; Rhea, O.
1994-08-01
A five-day simulation of precipitation over the southwestern United States using a RNWHPS is presented. The MAS model accurately simulates the observed local precipitation, even though extreme values are somewhat underestimated. The precipitation at individual watersheds clearly indicates that the timing of local precipitation depends on the location of each watershed and the direction of the storm path.
Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory
Li, Jiaming; Luo, Suhuai; Jin, Jesse S.
2010-01-01
Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent. PMID:22163414
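The abstract does not give the paper's actual mass assignments, but the core of Dempster-Shafer fusion is Dempster's rule of combination. A minimal sketch for two sensors over the frame {cloud, clear} follows; the mass values and variable names are hypothetical, not taken from the paper.

```python
# Dempster's rule of combination for two mass functions whose focal
# elements are frozensets of hypotheses. Mass falling on the empty
# intersection is "conflict" and is normalized away.

def combine(m1, m2):
    """Fuse two mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    k = 1.0 - conflict  # Dempster normalization factor
    return {h: m / k for h, m in combined.items()}

CLOUD, CLEAR = frozenset({"cloud"}), frozenset({"clear"})
THETA = CLOUD | CLEAR  # the full frame, i.e. "unknown"

# Hypothetical beliefs from two radiation sensors:
m1 = {CLOUD: 0.6, CLEAR: 0.1, THETA: 0.3}
m2 = {CLOUD: 0.5, CLEAR: 0.2, THETA: 0.3}
fused = combine(m1, m2)
```

Because both sensors lean toward "cloud", the fused mass on {cloud} exceeds either individual assignment, while the mass left on the full frame (the "unknown" rate the abstract refers to) shrinks.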
Numerical Weather Prediction Over Caucasus Region With Nested Grid Models
NASA Astrophysics Data System (ADS)
Davitashvili, Dr.; Kutaladze, Dr.; Kvatadze, Dr.
2010-09-01
territory of Georgia. Both use the default 31 vertical levels. We have studied the effect of thermal and advective-dynamic factors of the atmosphere on changes in the West Georgian climate. We have shown that non-proportional warming of the Black Sea and Colkhi lowland provokes intensive strengthening of the circulation. Some results of calculations of the interaction of airflow with the complex orography of the Caucasus, with horizontal grid-point resolutions of 15 and 5 km, are presented. Also, to study the behavior of the nested-grid method over complex terrain, we have developed a short-term regional numerical prediction model for the Caucasus region in a sigma coordinate system. The results of computations carried out with one-directional, two-directional and new combined methods are given.
Forecasting irrigation demand by assimilating satellite images and numerical weather predictions
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Medina, Hanoi; Villani, Paolo; Falanga Bolognesi, Salvatore; D'Urso, Guido; Battista Chirico, Giovanni
2016-04-01
Forecasting irrigation water demand, with small predictive uncertainty in the short-medium term, is fundamental for efficient planning of water resource allocation among multiple users and for decreasing water and energy consumption. In this study we present an innovative system for forecasting irrigation water demand, applicable at different spatial scales: from the farm level to the irrigation district level. The forecast system is centred on a crop growth model assimilating data from satellite images and numerical weather forecasts, according to a stochastic ensemble-based approach. Different sources of uncertainty affecting model predictions are represented by an ensemble of model trajectories, each generated by a possible realization of the model components (model parameters, input weather data and model state variables). The crop growth model is based on a set of simplified analytical relations, with the aim of assessing biomass, leaf area index (LAI) growth and evapotranspiration rate with a daily time step. Within the crop growth model, LAI dynamics are governed by temperature and leaf dry matter supply, according to the development stage of the crop. The model assimilates LAI data retrieved from VIS-NIR high-resolution multispectral satellite images. Numerical weather model outputs are those from the European limited area ensemble prediction system (COSMO-LEPS), which provides forecasts up to five days ahead with a spatial resolution of seven kilometres. Weather forecasts are sequentially bias-corrected based on data from ground weather stations. The forecasting system is evaluated in experimental areas of southern Italy during three irrigation seasons. The performance analysis shows very accurate irrigation water demand forecasts, which make the proposed system a valuable support for water planning and saving at the farm level as well as for water management at larger spatial scales.
Session on techniques and resources for storm-scale numerical weather prediction
NASA Technical Reports Server (NTRS)
Droegemeier, Kelvin
1993-01-01
The session on techniques and resources for storm-scale numerical weather prediction is reviewed. The recommendations of this group are broken down into three areas: modeling and prediction, data requirements in support of modeling and prediction, and data management. The current status, modeling and technological recommendations, data requirements in support of modeling and prediction, and data management are addressed.
DISPLAR: an accurate method for predicting DNA-binding sites on protein surfaces
Tjong, Harianto; Zhou, Huan-Xiang
2007-01-01
Structural and physical properties of DNA provide important constraints on the binding sites formed on surfaces of DNA-targeting proteins. Characteristics of such binding sites may form the basis for predicting DNA-binding sites from the structures of proteins alone. Such an approach has been successfully developed for predicting protein–protein interfaces. Here this approach is adapted for predicting DNA-binding sites. We used a representative set of 264 protein–DNA complexes from the Protein Data Bank to analyze characteristics and to train and test a neural network predictor of DNA-binding sites. The input to the predictor consisted of PSI-BLAST sequence profiles and solvent accessibilities of each surface residue and 14 of its closest neighboring residues. Predicted DNA-contacting residues cover 60% of actual DNA-contacting residues and have an accuracy of 76%. This method significantly outperforms previous attempts at DNA-binding site prediction. Its application to the prion protein yielded a DNA-binding site that is consistent with recent NMR chemical shift perturbation data, suggesting that it can complement experimental techniques in characterizing protein–DNA interfaces. PMID:17284455
Hall, Barry G; Cardenas, Heliodoro; Barlow, Miriam
2013-01-01
In clinical settings it is often important to know not just the identity of a microorganism, but also the danger posed by that particular strain. For instance, Escherichia coli can range from being a harmless commensal to being a very dangerous enterohemorrhagic (EHEC) strain. Determining pathogenic phenotypes can be both time consuming and expensive. Here we propose a simple, rapid, and inexpensive method of predicting pathogenic phenotypes on the basis of the presence or absence of short homologous DNA segments in an isolate. Our method compares completely sequenced genomes, without the necessity of genome alignments, to identify the presence or absence of the segments and produce an automatic alignment of the binary string that describes each genome. Analysis of the segment alignment allows identification of those segments whose presence strongly predicts a phenotype. Clinical application of the method requires nothing more than PCR amplification of each of the set of predictive segments. Here we apply the method to identifying EHEC strains of E. coli and to distinguishing E. coli from Shigella. We show in silico that with as few as 8 predictive sequences, if even three of those predictive sequences are amplified, the probability of being EHEC or Shigella is >0.99. The method is thus very robust to the occasional amplification failure for spurious reasons. Experimentally, we apply the method to screen a set of 98 isolates, distinguishing E. coli from Shigella and EHEC from non-EHEC E. coli strains, and show that all isolates are correctly identified. PMID:23935901
NASA Astrophysics Data System (ADS)
Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus
2016-04-01
The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?
Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.
2016-01-01
The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81) we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing intervention. PMID:26797113
Accurate and inexpensive prediction of the color optical properties of anthocyanins in solution.
Ge, Xiaochuan; Timrov, Iurii; Binnie, Simon; Biancardi, Alessandro; Calzolari, Arrigo; Baroni, Stefano
2015-04-23
The simulation of the color optical properties of molecular dyes in liquid solution requires the calculation of time evolution of the solute absorption spectra fluctuating in the solvent at finite temperature. Time-averaged spectra can be directly evaluated by combining ab initio Car-Parrinello molecular dynamics and time-dependent density functional theory calculations. The inclusion of hybrid exchange-correlation functionals, necessary for the prediction of the correct transition frequencies, prevents one from using these techniques for the simulation of the optical properties of large realistic systems. Here we present an alternative approach for the prediction of the color of natural dyes in solution with a low computational cost. We applied this approach to representative anthocyanin dyes: the excellent agreement between the simulated and the experimental colors makes this method a straightforward and inexpensive tool for the high-throughput prediction of colors of molecules in liquid solvents. PMID:25830823
NASA Technical Reports Server (NTRS)
Schonberg, William P.; Peck, Jeffrey A.
1992-01-01
Over the last three decades, multiwall structures have been analyzed extensively, primarily through experiment, as a means of increasing the protection afforded to spacecraft structure. However, as structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative, numerical modeling of high-speed impact phenomena is often used to predict the response of a variety of structural systems under impact loading conditions. This paper presents the results of a preliminary numerical/experimental investigation of the hypervelocity impact response of multiwall structures. The results of experimental high-speed impact tests are compared against the predictions of the HULL hydrodynamic computer code. It is shown that the hypervelocity impact response characteristics of a specific system cannot be accurately predicted from a limited number of HULL code impact simulations. However, if a wide range of impact loading conditions is considered, then the ballistic limit curve of the system based on the entire series of numerical simulations can be used as a relatively accurate indication of actual system response.
Victora, Andrea; Möller, Heiko M.; Exner, Thomas E.
2014-01-01
NMR chemical shift predictions based on empirical methods are nowadays indispensable tools during resonance assignment and 3D structure calculation of proteins. However, owing to the very limited statistical data basis, such methods are still in their infancy in the field of nucleic acids, especially when non-canonical structures and nucleic acid complexes are considered. Here, we present an ab initio approach for predicting proton chemical shifts of arbitrary nucleic acid structures based on state-of-the-art fragment-based quantum chemical calculations. We tested our prediction method on a diverse set of nucleic acid structures including double-stranded DNA, hairpins, DNA/protein complexes and chemically-modified DNA. Overall, our quantum chemical calculations yield highly accurate predictions with mean absolute deviations of 0.3–0.6 ppm and correlation coefficients (r2) usually above 0.9. This will allow for identifying misassignments and validating 3D structures. Furthermore, our calculations reveal that chemical shifts of protons involved in hydrogen bonding are predicted significantly less accurately. This is in part caused by insufficient inclusion of solvation effects. However, it also points toward shortcomings of current force fields used for structure determination of nucleic acids. Our quantum chemical calculations could therefore provide input for force field optimization. PMID:25404135
Victora, Andrea; Möller, Heiko M; Exner, Thomas E
2014-12-16
NMR chemical shift predictions based on empirical methods are nowadays indispensable tools during resonance assignment and 3D structure calculation of proteins. However, owing to the very limited statistical data basis, such methods are still in their infancy in the field of nucleic acids, especially when non-canonical structures and nucleic acid complexes are considered. Here, we present an ab initio approach for predicting proton chemical shifts of arbitrary nucleic acid structures based on state-of-the-art fragment-based quantum chemical calculations. We tested our prediction method on a diverse set of nucleic acid structures including double-stranded DNA, hairpins, DNA/protein complexes and chemically-modified DNA. Overall, our quantum chemical calculations yield highly/very accurate predictions with mean absolute deviations of 0.3-0.6 ppm and correlation coefficients (r(2)) usually above 0.9. This will allow for identifying misassignments and validating 3D structures. Furthermore, our calculations reveal that chemical shifts of protons involved in hydrogen bonding are predicted significantly less accurately. This is in part caused by insufficient inclusion of solvation effects. However, it also points toward shortcomings of current force fields used for structure determination of nucleic acids. Our quantum chemical calculations could therefore provide input for force field optimization. PMID:25404135
A survey of numerical models for wind prediction
NASA Technical Reports Server (NTRS)
Schonfeld, D.
1980-01-01
A literature review is presented of the work done in the numerical modeling of wind flows. Pertinent computational techniques are described, as well as the necessary assumptions used to simplify the governing equations. A steady state model is outlined, based on the data obtained at the Deep Space Communications complex at Goldstone, California.
Simple intrinsic defects in InAs : numerical predictions.
Schultz, Peter Andrew
2013-03-01
This Report presents numerical tables summarizing properties of intrinsic defects in indium arsenide, InAs, as computed by density functional theory using semi-local density functionals, intended for use as reference tables for a defect physics package in device models.
Onken, Michael D.; Worley, Lori A.; Tuscan, Meghan D.; Harbour, J. William
2010-01-01
Uveal (ocular) melanoma is an aggressive cancer that often forms undetectable micrometastases before diagnosis of the primary tumor. These micrometastases later multiply to generate metastatic tumors that are resistant to therapy and are uniformly fatal. We have previously identified a gene expression profile derived from the primary tumor that is extremely accurate for identifying patients at high risk of metastatic disease. Development of a practical clinically feasible platform for analyzing this expression profile would benefit high-risk patients through intensified metastatic surveillance, earlier intervention for metastasis, and stratification for entry into clinical trials of adjuvant therapy. Here, we migrate the expression profile from a hybridization-based microarray platform to a robust, clinically practical, PCR-based 15-gene assay comprising 12 discriminating genes and three endogenous control genes. We analyze the technical performance of the assay in a prospective study of 609 tumor samples, including 421 samples sent from distant locations. We show that the assay can be performed accurately on fine needle aspirate biopsy samples, even when the quantity of RNA is below detectable limits. Preliminary outcome data from the prospective study affirm the prognostic accuracy of the assay. This prognostic assay provides an important addition to the armamentarium for managing patients with uveal melanoma, and it provides a proof of principle for the development of similar assays for other cancers. PMID:20413675
Luo, Longqiang; Li, Dingfang; Zhang, Wen; Tu, Shikui; Zhu, Xiaopeng; Tian, Gang
2016-01-01
Background: Piwi-interacting RNA (piRNA) is the largest class of small non-coding RNA molecules. Transposon-derived piRNA prediction can enrich the research content of small ncRNAs as well as help to further understand the generation mechanism of gametes. Methods: In this paper, we attempt to differentiate transposon-derived piRNAs from non-piRNAs based on their sequential and physicochemical features by using machine learning methods. We explore six sequence-derived features, i.e. spectrum profile, mismatch profile, subsequence profile, position-specific scoring matrix, pseudo dinucleotide composition and local structure-sequence triplet elements, and systematically evaluate their performance for transposon-derived piRNA prediction. Finally, we consider two approaches, direct combination and ensemble learning, to integrate useful features and achieve high-accuracy prediction models. Results: We construct three datasets, covering three species: Human, Mouse and Drosophila, and evaluate the performance of prediction models by 10-fold cross validation. In the computational experiments, direct combination models achieve AUC of 0.917, 0.922 and 0.992 on Human, Mouse and Drosophila, respectively; ensemble learning models achieve AUC of 0.922, 0.926 and 0.994 on the three datasets. Conclusions: Compared with other state-of-the-art methods, our methods achieve better performance. In conclusion, the proposed methods are promising for transposon-derived piRNA prediction. The source codes and datasets are available in S1 File. PMID:27074043
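The first of the six features, the spectrum profile, is simply the vector of normalized k-mer frequencies over a fixed alphabet. A minimal sketch is given below; the function name, the RNA alphabet choice, and k=2 in the usage line are our illustrative assumptions, not details taken from the paper.

```python
from itertools import product

def spectrum_profile(seq, k=3, alphabet="ACGU"):
    """Return k-mer occurrence frequencies of seq, in the fixed
    lexicographic order of all len(alphabet)**k possible k-mers."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in counts:  # skip windows with non-alphabet symbols
            counts[window] += 1
    total = max(len(seq) - k + 1, 1)  # number of windows, guard len<k
    return [counts[km] / total for km in kmers]

# A 2-mer profile has 4**2 = 16 components that sum to 1.
profile = spectrum_profile("ACGUACGU", k=2)
```

Fixed-length vectors like this (one per sequence) are what a downstream classifier such as an SVM or random forest would consume.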
Comparison of Experimental Diagnostic Signals with Numerical Predictions
NASA Astrophysics Data System (ADS)
Comer, K.; Turnbull, A. D.
1997-11-01
A new code has been written to compare experimental diagnostic signals, such as SXR, ECE, BSE, and reflectometry, with those predicted from stability code output and experimental equilibrium diagnostic signals. Comparison of expected and actual diagnostic signals will help distinguish or identify modes by the signals they produce, and will also help validate stability codes. Predicted diagnostic signals are obtained by taking the total time derivative of S, the signal amplitude, and assuming steady-state conditions so that the partial time derivative can be set to zero. Multiplying by the time step Δt results in δS = ξ̃·∇S, where δS is the predicted diagnostic signal, ξ̃ is the plasma displacement predicted by various stability codes (such as GATO or MARS), and ∇S is the gradient of the equilibrium diagnostic signal. ∇S may be obtained from an experimental equilibrium signal amplitude profile, or from a functional dependence of the signal amplitude on equilibrium temperature and density. Comparisons of predicted and actual signals from linear ideal and resistive codes show reasonable agreement with the measured signals in some cases, but there are also some significant discrepancies.
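In the one-dimensional radial case, the relation δS = ξ̃·∇S reduces to a pointwise product of the displacement with the signal gradient. The toy sketch below illustrates this with made-up profiles and a simple finite-difference gradient; none of the numbers or names come from the paper.

```python
def predicted_signal(xi, S, dr):
    """deltaS_i = xi_i * dS/dr at grid point i, using central
    differences in the interior and one-sided differences at the
    boundaries of a uniform radial grid with spacing dr."""
    n = len(S)
    grad = [0.0] * n
    for i in range(1, n - 1):
        grad[i] = (S[i + 1] - S[i - 1]) / (2.0 * dr)
    grad[0] = (S[1] - S[0]) / dr
    grad[-1] = (S[-1] - S[-2]) / dr
    return [x * g for x, g in zip(xi, grad)]

# A linear equilibrium profile S(r) = 2r with a uniform displacement
# xi = 0.5 gives a uniform predicted perturbation deltaS = 1.0.
S = [0.0, 2.0, 4.0, 6.0, 8.0]
deltaS = predicted_signal([0.5] * 5, S, dr=1.0)
```

In practice ∇S would come from a measured equilibrium profile or from a parametrization in temperature and density, as the abstract notes, but the projection step itself is this simple.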
Viewing men's faces does not lead to accurate predictions of trustworthiness
Efferson, Charles; Vogt, Sonja
2013-01-01
The evolution of cooperation requires some mechanism that reduces the risk of exploitation for cooperative individuals. Recent studies have shown that men with wide faces are anti-social, and they are perceived that way by others. This suggests that people could use facial width to identify anti-social men and thus limit the risk of exploitation. To see if people can make accurate inferences like this, we conducted a two-part experiment. First, males played a sequential social dilemma, and we took photographs of their faces. Second, raters then viewed these photographs and guessed how second movers behaved. Raters achieved significant accuracy by guessing that second movers exhibited reciprocal behaviour. Raters were not able to use the photographs to further improve accuracy. Indeed, some raters used the photographs to their detriment; they could have potentially achieved greater accuracy and earned more money by ignoring the photographs and assuming all second movers reciprocate. PMID:23308340
Accurate prediction of the ammonia probes of a variable proton-to-electron mass ratio
NASA Astrophysics Data System (ADS)
Owens, A.; Yurchenko, S. N.; Thiel, W.; Špirko, V.
2015-07-01
A comprehensive study of the mass sensitivity of the vibration-rotation-inversion transitions of 14NH3, 15NH3, 14ND3 and 15ND3 is carried out variationally using the TROVE approach. Variational calculations are robust and accurate, offering a new way to compute sensitivity coefficients. Particular attention is paid to the Δk = ±3 transitions between the accidentally coinciding rotation-inversion energy levels of the ν2 = 0+, 0-, 1+ and 1- states, and the inversion transitions in the ν4 = 1 state affected by the 'giant' l-type doubling effect. These transitions exhibit highly anomalous sensitivities, thus appearing as promising probes of a possible cosmological variation of the proton-to-electron mass ratio μ. Moreover, a simultaneous comparison of the calculated sensitivities reveals a sizeable isotopic dependence which could aid an exclusive ammonia detection.
Accurate, conformation-dependent predictions of solvent effects on protein ionization constants
Barth, P.; Alber, T.; Harbury, P. B.
2007-01-01
Predicting how aqueous solvent modulates the conformational transitions and influences the pKa values that regulate the biological functions of biomolecules remains an unsolved challenge. To address this problem, we developed FDPB_MF, a rotamer repacking method that exhaustively samples side chain conformational space and rigorously calculates multibody protein–solvent interactions. FDPB_MF predicts the effects on pKa values of various solvent exposures, large ionic strength variations, strong energetic couplings, structural reorganizations and sequence mutations. The method achieves high accuracy, with root mean square deviations within 0.3 pH unit of the experimental values measured for turkey ovomucoid third domain, hen lysozyme, Bacillus circulans xylanase, and human and Escherichia coli thioredoxins. FDPB_MF provides a faithful, quantitative assessment of electrostatic interactions in biological macromolecules. PMID:17360348
NASA Astrophysics Data System (ADS)
Shi, W. D.; Zhang, G. J.; Zhang, D. S.
2013-12-01
The objective of this paper is to evaluate the predictive capability of three turbulence models for the simulation of unsteady cavitating flows around a 2D Clark-Y hydrofoil. The three models were the standard k-ε model, a hybrid model combining a density correction model (DCM) with a filter-based model (FBM), and an improved partially-averaged Navier-Stokes (PANS) model based on the k-ε model. Using these turbulence models together with a homogeneous cavitation model, the unsteady cloud cavitation flows around the hydrofoil were numerically simulated and the time evolutions of cavity shape and lift were obtained. Comparison with tunnel experimental data shows that the hybrid and PANS models accurately capture the details of unsteady cavity shedding and the fluctuation frequency and amplitude of lift and drag. The k-ε model agrees poorly with the experimental visualizations, mainly because it over-predicts the turbulent viscosity in the rear part of the cavity, which prevents the re-entrant jet from fully reaching the leading edge. The adverse pressure gradient plays an important role in the progression of the re-entrant jet. Both the shock wave generated by the collapse of the cloud cavity and the growth of the attached sheet cavity contribute to the increase of the adverse pressure gradient.
NASA Astrophysics Data System (ADS)
de Freitas, C. R.; Schmekal, A.
2003-04-01
The study examines condensation as a microclimate process. It focuses first on finding a reliable method for measuring condensation and then on testing a numerical model for predicting condensation rates. The study site is the Glowworm Cave, a heavily used tourist cave in New Zealand. Preservation of the cave and its management as a sustainable tourist resource are high priorities. Here, as in other caves, condensation in carbon dioxide enriched air can lead to corrosion of calcite features. Specially constructed electronic sensors for measuring on-going condensation, as well as evaporation of the condensate, are tested. Measurements of condensation made over a year are used to test a physical model of condensation in the cave defined as a function of the vapour gradient between the cave air and condensation surface and a convection transfer coefficient. The results show that the amount and rate of condensation can be accurately measured and predicted. Air exchange with the outside can increase or decrease condensation rates, but the results show that the convection transfer coefficient remains constant. Temporal patterns of condensation in the cave are identified, as well as factors that influence these. Short-term and longer-term temporal variations of condensation rates are observed and patterns explained. Seasonal changes are large, with higher condensation rates occurring in the warmer months and lower rates during the cooler months. It is shown that controlling air exchange between the cave and the outside can influence condensation. This and other aspects of cave management are discussed.
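The condensation model described — a vapour gradient between cave air and the condensation surface multiplied by a convection transfer coefficient — can be sketched as follows. Function and variable names are illustrative assumptions, not the authors' code, and units depend on how the gradient and coefficient are expressed:

```python
def condensation_rate(vp_air, vp_surface, h_conv):
    """Condensation rate as the product of a convection transfer
    coefficient and the vapour gradient between cave air and the
    condensation surface. A negative result corresponds to evaporation
    of the condensate. Hedged sketch of the model form described in the
    abstract; names and units are assumptions.
    """
    return h_conv * (vp_air - vp_surface)
```

Because the study finds the transfer coefficient constant, seasonal variation in the predicted rate is driven by the vapour gradient, e.g. `condensation_rate(2.3, 2.0, 0.5)` gives a positive (condensing) rate while reversing the gradient gives a negative (evaporating) one.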
FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.
El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant
2016-01-01
A wide range of biological processes, including regulation of gene expression, protein synthesis, and the replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from the more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR that predicts protein-RNA interface residues using PSSM profiles generated from 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only online protein-RNA interface residue prediction server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein
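The uniform-random database reduction described above is conceptually simple; a minimal sketch (the function name and record representation are illustrative, not from the FastRNABindR code):

```python
import random

def sample_fasta(records, fraction=0.01, seed=0):
    """Uniformly sample a fraction of sequence records to build a
    reduced BLAST reference database (the abstract samples 1% of
    UniRef100). `records` is any iterable of (header, sequence)
    pairs; a fixed seed makes the reduced database reproducible.
    """
    rng = random.Random(seed)
    return [rec for rec in records if rng.random() < fraction]
```

The sampled records would then be written back to FASTA and indexed (e.g. with BLAST's database-building tool) for PSSM generation.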
Accurate Fault Prediction of BlueGene/P RAS Logs Via Geometric Reduction
Jones, Terry R; Kirby, Michael; Ladd, Joshua S; Dreisigmeyer, David; Thompson, Joshua
2010-01-01
The authors are building two algorithms for fault prediction using raw system-log data. This work is preliminary and has so far been applied only to a limited dataset; however, the results seem promising. The conclusions are that: (1) obtaining useful data from RAS logs is challenging; (2) extracting concentrated information improves efficiency and accuracy; and (3) function evaluation algorithms are fast and lend themselves well to scaling.
Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej
2014-01-01
We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the new approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead we use solely sequential window information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de-novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org. PMID:25204636
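The alignment-free input style described — a sequential window around the target residue plus global chain composition — can be sketched as a feature constructor. Window size and encoding here are illustrative assumptions, not the actual ASAquick inputs:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def features(seq, i, half_window=2):
    """Input features for residue i: one-hot encoding of a sequential
    window centred on i, plus the global single-residue composition of
    the chain. Positions beyond the termini encode as all zeros.
    A sketch of the input style described for ASAquick, not its code.
    """
    feat = []
    for j in range(i - half_window, i + half_window + 1):
        aa = seq[j] if 0 <= j < len(seq) else None  # pad beyond termini
        feat.extend(1.0 if aa == a else 0.0 for a in AMINO_ACIDS)
    composition = [seq.count(a) / len(seq) for a in AMINO_ACIDS]
    return feat + composition
```

The composition block is what carries the "global" signal the abstract credits for closing the accuracy gap with profile-based predictors.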
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.
Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L
2015-01-01
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction and cannot be relied upon to advance the intake of drugs early enough to be effective and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities of several modeling approaches and their robustness against noise and sensor failures. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J
2009-12-24
In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure. PMID:19950907
Accurate Prediction of Drug-Induced Liver Injury Using Stem Cell-Derived Populations
Szkolnicka, Dagmara; Farnworth, Sarah L.; Lucendo-Villarin, Baltasar; Storck, Christopher; Zhou, Wenli; Iredale, John P.; Flint, Oliver
2014-01-01
Despite major progress in the knowledge and management of human liver injury, there are millions of people suffering from chronic liver disease. Currently, the only cure for end-stage liver disease is orthotopic liver transplantation; however, this approach is severely limited by organ donation. Alternative approaches to restoring liver function have therefore been pursued, including the use of somatic and stem cell populations. Although such approaches are essential in developing scalable treatments, there is also an imperative to develop predictive human systems that more effectively study and/or prevent the onset of liver disease and decompensated organ function. We used a renewable human stem cell resource, from defined genetic backgrounds, and drove them through developmental intermediates to yield highly active, drug-inducible, and predictive human hepatocyte populations. Most importantly, stem cell-derived hepatocytes displayed equivalence to primary adult hepatocytes, following incubation with known hepatotoxins. In summary, we have developed a serum-free, scalable, and shippable cell-based model that faithfully predicts the potential for human liver injury. Such a resource has direct application in human modeling and, in the future, could play an important role in developing renewable cell-based therapies. PMID:24375539
Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens
Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn
2013-11-01
Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor controlling the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, resulting in a model with high accuracy. A large test set was used in constructing our protocol, and we went a step further with a blind test on a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, whereas the corresponding residue of the wild-type peptide pointed laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.
Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy
2014-07-01
With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communication across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins whose domains occur in more than one architectural context. Using the predicted residues to constrain domain-domain interaction, rigid-body docking provided accurate full-length protein structures with the correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. PMID:24375512
NASA Astrophysics Data System (ADS)
Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.
2016-02-01
The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally--a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process.
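The class of chemical kinetic model described — two-state bulk folding kinetics propagated over the ribosome's dwell time at each codon — can be sketched as follows. This is an illustrative simplification (in particular, the condition for when folding can begin is reduced to "the domain has fully emerged"), not the authors' exact equations:

```python
import math

def cotranslational_folding_curve(codon_rates, k_fold, k_unfold, domain_length):
    """Probability that a domain is folded as each codon is translated.

    Two-state folding kinetics (bulk rates k_fold, k_unfold) relax toward
    equilibrium during the ribosome dwell time (1/rate) at each codon;
    folding is only allowed once `domain_length` codons have been
    translated. Hedged sketch of the model class, not the paper's code.
    """
    k_total = k_fold + k_unfold
    p_eq = k_fold / k_total          # equilibrium folded probability
    p_folded = 0.0
    curve = []
    for position, rate in enumerate(codon_rates, start=1):
        dwell = 1.0 / rate           # time spent translating this codon
        if position >= domain_length:  # domain fully synthesized
            # analytic relaxation of dP/dt = k_fold*(1-P) - k_unfold*P
            p_folded = p_eq + (p_folded - p_eq) * math.exp(-k_total * dwell)
        curve.append(p_folded)
    return curve
```

Slowing the rates at selected codon positions (mimicking synonymous substitutions) lengthens the dwell times and can shift the curve from near-zero at the stop codon (post-translational folding) to near-equilibrium (co-translational folding).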
Techniques and resources for storm-scale numerical weather prediction
NASA Technical Reports Server (NTRS)
Droegemeier, Kelvin; Grell, Georg; Doyle, James; Soong, Su-Tzai; Skamarock, William; Bacon, David; Staniforth, Andrew; Crook, Andrew; Wilhelmson, Robert
1993-01-01
The topics discussed include the following: multiscale application of the 5th-generation PSU/NCAR mesoscale model, the coupling of nonhydrostatic atmospheric and hydrostatic ocean models for air-sea interaction studies; a numerical simulation of cloud formation over complex topography; adaptive grid simulations of convection; an unstructured grid, nonhydrostatic meso/cloud scale model; efficient mesoscale modeling for multiple scales using variable resolution; initialization of cloud-scale models with Doppler radar data; and making effective use of future computing architectures, networks, and visualization software.
Simple numerical method for predicting steady compressible flows
NASA Technical Reports Server (NTRS)
Vonlavante, Ernst; Nelson, N. Duane
1986-01-01
A numerical method for solving the isenthalpic form of the governing equations for compressible viscous and inviscid flows was developed. The method was based on the concept of flux vector splitting in its implicit form. The method was tested on several demanding inviscid and viscous configurations. Two different forms of the implicit operator were investigated. The time marching to steady state was accelerated by the implementation of the multigrid procedure. Its various forms very effectively increased the rate of convergence of the present scheme. High quality steady state results were obtained in most of the test cases; these required only short computational times due to the relative efficiency of the basic method.
Numerical routines for predicting ignition in pyrotechnic devices
Pierce, K.G.
1986-06-01
Two numerical models of the thermal processes leading to ignition in a pyrotechnic device have been developed. These models are based on finite difference approximations to the heat diffusion equation, with temperature-dependent thermal properties, in a single spatial coordinate. The derivation of the finite difference equations is discussed and the methods employed at boundaries and interfaces are given. The sources of the thermal-properties data are identified and how these data are used is explained. The program structure is explained and example runs of the programs are given.
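A single explicit update step of the kind of scheme described — a finite difference approximation to the 1-D heat diffusion equation with temperature-dependent thermal properties — might look like the following. This is a generic sketch (fixed-temperature ends, diffusivity lumped into one callable), not the report's programs:

```python
def step_heat_1d(T, dx, dt, alpha):
    """One explicit finite-difference step of the 1-D heat diffusion
    equation dT/dt = alpha(T) * d2T/dx2, with the thermal diffusivity
    evaluated at the local temperature. End nodes are held fixed here;
    the report's boundary and interface treatments are more detailed.
    Stability requires alpha*dt/dx**2 <= 1/2.
    """
    new = T[:]
    for i in range(1, len(T) - 1):
        a = alpha(T[i])  # temperature-dependent diffusivity at node i
        new[i] = T[i] + a * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    return new
```

Repeating this step while a boundary node is driven by the ignition stimulus, and checking interior temperatures against an ignition criterion, is the basic loop such ignition models run.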
Fang, Tao; Li, Wei; Gu, Fangwei; Li, Shuhua
2015-01-13
We extend the generalized energy-based fragmentation (GEBF) approach to molecular crystals under periodic boundary conditions (PBC), and we demonstrate the performance of the method for a variety of molecular crystals. With this approach, the lattice energy of a molecular crystal can be obtained from the energies of a series of embedded subsystems, which can be computed with existing advanced molecular quantum chemistry methods. The use of the field compensation method allows the method to take long-range electrostatic interaction of the infinite crystal environment into account and make the method almost translationally invariant. The computational cost of the present method scales linearly with the number of molecules in the unit cell. Illustrative applications demonstrate that the PBC-GEBF method with explicitly correlated quantum chemistry methods is capable of providing accurate descriptions on the lattice energies and structures for various types of molecular crystals. In addition, this approach can be employed to quantify the contributions of various intermolecular interactions to the theoretical lattice energy. Such qualitative understanding is very useful for rational design of molecular crystals. PMID:26574207
Wang, Zhiheng; Yang, Qianqian; Li, Tonghua; Cong, Peisheng
2015-01-01
The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to furthering our understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which predicts protein intrinsically disordered regions more accurately. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined on fragments located at the termini of ordered regions that adjoin disordered regions. The multi-class conservative score is then generated by sequence alignment against a known-structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as features to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions, a residue is classified as ordered or disordered according to an optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS is very competitive in terms of prediction accuracy when compared with well-established, publicly available disordered-region predictors. They also indicated that our approach is more accurate when a query has higher homology with the knowledge database. Availability: DisoMCS is available at http://cal.tongji.edu.cn/disorder/. PMID:26090958
Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.
2014-01-28
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
Liu, Lili; Zhang, Zijun; Mei, Qian; Chen, Ming
2013-01-01
Predicting the subcellular localization of proteins overcomes the major drawbacks of high-throughput localization experiments, which are costly and time-consuming. However, current subcellular localization predictors are limited in scope and accuracy; in particular, most predictors perform well on certain locations or data sets but poorly on others. Here, we present PSI, a novel high-accuracy web server for plant subcellular localization prediction. PSI derives the wisdom of multiple specialized predictors via a joint approach of group decision making and machine learning methods to give an integrated best result. The overall accuracy obtained (up to 93.4%) was higher than that of the best individual predictor (CELLO) by ~10.7%. The precision for each predictable subcellular location (more than 80%) far exceeds that of the individual predictors. It can also deal with multi-localization proteins. PSI is expected to be a powerful tool in protein location engineering as well as in plant sciences, while the strategy employed could be applied to other integrative problems. A user-friendly web server, PSI, has been developed for free access at http://bis.zju.edu.cn/psi/. PMID:24194827
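PSI's group-decision strategy integrates the outputs of several specialized predictors. A minimal sketch of one such combination rule, a weighted plurality vote, is shown below; the predictor names and weights are hypothetical, and PSI's actual strategy additionally involves machine learning:

```python
from collections import Counter

def ensemble_localize(predictions, weights=None):
    """Weighted plurality vote over individual predictors' location calls.

    predictions: dict mapping predictor name -> predicted location.
    weights: optional dict of per-predictor weights (default 1.0 each).
    """
    weights = weights or {}
    tally = Counter()
    for name, location in predictions.items():
        tally[location] += weights.get(name, 1.0)
    # Return the location with the highest total weight.
    return tally.most_common(1)[0][0]
```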
Numerical analysis and prediction of laser forming of thin plate
NASA Astrophysics Data System (ADS)
Tamsaout, Toufik; Amara, EL-Hachemi
2012-03-01
Laser forming is a technique for designing and constructing complex metallic workpieces with special shapes that are difficult to achieve with conventional techniques. The main advantage of using lasers is that the process is contactless and requires no external force; it also offers more flexibility at a lower cost. This kind of processing interests industries that rely on stamping or other costly prototyping methods, such as the aerospace, automotive, naval, and microelectronics industries. Analytical modeling of the laser forming process is often complex or impossible, since the dimensions and mechanical properties change in time and space; the numerical approach is therefore more suitable for laser forming modeling. Our numerical study is divided into two models: the first is a purely thermal treatment that determines the temperature field produced by a laser pass, and the second is a coupled thermomechanical treatment. The temperature field resulting from the first stage is used to calculate the stress field, the deformations, and the bending angle of the plate. The thermomechanical properties of the material are isotropic but temperature-dependent.
Numerical analysis and prediction of laser forming of thin plate
NASA Astrophysics Data System (ADS)
Tamsaout, Toufik; Amara, EL-Hachemi
2011-11-01
Laser forming is a technique for designing and constructing complex metallic workpieces with special shapes that are difficult to achieve with conventional techniques. The main advantage of using lasers is that the process is contactless and requires no external force; it also offers more flexibility at a lower cost. This kind of processing interests industries that rely on stamping or other costly prototyping methods, such as the aerospace, automotive, naval, and microelectronics industries. Analytical modeling of the laser forming process is often complex or impossible, since the dimensions and mechanical properties change in time and space; the numerical approach is therefore more suitable for laser forming modeling. Our numerical study is divided into two models: the first is a purely thermal treatment that determines the temperature field produced by a laser pass, and the second is a coupled thermomechanical treatment. The temperature field resulting from the first stage is used to calculate the stress field, the deformations, and the bending angle of the plate. The thermomechanical properties of the material are isotropic but temperature-dependent.
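The purely thermal first stage amounts to solving a transient heat-conduction problem for the laser pass. A minimal 1D explicit finite-difference sketch of a single time step is given below; the actual study uses a full 3D coupled model, and the fixed-boundary treatment and source term here are simplifying assumptions:

```python
def heat_step(T, alpha, dx, dt, q=None):
    """Advance a 1D temperature profile one explicit finite-difference step.

    T: list of node temperatures [K]; alpha: thermal diffusivity [m^2/s];
    dx: node spacing [m]; dt: time step [s];
    q: optional per-node heating rate [K/s] (e.g. absorbed laser power).
    Boundary nodes are held fixed (Dirichlet conditions).
    """
    r = alpha * dt / dx ** 2
    if r > 0.5:
        raise ValueError("explicit scheme unstable: need alpha*dt/dx^2 <= 0.5")
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
        if q is not None:
            new[i] += dt * q[i]
    return new
```

Repeating this step over the duration of the laser pass yields the temperature history that feeds the thermomechanical stage.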
Convertino, Victor A; Wirt, Michael D; Glenn, John F; Lein, Brian C
2016-06-01
Shock is deadly and unpredictable if it is not recognized and treated in early stages of hemorrhage. Unfortunately, measurements of standard vital signs that are displayed on current medical monitors fail to provide accurate or early indicators of shock because of physiological mechanisms that effectively compensate for blood loss. As a result of new insights provided by the latest research on the physiology of shock using human experimental models of controlled hemorrhage, it is now recognized that measurement of the body's reserve to compensate for reduced circulating blood volume is the single most important indicator for early and accurate assessment of shock. We have called this function the "compensatory reserve," which can be accurately assessed by real-time measurements of changes in the features of the arterial waveform. In this paper, the physiology underlying the development and evaluation of a new noninvasive technology that allows for real-time measurement of the compensatory reserve will be reviewed, with its clinical implications for earlier and more accurate prediction of shock. PMID:26950588
NASA Astrophysics Data System (ADS)
Rahneshin, Vahid; Chierichetti, Maria
2016-09-01
In this paper, a combined numerical and experimental method, called Extended Load Confluence Algorithm, is presented to accurately predict the dynamic response of non-periodic structures when little or no information about the applied loads is available. This approach, which falls into the category of Shape Sensing methods, inputs limited experimental information acquired from sensors to a mapping algorithm that predicts the response at unmeasured locations. The proposed algorithm consists of three major cores: an experimental core for data acquisition, a numerical core based on Finite Element Method for modeling the structure, and a mapping algorithm that improves the numerical model based on a modal approach in the frequency domain. The robustness and precision of the proposed algorithm are verified through numerical and experimental examples. The results of this paper demonstrate that without a precise knowledge of the loads acting on the structure, the dynamic behavior of the system can be predicted in an effective and precise manner after just a few iterations.
2016-01-01
Visual field (VF) data were retrospectively obtained from 491 eyes in 317 patients with open angle glaucoma who had undergone ten VF tests (Humphrey Field Analyzer, 24-2, SITA standard). First, mean of total deviation values (mTD) in the tenth VF was predicted using standard linear regression of the first five VFs (VF1-5) through to using all nine preceding VFs (VF1-9). Then an 'intraocular pressure (IOP)-integrated VF trend analysis' was carried out by simply using time multiplied by IOP as the independent term in the linear regression model. Prediction errors (absolute prediction error or root mean squared error: RMSE) for predicting mTD and also point wise TD values of the tenth VF were obtained from both approaches. The mTD absolute prediction errors associated with the IOP-integrated VF trend analysis were significantly smaller than those from the standard trend analysis when VF1-6 through to VF1-8 were used (p < 0.05). The point wise RMSEs from the IOP-integrated trend analysis were significantly smaller than those from the standard trend analysis when VF1-5 through to VF1-9 were used (p < 0.05). This was especially the case when IOP was measured more frequently. Thus a significantly more accurate prediction of VF progression is possible using a simple trend analysis that incorporates IOP measurements. PMID:27562553
Asaoka, Ryo; Fujino, Yuri; Murata, Hiroshi; Miki, Atsuya; Tanito, Masaki; Mizoue, Shiro; Mori, Kazuhiko; Suzuki, Katsuyoshi; Yamashita, Takehiro; Kashiwagi, Kenji; Shoji, Nobuyuki
2016-01-01
Visual field (VF) data were retrospectively obtained from 491 eyes in 317 patients with open angle glaucoma who had undergone ten VF tests (Humphrey Field Analyzer, 24-2, SITA standard). First, mean of total deviation values (mTD) in the tenth VF was predicted using standard linear regression of the first five VFs (VF1-5) through to using all nine preceding VFs (VF1-9). Then an ‘intraocular pressure (IOP)-integrated VF trend analysis’ was carried out by simply using time multiplied by IOP as the independent term in the linear regression model. Prediction errors (absolute prediction error or root mean squared error: RMSE) for predicting mTD and also point wise TD values of the tenth VF were obtained from both approaches. The mTD absolute prediction errors associated with the IOP-integrated VF trend analysis were significantly smaller than those from the standard trend analysis when VF1-6 through to VF1-8 were used (p < 0.05). The point wise RMSEs from the IOP-integrated trend analysis were significantly smaller than those from the standard trend analysis when VF1-5 through to VF1-9 were used (p < 0.05). This was especially the case when IOP was measured more frequently. Thus a significantly more accurate prediction of VF progression is possible using a simple trend analysis that incorporates IOP measurements. PMID:27562553
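The 'IOP-integrated' trend analysis described above simply replaces time with time multiplied by IOP as the regression term. A minimal single-predictor ordinary-least-squares sketch (the sample values in the test are made up):

```python
def fit_iop_trend(times, iops, mtd):
    """OLS fit of mTD = a + b * (time * IOP), the IOP-integrated trend term.

    times: visit times; iops: IOP at each visit; mtd: mean total deviation.
    Returns (intercept a, slope b).
    """
    x = [t * p for t, p in zip(times, iops)]
    n = len(x)
    mx, my = sum(x) / n, sum(mtd) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, mtd)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b
```

Predicting the tenth VF then evaluates a + b * (t10 * IOP) and is compared against the standard fit that uses time alone.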
NASA Astrophysics Data System (ADS)
Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.
2013-06-01
This study encompasses columnar ozone modelling over peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the entire period 2003-2008, was employed to develop models to predict columnar ozone (O3) in the study area. A combined method, multiple regression together with principal component analysis (PCA) modelling, was used to improve the prediction accuracy of columnar ozone. Separate analyses were carried out for the north east monsoon (NEM) and south west monsoon (SWM) seasons. O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, and positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM seasons. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameters as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model. It was found that an increase in columnar O3 is associated with increases in AST, SSKT, AT, and CO and with decreases in CH4, H2Ovapour, RH, and MSP. Fitting the best models for columnar O3 using the eight independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables that appeared in both regression equations were SSKT, CH4, and RH, and the principal precursor of columnar O3 in both the NEM and SWM seasons was SSKT.
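The signs of the correlations reported above come from the standard Pearson coefficient, sketched here; the study's full pipeline additionally uses PCA-based variable selection, which this sketch does not cover:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A positive value (e.g. O3 vs SSKT) means the predictor rises with columnar ozone; a negative value (e.g. O3 vs RH) means it falls.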
Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.
2008-10-20
One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER-positive patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1, and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications
Bronevetsky, G; de Supinski, B; Schulz, M
2009-02-13
Understanding the soft error vulnerability of supercomputer applications is critical as these systems use ever larger numbers of devices that have decreasing feature sizes and, thus, an increasing frequency of soft errors. As many large scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.
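Fault-injection studies of this kind typically flip single bits in a routine's inputs and observe the resulting output error. A minimal sketch of a bit-flip injector for IEEE-754 doubles follows; the paper's actual injection harness and error metrics are not specified here:

```python
import struct

def flip_bit(value, bit):
    """Flip a single bit (0..63) in the IEEE-754 double encoding of value.

    Bit 63 is the sign, bits 52-62 the exponent, bits 0-51 the mantissa,
    so the magnitude of the injected error depends strongly on which
    bit is flipped -- the effect the error profiles characterize.
    """
    (bits,) = struct.unpack('<Q', struct.pack('<d', value))
    (out,) = struct.unpack('<d', struct.pack('<Q', bits ^ (1 << bit)))
    return out
```

An experiment would flip one bit in an input matrix, run the BLAS/LAPACK routine, and compare the perturbed output against the clean run.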
Sequence features accurately predict genome-wide MeCP2 binding in vivo.
Rube, H Tomas; Lee, Wooje; Hejna, Miroslav; Chen, Huaiyang; Yasui, Dag H; Hess, John F; LaSalle, Janine M; Song, Jun S; Gong, Qizhi
2016-01-01
Methyl-CpG binding protein 2 (MeCP2) is critical for proper brain development and expressed at near-histone levels in neurons, but the mechanism of its genomic localization remains poorly understood. Using high-resolution MeCP2-binding data, we show that DNA sequence features alone can predict binding with 88% accuracy. Integrating MeCP2 binding and DNA methylation in a probabilistic graphical model, we demonstrate that previously reported genome-wide association with methylation is in part due to MeCP2's affinity to GC-rich chromatin, a result replicated using published data. Furthermore, MeCP2 co-localizes with nucleosomes. Finally, MeCP2 binding downstream of promoters correlates with increased expression in Mecp2-deficient neurons. PMID:27008915
nuMap: a web platform for accurate prediction of nucleosome positioning.
Alharbi, Bader A; Alshammari, Thamir H; Felton, Nathan L; Zhurkin, Victor B; Cui, Feng
2014-10-01
Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, the YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options, such as schemes and parameters for the threading calculation, and provides multiple layout formats. nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site. PMID:25220945
Sequence features accurately predict genome-wide MeCP2 binding in vivo
Rube, H. Tomas; Lee, Wooje; Hejna, Miroslav; Chen, Huaiyang; Yasui, Dag H.; Hess, John F.; LaSalle, Janine M.; Song, Jun S.; Gong, Qizhi
2016-01-01
Methyl-CpG binding protein 2 (MeCP2) is critical for proper brain development and expressed at near-histone levels in neurons, but the mechanism of its genomic localization remains poorly understood. Using high-resolution MeCP2-binding data, we show that DNA sequence features alone can predict binding with 88% accuracy. Integrating MeCP2 binding and DNA methylation in a probabilistic graphical model, we demonstrate that previously reported genome-wide association with methylation is in part due to MeCP2's affinity to GC-rich chromatin, a result replicated using published data. Furthermore, MeCP2 co-localizes with nucleosomes. Finally, MeCP2 binding downstream of promoters correlates with increased expression in Mecp2-deficient neurons. PMID:27008915
NASA Astrophysics Data System (ADS)
Hewson, T. D.
2003-04-01
Errors in numerical weather forecasts can be classed as random or systematic. In generating forecast guidance the forecaster ideally aims to remove the effect of systematic errors, and minimise any random errors. The end product typically comprises two components: (1) a 'most-likely' deterministic solution, and (2) guidance on possible spread around that solution. Ways in which operational forecast runs from around the world are combined with ensemble output to achieve this aim will be demonstrated, together with verification results that illustrate a generally positive impact. Examples of model systematic errors will be included. Examples will also be used to show the pitfalls of relying on just ensembles for longer ranges and just deterministic model output for short ranges. Whilst deterministic forecast output (1) is quite rigid in its definition, the spread component (2) is flexible. A bimodal distribution may warrant issue of an 'alternative solution', whilst potential severe weather is often reflected as a regional probability table. Examples will be presented. One software tool we use to highlight possible solutions is 'field modification', which enables dynamically consistent changes to be made to forecast temperature and wind fields, and also allows precipitation rates and types and cloud cover to be modified interactively. This will be illustrated. In the future there should be a strong push towards greater use of probabilistic output at short ranges, thereby countering any claims that we are trying to 'predict the unpredictable'.
Lift capability prediction for helicopter rotor blade-numerical evaluation
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Cîrciu, Ionicǎ; Luculescu, Doru
2016-06-01
The main objective of this paper is to describe the key physical features for modelling the unsteady aerodynamic effects found on a helicopter rotor blade operating under nominally attached flow conditions away from stall. The unsteady effects were considered as phase differences between the forcing function and the aerodynamic response, which are functions of the reduced frequency, the Mach number, and the mode of forcing. For a helicopter rotor, the reduced frequency at any blade element cannot be calculated exactly, but a first-order approximation gives useful information about the degree of unsteadiness. The sources of unsteady effects were decomposed into perturbations to the local angle of attack and velocity field. The numerical calculations and graphics were produced in the FLUENT and MAPLE software environments. This mathematical model is applicable to the aerodynamic design of wind turbine rotor blades, hybrid energy system optimization, and aeroelastic analysis.
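The first-order approximation mentioned above is the classical reduced frequency k = ωc/(2V). A one-line sketch follows; the quasi-steady/unsteady thresholds in the comment are a commonly quoted rule of thumb, not values from this paper:

```python
def reduced_frequency(omega, chord, velocity):
    """Reduced frequency k = omega * c / (2 * V).

    omega: forcing frequency [rad/s]; chord: blade chord c [m];
    velocity: local relative flow speed V [m/s].
    Rule of thumb: k < 0.05 quasi-steady, 0.05 <= k <= 0.2 unsteady,
    k > 0.2 highly unsteady.
    """
    return omega * chord / (2.0 * velocity)
```

For a rotor, V varies along the span and around the azimuth, which is why k can only be approximated at each blade element.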
Predictive Lateral Logic for Numerical Entry Guidance Algorithms
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
Recent entry guidance algorithm development [1-3] has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo-heritage lateral error (or azimuth error) deadbands, in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.
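A minimal sketch of the Apollo-heritage deadband lateral logic that the paper contrasts with its fixed-count method; the sign convention for azimuth error and bank direction below is an assumption made for illustration:

```python
def command_reversal(azimuth_error, deadband, bank_sign):
    """Deadband lateral logic (sketch).

    azimuth_error: signed heading error toward the target [deg];
    deadband: allowed magnitude before acting [deg];
    bank_sign: +1 or -1, current bank direction.
    Reverse the bank when the error exceeds the deadband on the side
    the vehicle is currently turning toward (sign convention assumed).
    """
    return abs(azimuth_error) > deadband and azimuth_error * bank_sign > 0
```

Because reversals fire whenever the error wanders outside the deadband, their total count is not known in advance, which is the non-determinism the fixed-count method removes.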
Numerical prediction and potential vorticity diagnosis of extratropical cyclones
NASA Astrophysics Data System (ADS)
Huo, Zonghui
By combining numerical simulations with different diagnostic tools, this thesis examines various aspects of two explosively deepening cyclones: the superstorm of March 12-14, 1993, and a storm that occurred during Intensive Observation Period 14 (IOP-14) of the Canadian Atlantic Storm Program (CASP). Using conventional observations, the general aspects of the storms are documented and the dynamical and physical mechanisms are discussed. The life cycles are then simulated with the Canadian Regional Finite-Element (RFE) model. To improve the model initial conditions, a methodology is proposed on the basis of potential vorticity (PV) thinking, and is shown to be successful in the simulation of the March 1993 superstorm. Using the successful simulations as control runs, a series of numerical sensitivity experiments is conducted to study the impacts of model physics on the development of the two rapidly deepening cyclones. The deepening mechanisms of both storms are examined within the context of PV thinking, i.e., using piecewise potential vorticity inversion diagnostics. In both cases, the upper-level PV anomalies contribute the most to the surface cyclone, followed by the lower-level thermal anomalies and the diabatic-heating-related moist PV anomaly. It is found that a favorable phase tilt between the upper- and lower-level PV anomalies allows a mutual interaction between them, in which the circulations associated with the upper-level anomalies enhance the lower-level anomalies, which in turn feed back positively into the upper-level PV anomalies. In addition to the vertical interactions, there also exist lateral interactions between the upper-level PV anomalies for the March 1993 superstorm. The upper-level PV features (troughs) are isolated with the piecewise PV inversion. By removing or changing the intensity of the trough in the initial conditions, the RFE model is integrated to examine the impact of each trough and its interaction with the other trough on the superstorm
NASA Astrophysics Data System (ADS)
Jee, J.-B.; Jeon, S.-H.; Choi, Y.-J.; Lee, K.-T.
2012-04-01
Solar energy is attenuated by absorbing gases (ozone, aerosol, water vapour and mixed gases) and cloud in the atmosphere, and by the ambient topography. That energy is measured with surface-based solar instruments (pyranometers and pyrheliometers). However, observation-based solar energy is insufficient to represent the detailed energy distribution, because the spatial coverage of solar instruments is limited. If the input data of a solar radiation model are accurate, the solar energy reaching the surface can be calculated reasonably well. In this study, the input data of the solar radiation model were satellite data and reanalysis data from numerical model predictions from 2000 to 2010. Recently, a variety of satellite measurements from TERRA/AQUA (MODIS), AURA (OMI) and geostationary satellites (GMS-5, GOES-9, MTSAT-1R, MTSAT-2 and COMS) has become available. The solar radiation model can use aerosol and surface albedo data from MODIS, total ozone amount data from OMI, and cloud fraction data from meteorological geostationary satellites as input. Reanalysis data from numerical prediction models are also well suited as model input; several outputs can be used, such as surface temperature, pressure, and total precipitable water from the RDAPS (Regional Data Assimilation Prediction System) and KLAPS (Korean Local Assimilation Prediction System) models of the KMA (Korea Meteorological Administration). In addition, the solar radiation model accounts for the topographic effect, i.e., terrain shading or shielding of the solar energy. The Korean peninsula is composed of very complicated terrain, so considering the topographic effect is very important for calculating the solar energy at the surface. A high-resolution DEM (Digital Elevation Model) is required to calculate the topographic effect. The solar radiation reaching the surface is calculated hourly in time and at 4 km × 4 km resolution in space using the solar radiation model and input data. These
Numerical simulation of a twin screw expander for performance prediction
NASA Astrophysics Data System (ADS)
Papes, Iva; Degroote, Joris; Vierendeels, Jan
2015-08-01
With the increasing use of twin screw expanders in waste heat recovery applications, the performance prediction of these machines plays an important role. This paper presents a mathematical model for calculating the performance of a twin screw expander. From the mass and energy conservation laws, differential equations are derived and then solved, together with an appropriate equation of state, in the instantaneous control volumes. Different flow processes that occur inside the screw expander, such as filling (accompanied by a substantial pressure loss) and leakage flows through the clearances, are accounted for in the model. The mathematical model employs all geometrical parameters, such as chamber volume and suction and leakage areas. With R245fa as the working fluid, the Aungier Redlich-Kwong equation of state has been used in order to include real gas effects. To calculate the mass flow rates through the leakage paths formed inside the screw expander, flow coefficients are considered constant; they are derived from 3D Computational Fluid Dynamics (CFD) calculations at given working conditions and applied to all other working conditions. The outcome of the mathematical model is the P-V indicator diagram, which is compared to CFD results for the same twin screw expander. Since CFD calculations require significant computational time, the developed mathematical model can be used for faster performance prediction.
Numerical Simulation of Bolide Entry with Ground Footprint Prediction
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian; Mathias, Donovan L.; Berger, Marsha J.
2016-01-01
As they decelerate through the atmosphere, meteors deposit mass, momentum and energy into the surrounding air at tremendous rates. Trauma from the entry of such bolides produces strong blast waves that can propagate hundreds of kilometers and cause substantial terrestrial damage even when no ground impact occurs. We present a new simulation technique for airburst blast prediction using a fully conservative, Cartesian mesh, finite-volume solver and investigate the ability of this method to model far-field propagation over hundreds of kilometers. The work develops mathematical models for the deposition of mass, momentum and energy into the atmosphere and presents verification and validation through canonical problems and the comparison of surface overpressures and blast arrival times with reported results in the literature for known bolides. The discussion also examines the effects of various approximations to the physics of bolide entry that can substantially decrease the computational expense of these simulations. We present parametric studies to quantify the influence of entry angle, burst height and other parameters on the ground footprint of the airburst, and these values are related to predictions from analytic and handbook methods.
Zhao, Li; Chen, Yiyun; Bajaj, Amol Onkar; Eblimit, Aiden; Xu, Mingchu; Soens, Zachry T; Wang, Feng; Ge, Zhongqi; Jung, Sung Yun; He, Feng; Li, Yumei; Wensel, Theodore G; Qin, Jun; Chen, Rui
2016-05-01
Proteomic profiling on subcellular fractions provides invaluable information regarding both protein abundance and subcellular localization. When integrated with other data sets, it can greatly enhance our ability to predict gene function genome-wide. In this study, we performed a comprehensive proteomic analysis on the light-sensing compartment of photoreceptors called the outer segment (OS). By comparing with the protein profile obtained from the retina tissue depleted of OS, an enrichment score for each protein is calculated to quantify protein subcellular localization, and 84% accuracy is achieved compared with experimental data. By integrating the protein OS enrichment score, the protein abundance, and the retina transcriptome, the probability of a gene playing an essential function in photoreceptor cells is derived with high specificity and sensitivity. As a result, a list of genes that will likely result in human retinal disease when mutated was identified and validated by previous literature and/or animal model studies. Therefore, this new methodology demonstrates the synergy of combining subcellular fractionation proteomics with other omics data sets and is generally applicable to other tissues and diseases. PMID:26912414
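The enrichment score comparing the outer-segment fraction with the OS-depleted retina could take the form of a log-ratio of abundances; the exact definition used in the study is not given here, so the following is an illustrative assumption:

```python
from math import log2

def os_enrichment(os_abundance, depleted_abundance, pseudocount=1.0):
    """Illustrative enrichment score: log2 ratio of a protein's abundance
    in the outer-segment (OS) fraction versus the OS-depleted retina.
    The pseudocount guards against division by zero for undetected
    proteins (an assumption, not the paper's stated procedure)."""
    return log2((os_abundance + pseudocount) / (depleted_abundance + pseudocount))
```

Positive scores would flag OS-enriched proteins; combining the score with abundance and transcriptome data is what drives the gene-function prediction.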
NASA Astrophysics Data System (ADS)
Balabin, Roman M.; Lomakina, Ekaterina I.
2009-08-01
An artificial neural network (ANN) approach has been applied to estimate density functional theory (DFT) energies with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for ANN training, cross validation, and testing with the BLYP, B3LYP, and BMK density functionals. Hartree-Fock results are reported for comparison. Furthermore, constitutional molecular descriptors (CD) and quantum-chemical molecular descriptors (QD) were used for building the calibration model. The neural network structure optimization, leading to four to five hidden neurons, was also carried out. The use of several low-level energy values was found to greatly reduce the prediction error. The expected error (mean absolute deviation) of the ANN approximation to DFT energies was 0.6±0.2 kcal mol⁻¹. In addition, a comparison of the different density functionals and basis sets and a comparison with multiple linear regression results are provided. The CDs were found to overcome limitations of the QDs. Furthermore, an effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and benchmark results are provided.
Towards Accurate Prediction of Turbulent, Three-Dimensional, Recirculating Flows with the NCC
NASA Technical Reports Server (NTRS)
Iannetti, A.; Tacina, R.; Jeng, S.-M.; Cai, J.
2001-01-01
The National Combustion Code (NCC) was used to calculate the steady-state, nonreacting flow field of a prototype Lean Direct Injection (LDI) swirler. This configuration used nine groups of eight holes drilled at a thirty-five degree angle to induce swirl. All nine groups created swirl in the same direction, in a corotating pattern. The static pressure drop across the holes was fixed at approximately four percent. Computations were performed on one quarter of the geometry, because the geometry is rotationally periodic every ninety degrees. The final computational grid used approximately 2.26 million tetrahedral cells, and a cubic nonlinear k-epsilon model was used to model turbulence. The NCC results were then compared to time-averaged Laser Doppler Velocimetry (LDV) data. The LDV measurements were performed on the full rig, but covered four ninths of the geometry. One-, two-, and three-dimensional representations of both flow fields are presented. The NCC computations compare both qualitatively and quantitatively well with the LDV data, but differences exist downstream. The comparison is encouraging, and shows that the NCC can be used for future injector design studies. Recommendations are given for improving the flow prediction accuracy of turbulent, three-dimensional, recirculating flow fields with the NCC.
Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S
2009-04-01
The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin. PMID:19054059
Accurate prediction of the refractive index of polymers using first principles and data modeling
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes
Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
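The Lorentz-Lorenz relation at the heart of the model can be inverted directly for the refractive index. A minimal sketch in CGS units, with illustrative polarizability and number-density values rather than any candidate from the screening:

```python
import math

def refractive_index(alpha_cm3, number_density_cm3):
    """Lorentz-Lorenz: (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha (CGS).
    Solving for n gives n = sqrt((1 + 2L)/(1 - L)) with L = 4*pi*N*alpha/3."""
    L = 4.0 * math.pi / 3.0 * number_density_cm3 * alpha_cm3
    if not 0 <= L < 1:
        raise ValueError("molar refraction term out of physical range")
    return math.sqrt((1 + 2 * L) / (1 - L))

# Illustrative numbers only (not from the paper): polarizability of
# 1.5e-24 cm^3 per repeat unit and 5e21 units/cm^3.
n = refractive_index(1.5e-24, 5.0e21)
```

In the paper's pipeline, the polarizability comes from benchmarked DFT calculations extrapolated to the polymer limit, and the number density from the machine-learned packing fraction.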
Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke
2015-11-15
attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.
Shakibaee, Abolfazl; Faghihzadeh, Soghrat; Alishiri, Gholam Hossein; Ebrahimpour, Zeynab; Faradjzadeh, Shahram; Sobhani, Vahid; Asgari, Alireza
2015-01-01
Background: Body composition varies according to life style (i.e., calorie intake and caloric expenditure). Therefore, it is wise to record military personnel’s body composition periodically and encourage those who abide by the regulations. Different methods, invasive and non-invasive, have been introduced for body composition assessment. Among them, the Jackson and Pollock equations are the most popular. Objectives: The recommended anthropometric prediction equations for assessing men’s body composition were compared with the dual-energy X-ray absorptiometry (DEXA) gold standard to develop a modified equation to assess body composition and obesity quantitatively among Iranian military men. Patients and Methods: A total of 101 military men aged 23-52 years, with a mean age of 35.5 years, were recruited and evaluated in the present study (average height, 173.9 cm; average weight, 81.5 kg). The body-fat percentages of the subjects were assessed both by anthropometric assessment and by DEXA scan. The data obtained from these two methods were then compared using multiple regression analysis. Results: The mean and standard deviation of the body fat percentage from the DEXA assessment was 21.2 ± 4.3, and the body fat percentages obtained from the Jackson and Pollock 3-, 4- and 7-site equations were 21.1 ± 5.8, 22.2 ± 6.0 and 20.9 ± 5.7, respectively. There was a strong correlation between these three equations and DEXA (R² = 0.98). Conclusions: The mean percentage of body fat obtained from the three Jackson and Pollock equations was very close to that obtained from DEXA; however, we suggest using a modified Jackson-Pollock 3-site equation for volunteer military men because the 3-site analysis is simpler and faster than the other methods. PMID:26715964
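For reference, the standard published Jackson-Pollock 3-site equation for men, together with the Siri conversion from body density to percent body fat, can be sketched as below. The modified, population-specific coefficients proposed in the study are not reproduced here; these are the textbook coefficients.

```python
def jp3_body_fat_men(chest_mm, abdomen_mm, thigh_mm, age):
    """Jackson-Pollock 3-site estimate for men, then the Siri equation.
    Standard published coefficients (not the study's modified equation)."""
    s = chest_mm + abdomen_mm + thigh_mm          # sum of skinfolds, mm
    density = (1.10938 - 0.0008267 * s
               + 0.0000016 * s ** 2 - 0.0002574 * age)
    return 495.0 / density - 450.0                # percent body fat (Siri)

bf = jp3_body_fat_men(15, 25, 20, age=35)
```

Fitting such coefficients against DEXA-measured body fat by multiple regression is exactly the calibration step the study performs for its modified equation.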
Predicting polarization performance of high-numerical aperture inspection lenses
NASA Astrophysics Data System (ADS)
Fahr, Stephan; Werschnik, Jan; Bening, Matthias; Uhlendorf, Kristina
2015-09-01
In the course of increasing throughput and improving the signal-to-noise ratio in optical wafer and mask inspection, demands on wave front aberrations and polarization characteristics are ever increasing. The system engineers and optical designers involved in the development of such optical systems are responsible for specifying the quality of the optical material and the mechanical tolerances. Among optical designers it is well established how to estimate the wave front error of assembled and adjusted optical devices via sensitivity or Monte Carlo analysis. However, compared with the scalar problem of wave front estimation, the field of polarization control poses a more complex problem due to its vectorial nature. Here we show our latest results on how to model polarization-affecting aspects. In the realm of high numerical aperture (NA) inspection optics we focus on the impact of coatings, stress-induced birefringence due to non-perfect lens mounting, and finally the birefringence of the optical material. With all these tools at hand, we have a more complete understanding of the optical performance of our assembled optical systems. Moreover, we are able to coherently develop optical systems meeting demanding wave front criteria as well as high-end polarization specifications.
Numerical prediction of subsidence with coupled geomechanical-hydrological modeling
Girrens, S.P.; Anderson, C.A.; Bennett, J.G.; Kramer, M.
1981-01-01
A coupled finite element geomechanical-hydrology code is currently under development for application to the problem of predicting groundwater disturbances associated with mine subsidence. The structural-fluid coupling is addressed by calculating the subsided mine geometry, with emphasis placed on determining the strata disturbance and locating damaged regions, for input into a hydrology code, which determines localized volume flow rates and aquifer fluctuations. The benefits of coupling will be best realized when field measurements, an aspect of the study running concurrently with the analytical investigations, establish the relationship between increasing rock strain and increasing permeability and are incorporated into the hydraulic material descriptions. Hydrologic and structural calculations are presented to demonstrate computational capabilities applicable to mine subsidence.
Global numerical weather prediction at the National Meteorological Center
Kalnay, E.; Kanamitsu, M.; Baker, W.E.
1990-10-01
The characteristics of the operational global analysis and prediction system at the National Meteorological Center (NMC), recent improvements, the performance in short-, medium-, and extended-range forecasting, and current areas of research are presented. Two types of global forecasts are produced daily at NMC: the aviation 3-day forecasts and the 10-day medium-range forecasts. Dynamic extended-range (more than 10 days) forecasting experiments are considered. The systematic characteristics of the NMC model climatology as shown in a 1 yr integration of a T40 model are reviewed, followed by a discussion of results from an extensive winter Dynamic Extended Range Forecast experiment performed during the winter of 1986/87 and a description of some recent experiments performed for the period of the North American drought of 1988. 57 refs.
NASA Astrophysics Data System (ADS)
Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith
2015-09-01
Simulation of breakdown is important for understanding and designing a variety of applications, such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation through late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~1/100 of the mean time between collisions and a mesh size ~1/25 of the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus, gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
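The reported resolution constraints translate into simple sizing rules. A hard-sphere sketch with illustrative numbers (the neutral density, cross section, and drift speed below are assumptions, not values from the study):

```python
def pic_dsmc_constraints(n_neutral_m3, sigma_m2, v_ms,
                         dt_frac=1 / 100, dx_frac=1 / 25):
    """Constraints reported in the abstract: dt ~ tau/100, dx ~ lambda/25,
    with lambda the electron mean free path and tau the mean time between
    collisions.  Hard-sphere estimate; illustrative only."""
    mfp = 1.0 / (n_neutral_m3 * sigma_m2)   # mean free path, m
    tau = mfp / v_ms                        # mean time between collisions, s
    return dt_frac * tau, dx_frac * mfp

# Assumed numbers: 2.5e20 m^-3 neutrals, 1e-19 m^2 cross section,
# 1e6 m/s characteristic electron speed.
dt, dx = pic_dsmc_constraints(2.5e20, 1e-19, 1e6)
```

The point of the abstract is that these factors (1/100, 1/25) are much stricter than the usual PIC-DSMC rules of thumb, because field acceleration creates sub-mean-free-path gradients in the electron energy distribution.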
NASA Astrophysics Data System (ADS)
Xin, Cui; Di-Yu, Zhang; Gao, Chen; Ji-Gen, Chen; Si-Liang, Zeng; Fu-Ming, Guo; Yu-Jun, Yang
2016-03-01
We demonstrate that the interference minima in the linear molecular harmonic spectra can be accurately predicted by a modified two-center model. Based on systematically investigating the interference minima in the linear molecular harmonic spectra by the strong-field approximation (SFA), it is found that the locations of the harmonic minima are related not only to the nuclear distance between the two main atoms contributing to the harmonic generation, but also to the symmetry of the molecular orbital. Therefore, we modify the initial phase difference between the double wave sources in the two-center model, and predict the harmonic minimum positions consistent with those simulated by SFA. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant Nos. 11274001, 11274141, 11304116, 11247024, and 11034003), and the Jilin Provincial Research Foundation for Basic Research, China (Grant Nos. 20130101012JC and 20140101168JC).
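The modified two-center condition can be turned into a predicted minimum position. A sketch in atomic units, where the extra π phase models the antisymmetric-orbital case; the mapping n = k²/(2ω), equating the harmonic photon energy with the recollision kinetic energy, is a simplifying assumption (SFA treatments sometimes include the ionization potential):

```python
import math

def minimum_order(R_au, theta_deg, omega_au, m=0, extra_phase_pi=False):
    """Two-center interference minimum: destructive interference when
    k*R*cos(theta) = (2m+1)*pi, or (2m+2)*pi when the modified initial
    phase difference (antisymmetric orbital) contributes an extra pi.
    Returns the harmonic order n = k^2 / (2*omega).  Simplified sketch;
    the paper's SFA comparison is not reproduced."""
    mult = (2 * m + 2) if extra_phase_pi else (2 * m + 1)
    k = mult * math.pi / (R_au * math.cos(math.radians(theta_deg)))
    return k * k / (2.0 * omega_au)

# Assumed example: internuclear distance 2 a.u., alignment angle 0,
# 800-nm driving field (omega ~ 0.057 a.u.).
n_sym = minimum_order(2.0, 0.0, 0.057)
n_antisym = minimum_order(2.0, 0.0, 0.057, extra_phase_pi=True)
```

The symmetry-dependent phase shift is what moves the predicted minimum, which is the abstract's central point.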
Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref. 1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC, which allows the incorporation of complex local inelastic constitutive models, MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can be, and have been, built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled accurate modeling of the deformation, failure, and life of titanium matrix composites.
NASA Astrophysics Data System (ADS)
Zhang, X.; Anagnostou, E. N.; Nikolopoulos, E. I.; Bartsotas, N. S.
2015-12-01
Floods constitute one of the most significant and frequent natural hazards in mountainous regions. Satellite-based precipitation products offer in many cases the only available source of quantitative precipitation estimates (QPE). However, satellite-based QPE over complex terrain suffer from significant bias that limits their applicability for hydrologic modeling. In this work we investigate the potential of a new correction procedure, which involves the use of high-resolution numerical weather prediction (NWP) model simulations to adjust satellite QPE. The adjustment is based on pdf matching of the satellite and NWP (used as reference) precipitation distributions. The impact of the correction procedure on simulating the hydrologic response is examined for 15 storm events that generated floods over the mountainous Upper Adige region of Northern Italy. Atmospheric simulations were performed at 1-km resolution with a state-of-the-art atmospheric model (RAMS/ICLAMS). The proposed error correction procedure was then applied to the widely used TRMM 3B42 satellite precipitation product, and the evaluation of the correction was based on independent in situ precipitation measurements from a dense rain gauge network (1 gauge / 70 km²) available in the study area. Satellite QPE, before and after correction, are used to simulate the flood response using ARFFS (Adige River Flood Forecasting System), a semi-distributed hydrologic model, which is used for operational flood forecasting in the region. Results showed that the bias in satellite QPE before correction was significant and had a tremendous impact on the simulation of the flood peak; however, the correction procedure was able to reduce the bias in QPE and therefore considerably improve the simulated flood hydrograph.
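The pdf-matching idea can be sketched as empirical quantile mapping: each satellite value is assigned its rank in the satellite distribution, then the same quantile is read off the reference (here, NWP-simulated) distribution. This is a generic sketch of the technique, not the paper's operational implementation.

```python
import numpy as np

def quantile_match(sat, ref):
    """Map satellite values onto the reference precipitation distribution
    by matching empirical CDFs (pdf matching).  Simplified sketch."""
    sat = np.asarray(sat, dtype=float)
    sat_sorted = np.sort(sat)
    ref_sorted = np.sort(np.asarray(ref, dtype=float))
    # empirical non-exceedance probability of each satellite value
    probs = np.searchsorted(sat_sorted, sat, side="right") / len(sat)
    # read the same quantile off the reference distribution
    return np.quantile(ref_sorted, np.clip(probs, 0.0, 1.0))

# Toy example: satellite underestimates by roughly a factor of two.
sat = [0.0, 1.0, 2.0, 4.0, 8.0]
ref = [0.0, 2.0, 4.0, 8.0, 16.0]
corrected = quantile_match(sat, ref)
```

In practice the mapping is built from many events and applied to new satellite estimates before they drive the hydrologic model.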
Use of medium-range numerical weather prediction model output to produce forecasts of streamflow
Clark, M.P.; Hay, L.E.
2004-01-01
This paper examines an archive containing over 40 years of 8-day atmospheric forecasts over the contiguous United States from the NCEP reanalysis project to assess the possibilities for using medium-range numerical weather prediction model output for predictions of streamflow. This analysis shows the biases in the NCEP forecasts to be quite extreme. In many regions, systematic precipitation biases exceed 100% of the mean, with temperature biases exceeding 3°C. In some locations, biases are even higher. The accuracy of NCEP precipitation and 2-m maximum temperature forecasts is computed by interpolating the NCEP model output for each forecast day to the location of each station in the NWS cooperative network and computing the correlation with station observations. Results show that the accuracy of the NCEP forecasts is rather low in many areas of the country. Most apparent is the generally low skill in precipitation forecasts (particularly in July) and low skill in temperature forecasts in the western United States, the eastern seaboard, and the southern tier of states. These results outline a clear need for additional processing of the NCEP Medium-Range Forecast Model (MRF) output before it is used for hydrologic predictions. Techniques of model output statistics (MOS) are used in this paper to downscale the NCEP forecasts to station locations. Forecasted atmospheric variables (e.g., total column precipitable water, 2-m air temperature) are used as predictors in a forward screening multiple linear regression model to improve forecasts of precipitation and temperature for stations in the National Weather Service cooperative network. This procedure effectively removes all systematic biases in the raw NCEP precipitation and temperature forecasts. MOS guidance also results in substantial improvements in the accuracy of maximum and minimum temperature forecasts throughout the country. For precipitation, forecast improvements were less impressive. MOS guidance increases
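Forward screening multiple linear regression, the MOS workhorse described above, greedily adds whichever predictor most reduces the residual sum of squares. A minimal sketch on synthetic data (real MOS schemes add stopping tests and cross-validation; the predictors here are random stand-ins for fields like precipitable water and 2-m temperature):

```python
import numpy as np

def forward_select(X, y, max_terms=2):
    """Greedy forward screening for a multiple linear regression model."""
    n, p = X.shape
    chosen, resid = [], y - y.mean()
    while len(chosen) < max_terms:
        best, best_rss = None, np.sum(resid ** 2)
        for j in range(p):
            if j in chosen:
                continue
            cols = np.column_stack([np.ones(n)] + [X[:, k] for k in chosen + [j]])
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ beta) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        if best is None:
            break
        chosen.append(best)
        cols = np.column_stack([np.ones(n)] + [X[:, k] for k in chosen])
        beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
        resid = y - cols @ beta
    return chosen

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                         # 5 candidate predictors
y = 2.0 * X[:, 3] - 1.0 * X[:, 0] + rng.normal(scale=0.1, size=200)
picked = forward_select(X, y)                         # recovers columns 3 and 0
```

Because the regression includes an intercept fitted to observations, it removes systematic bias by construction, which matches the paper's finding.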
Climate simulation and numerical weather prediction using GPUs
NASA Astrophysics Data System (ADS)
Lapillonne, Xavier; Fuhrer, Oliver; Ruedisuehli, Stefan; Arteaga, Andrea; Osuna, Carlos; Walser, Andre; Leuenberger, Daniel
2015-04-01
After the successful development of a prototype GPU version of the atmospheric model COSMO, the COSMO Consortium has decided to bring these developments back into the official version in order to have an operational GPU-capable model for climate and weather prediction. The implementation is designed to avoid costly data transfers between the GPU and the CPU and to achieve the best performance. To this end, most parts of the model are ported to the GPU. Furthermore, the implementation specifically targets hardware architectures with fat nodes (nodes with multiple GPUs), which is very favourable in terms of minimizing the energy-to-solution metric. The dynamical core has been completely rewritten using a GPU-enabled domain-specific language. The rest of the model, namely the physical parametrizations and the data assimilation, is ported to the GPU using OpenACC compiler directives. In this contribution, we present the overall porting strategy as well as new features available on the GPU in the latest version of the model, in particular concerning the data assimilation. Performance and verification results obtained on several hybrid Cray systems are presented and compared against the current operational model version used at MeteoSwiss.
Wind field near complex terrain using numerical weather prediction model
NASA Astrophysics Data System (ADS)
Chim, Kin-Sang
results by Miles (1969) and Smith (1980, 1985), and the numerical results of Stein (1992), Miranda and James (1992) and Olaffson and Bougeault (1997). It is found that the simulated result in the present study is comparable with others. The fifth part is the construction of the regime diagram for the Lantau island of Hong Kong. All eight major wind directions are discussed.
Numerical prediction of the monsoon depression of 5-7 July, 1979
NASA Technical Reports Server (NTRS)
Shukla, J.; Atlas, R.; Baker, W. E.
1981-01-01
The objective analysis and assimilation procedure with the FGGE/MONEX data are described. Numerical predictions with the GLAS general circulation model were made from the two initial conditions arrived at by assimilating the two different data sets. The model, the analysis and assimilation procedure, the differences in the analyses due to different data inputs, and the differences in the numerical prediction of monsoon depressions are outlined.
Feng, Hui; Jiang, Ni; Huang, Chenglong; Fang, Wei; Yang, Wanneng; Chen, Guoxing; Xiong, Lizhong; Liu, Qian
2013-09-01
Biomass is an important component of plant phenomics, and the existing methods for biomass estimation of individual plants are either destructive or lack accuracy. In this study, a hyperspectral imaging system was developed for the accurate prediction of the above-ground biomass of individual rice plants in the visible and near-infrared spectral region. First, the structure of the system and the influence of various parameters on the camera acquisition speed were established. Then the system was used to image 152 rice plants, selected from the rice mini-core collection, at two stages: the tillering to elongation (T-E) stage and the booting to heading (B-H) stage. Several variables were extracted from the images. Next, linear stepwise regression analysis and 5-fold cross-validation were used to select effective variables for model construction and to test the stability of the model, respectively. For the T-E stage, the R² value was 0.940 for the fresh weight (FW) and 0.935 for the dry weight (DW). For the B-H stage, the R² value was 0.891 for the FW and 0.783 for the DW. Moreover, estimates of the biomass using visible light images were also calculated. These comparisons showed that hyperspectral imaging performed better than visible light imaging. Therefore, this study provides not only a stable hyperspectral imaging platform but also an accurate and nondestructive method for the prediction of biomass for individual rice plants. PMID:24089866
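The 5-fold cross-validation used to test model stability can be sketched as follows, with plain least squares standing in for the stepwise-selected model and synthetic image-derived variables in place of the study's data:

```python
import numpy as np

def kfold_r2(X, y, k=5, seed=0):
    """Mean out-of-fold R^2 of a linear model under k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(len(y))
    r2s = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        A = np.column_stack([np.ones(len(train)), X[train]])
        beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([np.ones(len(fold)), X[fold]]) @ beta
        ss_res = np.sum((y[fold] - pred) ** 2)
        ss_tot = np.sum((y[fold] - y[fold].mean()) ** 2)
        r2s.append(1.0 - ss_res / ss_tot)
    return float(np.mean(r2s))

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 4))            # stand-ins for image-derived variables
y = 3.0 * X[:, 0] + X[:, 2] + rng.normal(scale=0.2, size=150)
r2 = kfold_r2(X, y)
```

Because each fold is scored on data held out from fitting, a high mean R² here indicates a stable model rather than an overfit one, which is the role cross-validation plays in the study.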
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme, and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. PMID:26121186
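A Weibull-type saccharification curve with parameters λ and n can be written down directly. The functional form below, y(t) = y_max·(1 − exp(−(t/λ)ⁿ)), is the standard Weibull cumulative form and is assumed here to be the shape the paper fits; it makes clear why λ is the characteristic time, since y(λ) is always (1 − 1/e) ≈ 63.2% of the final yield regardless of n.

```python
import math

def weibull_conversion(t, lam, n, y_max=1.0):
    """Weibull-form saccharification curve: y(t) = y_max*(1 - exp(-(t/lam)^n)).
    lam (the characteristic time) sets the overall timescale; n sets the shape."""
    return y_max * (1.0 - math.exp(-((t / lam) ** n)))

# At t = lam the conversion is (1 - 1/e)*y_max for any shape parameter n.
y_at_lambda = weibull_conversion(24.0, lam=24.0, n=0.8)
```

This property is what lets a single fitted λ rank the overall performance of different substrate/enzyme systems, as the abstract proposes.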
TURBULENT LINEWIDTHS IN PROTOPLANETARY DISKS: PREDICTIONS FROM NUMERICAL SIMULATIONS
Simon, Jacob B.; Beckwith, Kris; Armitage, Philip J.
2011-12-10
Submillimeter observations of protoplanetary disks now approach the acuity needed to measure the turbulent broadening of molecular lines. These measurements constrain disk angular momentum transport, and furnish evidence of the turbulent environment within which planetesimal formation takes place. We use local magnetohydrodynamic (MHD) simulations of the magnetorotational instability (MRI) to predict the distribution of turbulent velocities in low-mass protoplanetary disks, as a function of radius and height above the mid-plane. We model both ideal MHD disks and disks in which Ohmic dissipation results in a dead zone of suppressed turbulence near the mid-plane. Under ideal conditions, the disk mid-plane is characterized by a velocity distribution that peaks near v ≈ 0.1c_s (where c_s is the local sound speed), while supersonic velocities are reached at z > 3H (where H is the vertical pressure scale height). Residual velocities of v ≈ 10⁻² c_s persist near the mid-plane in dead zones, while the surface layers remain active. Anisotropic variation of the linewidth with disk inclination is modest. We compare our MHD results to hydrodynamic simulations in which large-scale forcing is used to initiate similar turbulent velocities. We show that the qualitative trend of increasing v with height, seen in the MHD case, persists for forced turbulence and is likely a generic property of disk turbulence. Percentage-level determinations of v at different heights within the disk, or spatially resolved observations that probe the inner disk containing the dead zone region, are therefore needed to test whether the MRI is responsible for protoplanetary disk turbulence.
Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A
2015-02-01
As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecupole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol⁻¹, decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol⁻¹. PMID:24274986
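Kriging, the interpolator the paper uses to map multipole moments onto nuclear coordinates, is equivalent to Gaussian-process regression. A toy one-dimensional sketch with an RBF kernel (this is a generic simple-kriging predictor on made-up data, not the authors' trained models):

```python
import numpy as np

def kriging_predict(X_train, y_train, X_test, length=1.0, nugget=1e-10):
    """Simple kriging / GP regression with an RBF (squared-exponential)
    kernel.  Interpolates the training data exactly up to the nugget."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X_train, X_train) + nugget * np.eye(len(X_train))
    weights = np.linalg.solve(K, y_train)   # kriging weights
    return k(X_test, X_train) @ weights

# Toy data: a smooth function standing in for a multipole-moment surface.
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)
pred = kriging_predict(X, y, np.array([1.5]))
```

In the paper this machinery operates in the many-dimensional space of nuclear coordinates, with one model per atomic multipole moment.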
Nissley, Daniel A; Sharma, Ajeet K; Ahmed, Nabeel; Friedrich, Ulrike A; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P
2016-01-01
The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally--a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592
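The class of chemical kinetic model described above can be sketched as a two-state relaxation: between codons, the domain's folded probability relaxes toward its equilibrium value at a rate set by the bulk folding (kf) and unfolding (ku) rates, and folding is only permitted once the domain has emerged from the ribosome. This is an illustrative simplification (the rates, dwell times, and emergence codon below are assumptions; the published model handles more detail):

```python
import math

def cotranslational_folding_curve(kf, ku, dwell_times, emerge_codon):
    """Folded probability after each codon.  During each codon's dwell time
    tau, P relaxes analytically: P -> Peq + (P - Peq)*exp(-(kf+ku)*tau)."""
    p_eq = kf / (kf + ku)
    p, curve = 0.0, []
    for i, tau in enumerate(dwell_times):
        if i >= emerge_codon:               # domain fully emerged
            p = p_eq + (p - p_eq) * math.exp(-(kf + ku) * tau)
        curve.append(p)
    return curve

# Same domain, fast vs slow translation (synonymous-codon analogy).
fast = cotranslational_folding_curve(2.0, 0.1, [0.05] * 100, emerge_codon=50)
slow = cotranslational_folding_curve(2.0, 0.1, [0.5] * 100, emerge_codon=50)
```

Slower elongation gives the domain more time to fold before synthesis finishes, which is the mechanism by which synonymous codon substitutions can switch a domain from post-translational to co-translational folding.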
Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin
2016-01-01
Motivation: For genetic studies, statistically significant variants explain far less trait variance than ‘sub-threshold’ association signals. To dimension follow-up studies, researchers need to accurately estimate ‘true’ effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner’s curse biases, which are reduced only by laborious winner’s curse adjustments (WCAs). Given that Z-score estimates can be theoretically translated onto other scales, we propose a simple method to compute the WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks them towards zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to the zero Z-score value. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose the FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values onto the Z-score scale. When compared to competitors, realistic simulations suggest that FIQT is both more (i) accurate and (ii) computationally efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genetic Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals which become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
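The two-step recipe in the abstract translates almost directly into code: Benjamini-Hochberg-adjust the two-sided p-values of the Z-scores, then map the adjusted p-values back onto the Z scale, keeping the sign. A Python sketch in the spirit of the published 10-line R function (not a transcription of it):

```python
import numpy as np
from statistics import NormalDist

_nd = NormalDist()

def fiqt(z):
    """FDR Inverse Quantile Transformation: shrink Z-scores toward zero by
    (i) BH-adjusting their two-sided p-values and (ii) back-transforming
    the adjusted p-values to the Z scale with the original signs."""
    z = np.asarray(z, dtype=float)
    p = np.array([2.0 * (1.0 - _nd.cdf(abs(v))) for v in z])
    n = len(p)
    order = np.argsort(p)
    # Benjamini-Hochberg step-up adjustment
    adj = np.minimum.accumulate((p[order] * n / np.arange(1, n + 1))[::-1])[::-1]
    p_adj = np.empty(n)
    p_adj[order] = np.clip(adj, 1e-300, 1.0)
    return np.sign(z) * np.array([_nd.inv_cdf(1.0 - q / 2.0) for q in p_adj])

z = np.array([4.0, 1.0, -2.5, 0.3])
z_shrunk = fiqt(z)          # every |z| shrinks (or stays), signs preserved
```

Because the BH adjustment can only increase p-values, the back-transformed Z-scores can only move toward zero, which is exactly the winner's-curse shrinkage the method is designed to approximate.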
Steele, Mark A.; Forrester, Graham E.
2005-01-01
Field experiments provide rigorous tests of ecological hypotheses but are usually limited to small spatial scales. It is thus unclear whether these findings extrapolate to larger scales relevant to conservation and management. We show that the results of experiments detecting density-dependent mortality of reef fish on small habitat patches scale up to have similar effects on much larger entire reefs that are the size of small marine reserves and approach the scale at which some reef fisheries operate. We suggest that accurate scaling is due to the type of species interaction causing local density dependence and the fact that localized events can be aggregated to describe larger-scale interactions with minimal distortion. Careful extrapolation from small-scale experiments identifying species interactions and their effects should improve our ability to predict the outcomes of alternative management strategies for coral reef fishes and their habitats. PMID:16150721
Sprenger, K G; Jaeger, Vance W; Pfaendtner, Jim
2015-05-01
We have applied molecular dynamics to calculate thermodynamic and transport properties of a set of 19 room-temperature ionic liquids. Since accurately simulating the thermophysical properties of solvents strongly depends upon the force field of choice, we tested the accuracy of the general AMBER force field, without refinement, for the case of ionic liquids. Electrostatic point charges were developed using ab initio calculations and a charge scaling factor of 0.8 to more accurately predict dynamic properties. The density, heat capacity, molar enthalpy of vaporization, self-diffusivity, and shear viscosity of the ionic liquids were computed and compared to experimentally available data, and good agreement across a wide range of cation and anion types was observed. Results show that, for a wide range of ionic liquids, the general AMBER force field, with no tuning of parameters, can reproduce a variety of thermodynamic and transport properties with similar accuracy to that of other published, often IL-specific, force fields. PMID:25853313
TIMP2•IGFBP7 biomarker panel accurately predicts acute kidney injury in high-risk surgical patients
Gunnerson, Kyle J.; Shaw, Andrew D.; Chawla, Lakhmir S.; Bihorac, Azra; Al-Khafaji, Ali; Kashani, Kianoush; Lissauer, Matthew; Shi, Jing; Walker, Michael G.; Kellum, John A.
2016-01-01
BACKGROUND Acute kidney injury (AKI) is an important complication in surgical patients. Existing biomarkers and clinical prediction models underestimate the risk for developing AKI. We recently reported data from two trials of 728 and 408 critically ill adult patients in whom urinary TIMP2•IGFBP7 (NephroCheck, Astute Medical) was used to identify patients at risk of developing AKI. Here we report a preplanned analysis of surgical patients from both trials to assess whether urinary tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor–binding protein 7 (IGFBP7) accurately identify surgical patients at risk of developing AKI. STUDY DESIGN We enrolled adult surgical patients at risk for AKI who were admitted to one of 39 intensive care units across Europe and North America. The primary end point was moderate-severe AKI (equivalent to KDIGO [Kidney Disease Improving Global Outcomes] stages 2–3) within 12 hours of enrollment. Biomarker performance was assessed using the area under the receiver operating characteristic curve, integrated discrimination improvement, and category-free net reclassification improvement. RESULTS A total of 375 patients were included in the final analysis of whom 35 (9%) developed moderate-severe AKI within 12 hours. The area under the receiver operating characteristic curve for [TIMP-2]•[IGFBP7] alone was 0.84 (95% confidence interval, 0.76–0.90; p < 0.0001). Biomarker performance was robust in sensitivity analysis across predefined subgroups (urgency and type of surgery). CONCLUSION For postoperative surgical intensive care unit patients, a single urinary TIMP2•IGFBP7 test accurately identified patients at risk for developing AKI within the ensuing 12 hours and its inclusion in clinical risk prediction models significantly enhances their performance. LEVEL OF EVIDENCE Prognostic study, level I. PMID:26816218
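Biomarker discrimination in this study is summarized by the area under the receiver operating characteristic curve. As a reminder of what that statistic measures, here is a minimal rank-based (Mann-Whitney) computation in Python; this is the textbook formulation, not the study's statistical code:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```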
NASA Astrophysics Data System (ADS)
Theologou, I.; Patelaki, M.; Karantzalos, K.
2015-04-01
Assessing and monitoring water quality status in a timely, cost-effective and accurate manner is of fundamental importance for numerous environmental management and policy making purposes. There is therefore a current need for validated methodologies which can effectively exploit, in an unsupervised way, the enormous amount of earth observation imaging datasets from various high-resolution satellite multispectral sensors. To this end, many research efforts are based on building concrete relationships and empirical algorithms from concurrent satellite and in-situ data collection campaigns. We have experimented with Landsat 7 and Landsat 8 multi-temporal satellite data, coupled with hyperspectral data from a field spectroradiometer and in-situ ground truth data with several physico-chemical and other key monitoring indicators. All available datasets, covering a 4-year period for our case study, Lake Karla in Greece, were processed and fused under a quantitative evaluation framework. The comprehensive analysis performed posed certain questions regarding the applicability of single empirical models across multi-temporal, multi-sensor datasets for the accurate prediction of key water quality indicators for shallow inland systems. Single linear regression models did not establish concrete relations across multi-temporal, multi-sensor observations. Moreover, the shallower parts of the inland system followed, in accordance with the literature, different regression patterns. Landsat 7 and 8 yielded quite promising results, indicating that from the recreation of the lake onward, consistent per-sensor, per-depth prediction models can be successfully established. The highest rates were for chl-a (r²=89.80%), dissolved oxygen (r²=88.53%), conductivity (r²=88.18%), ammonium (r²=87.2%) and pH (r²=86.35%), while total phosphorus (r²=70.55%) and nitrates (r²=55.50%) showed lower correlation rates.
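The r² values quoted above come from single linear regression models. For reference, the coefficient of determination of a simple least-squares fit can be computed as follows; this is a generic stdlib sketch, not the study's processing chain:

```python
def r_squared(x, y):
    """Coefficient of determination of the ordinary least-squares
    fit y ~ a + b*x (fraction of variance in y explained by x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```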
Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan
2016-05-01
An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations.
Evaluating the use of high-resolution numerical weather forecast for debris flow prediction.
NASA Astrophysics Data System (ADS)
Nikolopoulos, Efthymios I.; Bartsotas, Nikolaos S.; Borga, Marco; Kallos, George
2015-04-01
The sudden occurrence of debris flows, combined with their high destructive power, poses a significant threat to human life and infrastructure. Therefore, developing early warning procedures for the mitigation of debris flow risk is of great economic and societal importance. Given that rainfall is the predominant factor controlling debris flow triggering, it is indisputable that the development of effective debris flow warning procedures requires accurate knowledge of the properties (e.g. duration, intensity) of the triggering rainfall. Moreover, efficient and timely response of emergency operations depends highly on the lead time provided by the warning systems. Currently, the majority of early warning systems for debris flows are based on nowcasting procedures. While the latter may be successful in predicting the hazard, they provide warnings with a relatively short lead time (~6 h). Increasing the lead time is necessary in order to improve pre-incident operations and communication of the emergency; thus coupling warning systems with weather forecasting is essential for advancing early warning procedures. In this work we evaluate the potential of using high-resolution (1 km) rainfall fields forecasted with a state-of-the-art numerical weather prediction model (RAMS/ICLAMS) in order to predict the occurrence of debris flows. Analysis is focused on the Upper Adige region, Northeast Italy, an area where debris flows are frequent. Seven storm events that generated a large number (>80) of debris flows during the period 2007-2012 are analyzed. Radar-based rainfall estimates, available from the operational C-band radar located at Mt Macaion, are used as the reference to evaluate the forecasted rainfall fields. Evaluation mainly focuses on assessing the error in forecasted rainfall properties (magnitude, duration) and the correlation in space and time with the reference field. Results show that the forecasted rainfall fields captured very well the magnitude and
Harb, Moussab
2015-10-14
Using accurate first-principles quantum calculations based on DFT (including DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict the essential fundamental properties (such as the bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiency in the visible range, a high dielectric constant, high charge carrier mobility and an exciton binding energy much lower than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties were found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices such as Si, CdTe and GaAs. These results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a promising new candidate for photovoltaic applications. PMID:26351755
NASA Technical Reports Server (NTRS)
Thomas, P. D.
1980-01-01
A computer implemented numerical method for predicting the flow in and about an isolated three dimensional jet exhaust nozzle is summarized. The approach is based on an implicit numerical method to solve the unsteady Navier-Stokes equations in a boundary conforming curvilinear coordinate system. Recent improvements to the original numerical algorithm are summarized. Equations are given for evaluating nozzle thrust and discharge coefficient in terms of computed flowfield data. The final formulation of models that are used to simulate flow turbulence effect is presented. Results are presented from numerical experiments to explore the effect of various quantities on the rate of convergence to steady state and on the final flowfield solution. Detailed flowfield predictions for several two and three dimensional nozzle configurations are presented and compared with wind tunnel experimental data.
Numerical Prediction of Fatigue Damage Progress in Holed CFRP Laminates Using Cohesive Elements
NASA Astrophysics Data System (ADS)
Yashiro, Shigeki; Okabe, Tomonaga
This study presents a numerical simulation to predict damage progress in notched composite laminates under cyclic loading using a cohesive zone model. A damage-mechanics concept was introduced directly into the fracture process of the cohesive elements in order to express crack growth under cyclic loading. This approach conforms to established damage mechanics, which facilitates understanding of the procedure and reduces computational cost. We numerically investigated damage progress in holed CFRP cross-ply laminates under tensile cyclic loading and compared the predicted damage patterns with experimental results. The predicted damage patterns agreed with the experimental observations, which exhibited the extension of multiple types of damage (splits, transverse cracks, and delamination) near the hole. A numerical study indicated that the change in the distribution of in-plane shear stress due to delamination induced the extension of splits and transverse cracks near the hole.
NASA Astrophysics Data System (ADS)
Shauly, Eitan; Rotstein, Israel; Peltinov, Ram; Latinski, Sergei; Adan, Ofer; Levi, Shimon; Menadeva, Ovadya
2009-03-01
Continued transistor scaling efforts, toward smaller devices with similar (or larger) drive current per um and faster operation, increase the challenge of predicting and controlling the transistor off-state current. Typically, electrical simulators like SPICE use the design intent (as-drawn GDS data). In more sophisticated cases, the simulators are fed with the pattern after lithography and etch process simulations. As the importance of electrical simulation accuracy increases and leakage becomes more dominant, there is a need to feed these simulators with more accurate information extracted from physical on-silicon transistors. Our methodology to predict changes in device performance due to systematic lithography and etch effects was used in this paper. In general, the methodology consists of using OPCCmaxTM for systematic Edge-Contour-Extraction (ECE) from transistors, capturing manufacturing effects including image distortions such as line-end shortening, corner rounding and line-edge roughness. These measurements are used for SPICE modeling. A possible application of this new metrology is to provide, ahead of time, physical and electrical statistical data, improving time to market. In this work, we applied our methodology to analyze small and large arrays of 2.14 um2 6T-SRAM, manufactured using the Tower Standard Logic for General Purposes Platform. Four of the six transistors used a "U-Shape AA", known to have higher variability. The predicted electrical performance of the transistors' drive current and leakage current, in terms of nominal values and variability, is presented. We also used the methodology to analyze an entire SRAM block array. A study of isolation leakage and variability is presented.
Real-time zenith tropospheric delays in support of numerical weather prediction applications
NASA Astrophysics Data System (ADS)
Dousa, Jan; Vaclavovic, Pavel
2014-05-01
The Geodetic Observatory Pecný (GOP) has routinely estimated near real-time zenith total delays (ZTD) from GPS permanent stations for assimilation in numerical weather prediction (NWP) models for more than 12 years. Besides European regional and global solutions using GPS and GLONASS, we have recently developed real-time estimates aimed at supporting NWP nowcasting and severe weather event monitoring. While all previous solutions are based on batch data processing in a network mode, the real-time solution exploits real-time global orbits and clocks from the International GNSS Service (IGS) and a Precise Point Positioning (PPP) processing strategy. A new application, G-Nut/Tefnut, has been developed, and real-time ZTDs have been continuously processed in a nine-month demonstration campaign (February-October 2013) for 36 selected European and global stations. The resulting ZTDs can be characterized by mean standard deviations of 6-10 mm, but large biases of up to 20 mm remain due to missing precise models in the software. These results fulfil the threshold requirements for operational NWP nowcasting (i.e. 30 mm in ZTD). Since the remaining ZTD biases can be effectively eliminated using a bias-reduction procedure prior to assimilation, the results approach the target requirements in terms of relative accuracy (i.e. 6 mm in ZTD). The real-time strategy and software are under development, and we foresee further improvements in reducing biases and in optimizing accuracy within the required timeliness. The real-time products from the International GNSS Service were found to be accurate and stable enough to support PPP-based tropospheric estimates for NWP nowcasting.
NASA Astrophysics Data System (ADS)
McCormack, J. P.; Allen, D. R.; Coy, L.; Eckermann, S. D.; Stajner, I.
2005-12-01
The Ozone Mapping and Profiler Suite (OMPS) will deliver real-time ozone data for assimilation in numerical weather prediction (NWP) models. This information will benefit forecasts by improving the modeled stratospheric heating rates and providing better first-guess temperature profiles needed for infrared satellite radiance retrieval algorithms. Operational ozone data assimilation for NWP requires a fast, accurate treatment of stratospheric ozone photochemistry. We present results from the new NRL CHEM2D Ozone Photochemistry Parameterization (CHEM2D-OPP), which is based on output from the zonally averaged NRL-CHEM2D middle atmosphere photochemical-transport model. CHEM2D-OPP is a linearized parameterization of gas-phase stratospheric ozone photochemistry developed for NOGAPS-ALPHA, the Navy's prototype global high altitude NWP model. A recent study of NOGAPS-ALPHA ozone simulations found that a preliminary version of the CHEM2D-based photochemistry parameterization generally performed better than other current photochemistry schemes that are now widely used in operational NWP and data assimilation systems. A new, improved version of CHEM2D-OPP is now available. Here we report the first quantitative performance assessments of the updated CHEM2D-OPP package in the NRL Global Ozone Assimilation Testing System (GOATS). This study compares the mean differences between GOATS ozone analyses and SBUV/2 ozone measurements (both vertical profile and total column) during September 2002 using several different ozone photochemistry schemes. We find that CHEM2D-OPP generally delivers the best performance out of all the photochemistry schemes we tested. Future development plans for CHEM2D-OPP, such as interfacing it with a "cold tracer" parameterization for heterogeneous ozone-hole chemistry, will also be presented.
NASA Astrophysics Data System (ADS)
Mukkavilli, S. K.; Kay, M. J.; Taylor, R.; Prasad, A. A.; Troccoli, A.
2014-12-01
The Australian Solar Energy Forecasting System (ASEFS) project requires forecasting timeframes ranging from nowcasting to long-term forecasts (minutes to two years). As concentrating solar power (CSP) plant operators are one of the key stakeholders in the national energy market, research and development enhancements for direct normal irradiance (DNI) forecasts are a major subtask. This project involves comparing different radiative scheme codes to improve day-ahead DNI forecasts on the national supercomputing infrastructure, running mesoscale simulations with NOAA's Weather Research & Forecasting (WRF) model. ASEFS also requires aerosol data fusion for an accurate representation of spatio-temporally variable atmospheric aerosols to reduce DNI bias error in clear-sky conditions over southern Queensland & New South Wales, where solar power is vulnerable to uncertainties from frequent aerosol radiative events such as bush fires and desert dust. Initial results from thirteen years of the Bureau of Meteorology's (BOM) deseasonalised DNI and MODIS NASA-Terra aerosol optical depth (AOD) anomalies demonstrated strong negative correlations in north and southeast Australia along with strong variability in AOD (~0.03-0.05). Radiative transfer schemes, DNI and AOD anomaly correlations will be discussed for the population and transmission grid centric regions where current and planned CSP plants dispatch electricity to capture peak prices in the market. Aerosol and solar irradiance datasets include satellite and ground-based assimilations from the national BOM, regional aerosol researchers and agencies. The presentation will provide an overview of this ASEFS project task on WRF and results to date. The overall goal of this ASEFS subtask is to develop a hybrid numerical weather prediction (NWP) and statistical/machine-learning multi-model ensemble strategy that meets the future operational requirements of CSP plant operators.
First principles predictions of intrinsic defects in aluminum arsenide, AlAs : numerical supplement.
Schultz, Peter Andrew
2012-04-01
This Report presents numerical tables summarizing properties of intrinsic defects in aluminum arsenide, AlAs, as computed by density functional theory. It serves as a numerical supplement to the results published in: P.A. Schultz, 'First principles predictions of intrinsic defects in Aluminum Arsenide, AlAs', Materials Research Society Symposia Proceedings 1370 (2011; SAND2011-2436C), and is intended for use as reference tables for a defect physics package in device models.
A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals
NASA Astrophysics Data System (ADS)
Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.
1994-01-01
Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
Numerical Prediction of Microstructure and Mechanical Properties During the Hot Stamping Process
NASA Astrophysics Data System (ADS)
Kan, Dongbin; Liu, Lizhong; Hu, Ping; Ma, Ning; Shen, Guozhe; Han, Xiaoqiang; Ying, Liang
2011-08-01
Numerical simulation and prediction of the microstructure and mechanical properties of products is very important in the development of hot stamping parts. With this method, changes to a hot stamped product's properties can be designed prior to the manufacturing stage, offering noticeable time and cost savings. In the present work, the hot stamping process of a U-channel of 22MnB5 boron steel is simulated using a coupled thermo-mechanical FEM program. Then, with the temperature evolution results obtained from the simulation, a model is applied to predict the microstructure evolution during the hot stamping process and the mechanical properties of this U-channel. The model consists of a phase transformation model and a mechanical properties prediction model. The phase transformation model proposed by Li et al. is used to predict the austenite decomposition into ferrite, pearlite, and bainite during the cooling process. The diffusionless austenite-martensite transformation is modeled using the Koistinen and Marburger relation. The mechanical properties prediction model is applied to predict the product's hardness distribution. The numerical simulation is evaluated by comparing simulation results with a U-channel hot stamping experiment. The numerically obtained temperature history is basically in agreement with the corresponding experimental observation. The evaluation indicates the feasibility of this set of methods for guiding the optimization of hot stamping process parameters and the design of hot stamping tools.
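The Koistinen and Marburger relation mentioned above gives the martensite volume fraction formed on quenching below the martensite start temperature Ms as f_M(T) = 1 − exp(−k(Ms − T)). A minimal sketch follows; the Ms and k values are typical literature-style numbers assumed for a 22MnB5-type boron steel, not parameters taken from this paper:

```python
import math

def martensite_fraction(T, Ms=425.0, k=0.011):
    """Koistinen-Marburger relation: volume fraction of martensite
    formed on quenching to temperature T (deg C) below the martensite
    start temperature Ms. Ms (deg C) and rate constant k (1/K) are
    illustrative assumptions, not values from the paper."""
    if T >= Ms:
        return 0.0  # no martensite forms above Ms
    return 1.0 - math.exp(-k * (Ms - T))
```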
Mui, K W; Wong, L T; Chung, L Y
2009-11-01
Atmospheric visibility impairment has gained increasing concern as it is associated with the existence of a number of aerosols as well as common air pollutants and produces unfavorable conditions for observation, dispersion, and transportation. This study analyzed atmospheric visibility data measured in urban and suburban Hong Kong (two selected stations) with respect to time-matched mass concentrations of common air pollutants including nitrogen dioxide (NO2), nitrogen monoxide (NO), respirable suspended particulates (PM10), sulfur dioxide (SO2), carbon monoxide (CO), and meteorological parameters including air temperature, relative humidity, and wind speed. No significant difference in atmospheric visibility was found between the two measurement locations (p ≥ 0.6, t test), and good atmospheric visibility was observed more frequently in summer and autumn than in winter and spring (p < 0.01, t test). It was also found that atmospheric visibility increased with temperature but decreased with the concentrations of SO2, CO, PM10, NO, and NO2. The results showed that atmospheric visibility was season dependent and had significant correlations with temperature, the mass concentrations of PM10 and NO2, and the air pollution index API (correlation coefficients |R| ≥ 0.7, p ≤ 0.0001, t test). Mathematical expressions catering to the seasonal variations of atmospheric visibility were thus proposed. By comparison, the proposed visibility prediction models were more accurate than some existing regional models. In addition to improving visibility prediction accuracy, this study should be useful for understanding the context of low atmospheric visibility, exploring possible remedial measures, and evaluating the impact of air pollution and atmospheric visibility impairment in this region. PMID:18951139
Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.
2008-07-01
Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
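The SVM above operates on 35 peptide descriptors covering amino acid content, charge, hydrophilicity, and polarity. The sketch below computes a toy three-element descriptor vector of the same flavor (length, approximate net charge, mean Kyte-Doolittle hydropathy); it illustrates the kind of features involved and is not the STEPP feature set:

```python
# Kyte-Doolittle hydropathy scale (standard published values)
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def peptide_features(seq):
    """Toy descriptor vector for a peptide sequence: length, crude net
    charge near neutral pH (K/R as +1, H as +0.1, D/E as -1), and mean
    Kyte-Doolittle hydropathy. Illustrative only; the STEPP model uses
    35 such content/charge/hydrophilicity/polarity properties."""
    seq = seq.upper()
    n = len(seq)
    charge = (seq.count("K") + seq.count("R") + 0.1 * seq.count("H")
              - seq.count("D") - seq.count("E"))
    hydropathy = sum(KD[a] for a in seq) / n
    return [n, charge, hydropathy]
```

Vectors like this, computed per peptide, would then feed a classifier trained to separate proteotypic from non-proteotypic peptides.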
Assessment of a soil moisture retrieval with numerical weather prediction model temperature
Technology Transfer Automated Retrieval System (TEKTRAN)
The effect of using a Numerical Weather Prediction (NWP) soil temperature product, instead of estimates provided by concurrent 37 GHz data, on satellite-based passive microwave retrieval of soil moisture was evaluated. This was prompted by the change in system configuration of preceding mult...
NASA Technical Reports Server (NTRS)
Baker, A. J.; Manhardt, P. D.; Orzechowski, J. A.
1979-01-01
A numerical solution algorithm is established for prediction of subsonic turbulent three-dimensional flows in aerodynamic configuration juncture regions. A turbulence closure model is established using the complete Reynolds stress. Pressure coupling is accomplished using the concepts of complementary and particular solutions to a Poisson equation. Specifications for data input juncture geometry modification are presented.
On the potential use of radar-derived information in operational numerical weather prediction
NASA Technical Reports Server (NTRS)
Mcpherson, R. D.
1986-01-01
Estimates of requirements likely to be levied on a new observing system for mesoscale meteorology are given. Potential observing systems for mesoscale numerical weather prediction are discussed. Thermodynamic profiler radiometers, infrared radiometer atmospheric sounders, Doppler radar wind profilers and surveillance radar, and moisture profilers are among the instruments described.
Zhang, Jin-Feng; Chen, Yao; Lin, Guo-Shi; Zhang, Jian-Dong; Tang, Wen-Long; Huang, Jian-Huang; Chen, Jin-Shou; Wang, Xing-Fu; Lin, Zhi-Xiong
2016-06-01
Interferon-induced protein with tetratricopeptide repeat 1 (IFIT1) plays a key role in growth suppression and apoptosis promotion in cancer cells. Interferon was reported to induce the expression of IFIT1 and inhibit the expression of O-6-methylguanine-DNA methyltransferase (MGMT). This study aimed to investigate the expression of IFIT1, the correlation between IFIT1 and MGMT, and their impact on the clinical outcome in newly diagnosed glioblastoma. The expression of IFIT1 and MGMT and their correlation were investigated in the tumor tissues from 70 patients with newly diagnosed glioblastoma. The effects on progression-free survival and overall survival were evaluated. Of 70 cases, 57 (81.4%) tissue samples showed high expression of IFIT1 by immunostaining. The χ² test indicated that the expression of IFIT1 and MGMT was negatively correlated (r = -0.288, P = .016). Univariate and multivariate analyses confirmed high IFIT1 expression as a favorable prognostic indicator for progression-free survival (P = .005 and .017) and overall survival (P = .001 and .001), respectively. Patients with 2 favorable factors (high IFIT1 and low MGMT) had an improved prognosis compared with others. The results demonstrated significantly increased expression of IFIT1 in newly diagnosed glioblastoma tissue. The negative correlation between IFIT1 and MGMT expression may be triggered by interferon. High IFIT1 can be a predictive biomarker of favorable clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma. PMID:26980050
Analytical and numerical models to predict the behavior of unbonded flexible risers under torsion
NASA Astrophysics Data System (ADS)
Ren, Shao-fei; Xue, Hong-xiang; Tang, Wen-yong
2016-04-01
This paper presents analytical and numerical models to predict the behavior of unbonded flexible risers under torsion. The analytical model takes local bending and torsion of the tensile armor wires into consideration, and equilibrium equations for the forces and displacements of the layers are deduced. The numerical model includes the lay angle, the cross-sectional profiles of the carcass and pressure armor layer, and contact between layers. Abaqus/Explicit quasi-static simulation and mass scaling are adopted to avoid the convergence problems and excessive computation time caused by geometric and contact nonlinearities. Results show that local bending and torsion of the helical strips may have a great influence on torsional stiffness, but the stress related to bending and torsion is negligible; the presence of anti-friction tapes may greatly influence both torsional stiffness and stress; and the hysteresis of the torsion-twist relationship under cyclic loading is captured by the numerical model, which the analytical model cannot predict because it neglects friction between layers.
A New Objective Technique for Verifying Mesoscale Numerical Weather Prediction Models
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Manobianco, John; Lane, John E.; Immer, Christopher D.
2003-01-01
This report presents a new objective technique to verify predictions of the sea-breeze phenomenon over east-central Florida by the Regional Atmospheric Modeling System (RAMS) mesoscale numerical weather prediction (NWP) model. The Contour Error Map (CEM) technique identifies sea-breeze transition times in objectively-analyzed grids of observed and forecast wind, verifies the forecast sea-breeze transition times against the observed times, and computes the mean post-sea breeze wind direction and speed to compare the observed and forecast winds behind the sea-breeze front. The CEM technique is superior to traditional objective verification techniques and previously-used subjective verification methodologies because it is automated, requiring little manual intervention; it accounts for both spatial and temporal scales and variations; it accurately identifies and verifies the sea-breeze transition times; and it provides verification contour maps and simple statistical parameters for easy interpretation. The CEM uses a parallel lowpass boxcar filter and a high-order bandpass filter to identify the sea-breeze transition times in the observed and model grid points. Once the transition times are identified, the CEM fits a Gaussian function to the histogram of transition time differences between the model and observations. The fitted parameters of the Gaussian function then describe the timing bias and the variance of the timing differences across the valid comparison domain. Once the transition times are all identified at each grid point, the CEM computes the mean wind direction and speed during the remainder of the day for all times and grid points after the sea-breeze transition time. The CEM technique performed quite well when compared to independent meteorological assessments of the sea-breeze transition times and to results from a previously published subjective evaluation. The algorithm correctly identified a forecast or observed sea-breeze occurrence
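The CEM's timing-verification step, fitting a Gaussian to the histogram of forecast-minus-observed transition times to extract a timing bias and spread, can be sketched as follows. The synthetic 20-minute bias and 30-minute spread are illustrative stand-ins, not values from the RAMS evaluation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic (forecast - observed) sea-breeze transition times, in minutes,
# one value per grid point. Bias and spread here are invented for illustration.
rng = np.random.default_rng(0)
dt = rng.normal(loc=20.0, scale=30.0, size=2000)

# Histogram the timing differences across the comparison domain.
counts, edges = np.histogram(dt, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])

def gaussian(t, amp, mu, sigma):
    """Gaussian histogram model: amplitude, timing bias, timing spread."""
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Fit the Gaussian to the histogram; sample moments seed the optimizer.
p0 = [counts.max(), dt.mean(), dt.std()]
(amp, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
print(f"timing bias ~ {mu:.1f} min, spread ~ {abs(sigma):.1f} min")
```

The fitted `mu` plays the role of the CEM's timing bias and `sigma` its timing variance; the real technique applies this per comparison domain after the filter-based transition detection.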
NASA Astrophysics Data System (ADS)
Jošt, D.; Škerlavaj, A.; Morgut, M.; Mežnar, P.; Nobile, E.
2015-01-01
The paper presents numerical simulations of flow in a model of a high head Francis turbine and a comparison of the results to measurements. Numerical simulations were done with two CFD (Computational Fluid Dynamics) codes, Ansys CFX and OpenFOAM. Steady-state simulations were performed with the k-epsilon and SST models, while the SAS SST ZLES model was used for transient simulations. With proper grid refinement in the distributor and runner, and by taking into account losses in the labyrinth seals, very accurate predictions of the torque on the shaft, the head, and the efficiency were obtained. The calculated axial and circumferential velocity components on two planes in the draft tube matched the experimental results well.
Numerical prediction of the monsoon depression of 5-7 July 1979. [Monsoon Experiment (MONEX)]
NASA Technical Reports Server (NTRS)
Shukla, J.; Atlas, R.; Baker, W. E.
1981-01-01
A well defined monsoon depression was used for two assimilation and forecast experiments: (1) using conventional surface and upper air data, (2) using these data plus Monex data. The data sets were assimilated and used with a general circulation model to make numerical predictions. The model, the analysis and assimilation procedure, the differences in the analyses due to different data inputs, and the differences in the numerical predictions are described. The MONEX data have a positive impact, although the differences after 24 hr are not significant. The MONEX assimilation does not agree with manual analysis location of depression center. The 2.5 x 3 deg horizontal resolution of the prediction model is too coarse. The assimilation of geopotential height data derived from satellite soundings generated gravity waves with amplitudes similar to the meteorologically significant features investigated.
Numerical criteria for the evaluation of ab initio predictions of protein structure.
Zemla, A; Venclovas, C; Reinhardt, A; Fidelis, K; Hubbard, T J
1997-01-01
As part of the CASP2 protein structure prediction experiment, a set of numerical criteria were defined for the evaluation of "ab initio" predictions. The evaluation package comprises a series of electronic submission formats, a submission validator, evaluation software, and a series of scripts to summarize the results for the CASP2 meeting and for presentation via the World Wide Web (WWW). The evaluation package is accessible for use on new predictions via WWW so that results can be compared to those submitted to CASP2. With further input from the community, the evaluation criteria are expected to evolve into a comprehensive set of measures capturing the overall quality of a prediction as well as critical detail essential for further development of prediction methods. We discuss present measures, limitations of the current criteria, and possible improvements. PMID:9485506
Costigan, K.R.; Flicker, D.G.
1995-09-01
The South Area of Tooele Army Depot is one of the US Army's storage facilities for its stockpile of chemical weapon agents. The Department of Defense is preparing to destroy the aging stockpiles of lethal chemical munitions, which have existed since the end of World War II. Although the danger is slight, accurate predictions of the wind fields in the valley and accurate dispersion calculations are important in the event of an accident involving toxic chemicals at the depot. In order to prepare for an emergency which might involve a release of toxic agents to the atmosphere, the Higher Order Turbulence Model for Atmospheric Circulations (HOTMAC) and its companion code RAndom Particle Transport And Diffusion (RAPTAD) have been adapted for use in predicting where dangerous amounts of these chemicals may travel. Both codes have been applied to a number of air quality studies in the past, including previous dispersion studies at Tooele.
Rorick, Amber; Michael, Matthew A.; Yang, Liu; Zhang, Yong
2015-01-01
Oxygen is an important element in most biologically significant molecules, and experimental solid-state 17O NMR studies have provided numerous useful structural probes to study these systems. However, computational predictions of solid-state 17O NMR chemical shift tensor properties are still challenging in many cases, and in particular prior computational work has generally been limited to one type of oxygen-containing system. This work provides the first systematic study of the effects of geometry refinement, method, and basis sets for metal and non-metal elements in both geometry optimization and NMR property calculations of some biologically relevant oxygen-containing compounds with a good variety of X-O bonding groups, X = H, C, N, P, and metal. The experimental range studied spans 1455 ppm, a major part of the reported 17O NMR chemical shifts in organic and organometallic compounds. A number of computational factors towards relatively general and accurate predictions of 17O NMR chemical shifts were studied to provide helpful and detailed suggestions for future work. For the various kinds of oxygen-containing compounds studied, the best computational approach yields a theory-versus-experiment correlation coefficient R² of 0.9880 and a mean absolute deviation of 13 ppm (1.9% of the experimental range) for isotropic NMR shifts, and an R² of 0.9926 for all shift tensor properties. These results should facilitate future computational studies of 17O NMR chemical shifts in many biologically relevant systems, and the high accuracy may also help refinement and determination of active-site structures of some oxygen-containing substrate bound proteins. PMID:26274812
Harris, Adam; Harries, Priscilla
2016-01-01
overall accuracy being reported. Data were extracted using a standardised tool, by one reviewer, which could have introduced bias. Devising search terms for prognostic studies is challenging. Every attempt was made to devise search terms that were sufficiently sensitive to detect all prognostic studies; however, it remains possible that some studies were not identified. Conclusion: Studies of prognostic accuracy in palliative care are heterogeneous, but the evidence suggests that clinicians' predictions are frequently inaccurate. No sub-group of clinicians was consistently shown to be more accurate than any other. Implications of key findings: Further research is needed to understand how clinical predictions are formulated and how their accuracy can be improved. PMID:27560380
Numerical predictions for planets in the debris discs of HD 202628 and HD 207129
NASA Astrophysics Data System (ADS)
Thilliez, E.; Maddison, S. T.
2016-04-01
Resolved debris disc images can exhibit a range of radial and azimuthal structures, including gaps and rings, which can result from planetary companions shaping the disc by their gravitational influence. Currently, there are no tools available to determine the architecture of potential companions from disc observations. Recent work by Rodigas, Malhotra & Hinz shows how one can empirically estimate the maximum mass and minimum semimajor axis of a hidden planet from the width of the disc in scattered light. In this work, we apply the predictions of Rodigas et al. to two debris discs, HD 202628 and HD 207129. We aim to test whether the predicted orbits of the planets can explain the features of their debris discs, such as the eccentricity and sharp inner edge. We first run dynamical simulations using the predicted planetary parameters of Rodigas et al., and then numerically search for better parameters. Using a modified N-body code including radiation forces, we perform simulations over a broad range of planet parameters and compare synthetic images from our simulations to the observations. We find that the observational features of HD 202628 can be reproduced with a planet five times smaller than expected, located 30 AU beyond the predicted value, while the best match for HD 207129 is a planet located 5-10 AU beyond the predicted location with a smaller eccentricity. We conclude that the predictions of Rodigas et al. provide a good starting point but should be complemented by numerical simulations.
Mixing of a point-source indoor pollutant: Numerical predictions and comparison with experiments
Lobscheid, C.; Gadgil, A.J.
2002-01-01
In most practical estimates of indoor pollutant exposures, it is common to assume that the pollutant is uniformly and instantaneously mixed in the indoor space. It is also commonly known that this assumption is simplistic, particularly for point sources and for short-term or localized indoor exposures. We report computational fluid dynamics (CFD) predictions of the mixing time of a point-pulse release of a pollutant in an unventilated, mechanically mixed, isothermal room. We aimed to determine the adequacy of the standard RANS two-equation (κ-ε) turbulence model to predict the mixing times under these conditions. The predictions were made for the twelve mixing time experiments performed by Drescher et al. (1995). We paid attention to adequate grid resolution, suppression of numerical diffusion, and careful simulation of the mechanical blowers used in the experiments. We found that the predictions are in good agreement with the experimental measurements.
Operational numerical weather prediction on the CYBER 205 at the National Meteorological Center
NASA Technical Reports Server (NTRS)
Deaven, D.
1984-01-01
The Development Division of the National Meteorological Center (NMC), which is responsible for maintaining and developing the center's numerical weather forecasting systems, is discussed. Because of the mission of NMC, data products must be produced reliably and on time twice daily, free of surprises for forecasters. Personnel of the Development Division are in a rather unique situation: they must develop new, advanced techniques for numerical analysis and prediction utilizing current state-of-the-art methods, and implement them in an operational fashion without damaging the operations of the center. With the computational speed and resources now available from the CYBER 205, Development Division personnel will be able to introduce advanced analysis and prediction techniques into the operational job suite without disrupting the daily schedule. The capabilities of the CYBER 205 are discussed.
NASA Astrophysics Data System (ADS)
Boyko, Oleksiy; Zheleznyak, Mark
2015-04-01
The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code has recently been developed for use on multiprocessor systems: multicore PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated in case studies of flood prediction for mountain watersheds of the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
One-level prediction-A numerical method for estimating undiscovered metal endowment
McCammon, R.B.; Kork, J.O.
1992-01-01
One-level prediction has been developed as a numerical method for estimating undiscovered metal endowment within large areas. The method is based on a presumed relationship between a numerical measure of geologic favorability and the spatial distribution of metal endowment. Metal endowment within an unexplored area for which the favorability measure is greater than a favorability threshold level is estimated to be proportional to the area of that unexplored portion. The constant of proportionality is the ratio of the discovered endowment found within a suitably chosen control region, which has been explored, to the area of that explored region. In addition to the estimate of undiscovered endowment, a measure of the error of the estimate is also calculated. One-level prediction has been used to estimate the undiscovered uranium endowment in the San Juan basin, New Mexico, U.S.A. A subroutine to perform the necessary calculations is included. © 1992 Oxford University Press.
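The proportionality estimator described above reduces to simple arithmetic and can be sketched directly. The numbers below are invented for illustration and are not from the San Juan basin study.

```python
# One-level prediction sketch: undiscovered endowment in the favorable,
# unexplored area is the endowment density of an explored control region
# times that area. All figures below are hypothetical.
def one_level_prediction(control_endowment, control_area, favorable_unexplored_area):
    density = control_endowment / control_area      # e.g. tonnes per km^2
    return density * favorable_unexplored_area

estimate = one_level_prediction(
    control_endowment=50_000.0,       # tonnes found in the explored control region
    control_area=2_000.0,             # km^2 explored
    favorable_unexplored_area=800.0,  # km^2 above the favorability threshold
)
print(estimate)  # -> 20000.0 tonnes
```

The paper's method also attaches an error measure to the estimate; a sketch of that would require the favorability model itself, which is beyond this illustration.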
Development of a numerical method for the prediction of turbulent flows in dump diffusers
NASA Astrophysics Data System (ADS)
Ando, Yasunori; Kawai, Masafumi; Sato, Yukinori; Toh, Hidemi
1987-01-01
In order to obtain an effective tool for designing dump diffusers for gas turbine combustors, a finite-volume numerical calculation method has been developed for the solution of the two-dimensional/axisymmetric incompressible steady Navier-Stokes equations in a general curvilinear coordinate system. The method was applied to calculations of turbulent flows in a two-dimensional dump diffuser with uniform and distorted inlet velocity profiles, as well as in an annular dump diffuser with a uniform inlet velocity profile, and the calculated results were compared with experimental data. The numerical results showed good agreement with the experimental data for both inlet velocity profiles; the method was thus confirmed to be an effective tool for the development of dump diffusers, able to predict the flow pattern, velocity distribution, and pressure loss.
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.
2015-12-01
Weather prediction accuracy has become very important for the Northeast U.S. given the devastating effects of extreme weather events in recent years. Weather forecasting systems are used to build strategies that prevent catastrophic losses for human lives and the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill as numerical prediction techniques are strengthened by increased supercomputing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) by utilizing a Bayesian regression approach to improve the prediction of extreme weather events for the Northeast U.S. The basic concept behind the Bayesian regression approach is to take advantage of the strengths of the two atmospheric modeling systems and, as in the multi-model ensemble approach, to limit their weaknesses, which are related to systematic and random errors in the numerical prediction of physical processes. The first part of this study is focused on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by conducting an error analysis based on in-situ observations from meteorological stations of the National Weather Service (NWS) for wind speed and wind direction, and on NCEP Stage IV mosaicked regional multi-sensor precipitation data. The preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables using various combinations of sixteen of the events as predictors of the seventeenth. This presentation will illustrate the implemented methodology and the obtained results for wind speed, wind direction and precipitation, as well as set the research steps that will be
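The train-on-some-storms, apply-out-of-sample combination strategy described above can be sketched with a simple least-squares stand-in for the Bayesian regression: fit weights for the two models (plus a bias term) against observations on "training" events, then blend the held-out events. The data below are synthetic, and the least-squares weighting is an assumed simplification of the paper's actual method.

```python
import numpy as np

# Synthetic "observed" wind speeds and two biased, noisy model forecasts.
rng = np.random.default_rng(1)
truth = 10 + 2 * rng.standard_normal(200)             # observations
wrf   = truth + 1.0 + 0.8 * rng.standard_normal(200)  # model A: +1 bias, less noise
rams  = truth - 0.5 + 1.2 * rng.standard_normal(200)  # model B: -0.5 bias, more noise

# Fit blending weights (and a bias term) on the first 150 "storms".
A = np.column_stack([wrf[:150], rams[:150], np.ones(150)])
w, *_ = np.linalg.lstsq(A, truth[:150], rcond=None)

# Apply the fitted weights out of sample, as the study does with held-out events.
A_test = np.column_stack([wrf[150:], rams[150:], np.ones(50)])
blend = A_test @ w

rmse = lambda x: np.sqrt(np.mean((x - truth[150:]) ** 2))
print(rmse(wrf[150:]), rmse(rams[150:]), rmse(blend))
```

With errors like these, the blend's out-of-sample RMSE falls below that of either individual model, which is the behavior the multi-model combination is designed to exploit.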
vom Saal, Frederick S.; Welshons, Wade V.
2016-01-01
There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273
Simulation studies of proposed observing systems and their impact on numerical weather prediction
NASA Technical Reports Server (NTRS)
Atlas, R.; Kalnay, E.; Susskind, J.; Baker, W. E.; Halem, M.
1984-01-01
A series of realistic simulation studies is being conducted as a cooperative effort between the European Centre for Medium Range Weather Forecasts (ECMWF), the National Meteorological Center (NMC), and the Goddard Laboratory for Atmospheric Sciences (GLAS) to provide a quantitative assessment of the potential impact of proposed observation systems on large scale numerical weather prediction. A special objective of this project is to avoid the unrealistic character of earlier simulation studies.
The role of radiation-dynamics interaction in regional numerical weather prediction
NASA Technical Reports Server (NTRS)
Chang, Chia-Bo
1988-01-01
The role of radiation-dynamics interaction in regional numerical weather prediction of severe storm environments and mesoscale convective systems over the United States is investigated. Based upon earlier numerical model simulation experiments, it is believed that such interaction can have a profound impact on the dynamics and thermodynamics of regional weather systems. The research will be carried out using real-data model forecast experiments performed on the Cray X-MP computer. The forecasting system to be used is a comprehensive mesoscale prediction system which includes analysis and initialization, the dynamic model, and the post-forecast diagnosis codes. The model physics are currently undergoing many improvements in the parameterization of radiation processes in the model atmosphere. The forecast experiments, in conjunction with in-depth model verification and diagnosis, are aimed at a quantitative understanding of the interaction between atmospheric radiation and regional dynamical processes in mesoscale models as well as in nature, so that significant advances in regional numerical weather prediction can be made. Results should also provide valuable information for observational designs in the area of remote sensing techniques to study the characteristics of air-land thermal interaction and moist processes under various atmospheric conditions.
Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction
NASA Technical Reports Server (NTRS)
Lee, Seongkyu; Brentner, Kenneth S.; Farassat, F.; Morris, Philip J.
2008-01-01
Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. The pressure gradient is a key quantity for imposing the boundary condition in acoustic scattering problems. The first formulation is derived from the gradient of the Ffowcs Williams-Hawkings (FW-H) equation and has a form in which the observer-time differentiation remains outside the integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. This formulation avoids numerical time differentiation with respect to the observer time and is therefore computationally more efficient. The acoustic pressure gradient predicted by these new formulations is validated through comparison with available exact solutions for stationary and moving monopole sources; the agreement between the predictions and the exact solutions is excellent. The formulations are applied to rotor noise problems for two model rotors, and a purely numerical approach is compared with the analytical formulations. The agreement between the analytical formulations and the numerical method is excellent for both stationary and moving observer cases.
Probe measurements and numerical model predictions of evolving size distributions in premixed flames
De Filippo, A.; Sgro, L.A.; Lanzuolo, G.; D'Alessio, A.
2009-09-15
Particle size distributions (PSDs), measured with a dilution probe and a Differential Mobility Analyzer (DMA), and numerical predictions of these PSDs, based on a model that includes only coagulation or, alternatively, inception and coagulation, are compared to investigate particle growth processes and possible sampling artifacts in the post-flame region of a C/O = 0.65 premixed laminar ethylene-air flame. Inputs to the numerical model are the PSD measured early in the flame (the initial condition for the aerosol population) and the temperature profile measured along the flame's axial centerline. The measured PSDs are initially unimodal, with a modal mobility diameter of 2.2 nm, and become bimodal later in the post-flame region. The smaller mode is best predicted with a size-dependent coagulation model, which allows some fraction of the smallest particles to escape collisions without coalescence or coagulation through the size-dependent coagulation efficiency (γ_SD). Instead, when γ = 1 and the coagulation rate is equal to the collision rate for all particles regardless of their size, the coagulation model significantly underpredicts the number concentration of both modes and overpredicts the size of the largest particles in the distribution compared to the measured size distributions at various heights above the burner. The coagulation (γ_SD) model alone is unable to reproduce the larger particle mode (mode II) well. Combining persistent nucleation with size-dependent coagulation brings the predicted PSDs to within experimental error of the measurements, which suggests that surface growth processes are relatively insignificant in these flames. Shifting the measured PSDs a few mm closer to the burner surface, a correction generally adopted for probe perturbations, does not produce a better match between the experimental and numerical results.
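The idea of a size-dependent coagulation efficiency can be sketched with a toy discrete Smoluchowski model in which collisions involving the smallest size class stick with reduced probability. The constant kernel, the efficiency values, the bin count, and the time step below are all arbitrary illustrations, not the kernel or γ_SD values of the flame study.

```python
import numpy as np

NBINS = 20
beta = 1e-3  # constant collision kernel (arbitrary units)

def gamma(i, j):
    """Toy size-dependent efficiency: reduced sticking when the smallest
    size class is involved, 1 otherwise (stand-in for gamma_SD)."""
    return 0.1 if min(i, j) == 1 else 1.0

def step(N, dt):
    """One explicit Euler step of the discrete Smoluchowski equation:
    gain from pairs i + (k - i) -> k, loss from k colliding with anything."""
    dN = np.zeros_like(N)
    for k in range(1, NBINS + 1):
        gain = 0.5 * sum(gamma(i, k - i) * beta * N[i - 1] * N[k - i - 1]
                         for i in range(1, k))
        loss = N[k - 1] * sum(gamma(k, j) * beta * N[j - 1]
                              for j in range(1, NBINS + 1))
        dN[k - 1] = gain - loss
    return N + dt * dN

N = np.zeros(NBINS)
N[0] = 1e3  # start monodisperse in the smallest size class
for _ in range(100):
    N = step(N, dt=0.01)
print(N[:4])  # number concentration in the first few size classes
```

Because γ(1, ·) < 1, the smallest class depletes more slowly than it would with γ = 1, which is the qualitative mechanism the abstract invokes to preserve the small measured mode.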
NASA Astrophysics Data System (ADS)
Fontana, A.; Marzari, F.
2016-05-01
Context. Planetesimals and planets embedded in a circumstellar disk are dynamically perturbed by the disk gravity, which causes apsidal line precession at a rate that depends on the disk density profile and on the distance of the massive body from the star. Aims: Different analytical models are exploited to compute the precession rate of the perihelion, ϖ̇. We compare them to verify their equivalence, in particular after the analytical manipulations performed to derive handy formulas, and test their predictions against numerical models in some selected cases. Methods: The theoretical precession rates were computed with analytical algorithms found in the literature using the Mathematica symbolic code, while the numerical simulations were performed with the hydrodynamical code FARGO. Results: For low-mass bodies (planetesimals), the analytical approaches described in Binney & Tremaine (2008, Galactic Dynamics, p. 96), Ward (1981, Icarus, 47, 234), and Silsbee & Rafikov (2015a, ApJ, 798, 71) are equivalent under the same initial conditions for the disk in terms of mass, density profile, and inner and outer borders. They also match the numerical values computed with FARGO reasonably well away from the outer border of the disk. On the other hand, the predictions of the classical Mestel disk (Mestel 1963, MNRAS, 126, 553) for disks with p = 1 depart significantly from the numerical solution at radial distances beyond one-third of the disk extension, because the Mestel disk assumes that the outer disk border lies at infinity. For massive bodies such as terrestrial and giant planets, the agreement of the analytical approaches is progressively poorer because of the changes in the disk structure induced by the planet's gravity. For giant planets the precession rate changes sign and is higher than the modulus of the theoretical value by a factor ranging from 1.5 to 1.8. In this case, the correction of the formula proposed by Ward (1981) to
The Effect of Element Formulation on the Prediction of Boost Effects in Numerical Tube Bending
Bardelcik, A.; Worswick, M.J.
2005-08-05
This paper presents advanced FE models of the pre-bending process to investigate the effect of element formulation on the prediction of boost effects in tube bending. Tube bending experiments were conducted with 3'' (OD) IF (interstitial-free) steel tube on a fully instrumented Eagle EPT-75 servo-hydraulic mandrel-rotary draw tube bender. Experiments in which the bending boost was varied at three levels yielded consistent trends in the strain and thickness distributions within the pre-bent tubes. A numerical model of the rotary draw tube bender was used to simulate pre-bending of the IF tube with the three levels of boost from the experiments. To examine the effect of element formulation on the prediction of boost, the tube was modeled with both shell and solid elements. Both models predicted the overall strain and thickness results well, but each showed different trends.
NASA Astrophysics Data System (ADS)
Okabe, Tomonaga; Yashiro, Shigeki
This study proposes a cohesive zone model (CZM) for predicting fatigue damage growth in notched carbon-fiber-reinforced plastic (CFRP) cross-ply laminates. In this model, damage growth in the fracture process zone of cohesive elements due to cyclic loading is represented by a conventional damage mechanics model. We first investigated whether this model can appropriately express fatigue damage growth for a circular crack embedded in an isotropic solid. This investigation demonstrated that the model could reproduce the results of the well-established fracture mechanics model combined with Paris' law by tuning adjustable parameters. We then numerically investigated the damage process in notched CFRP cross-ply laminates under tensile cyclic loading and compared the predicted damage patterns with those in the experiments reported by Spearing et al. (Compos. Sci. Technol. 1992). The predicted damage patterns agreed with the experimental results, which exhibited the extension of multiple types of damage (e.g., splits, transverse cracks, and delaminations) near the notches.
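The "fracture mechanics model plus Paris' law" baseline used for the embedded-circular-crack check can be sketched by integrating da/dN = C(ΔK)^m with the penny-shaped crack stress intensity ΔK = (2/π)Δσ√(πa). All numerical values below are illustrative assumptions, not the paper's calibrated parameters.

```python
import math

def paris_law_growth(a0, dsigma, C, m, n_cycles, dN=100):
    """Integrate Paris' law da/dN = C * (dK)**m for an embedded circular
    (penny-shaped) crack, with dK = (2/pi) * dsigma * sqrt(pi * a).
    Returns a list of (cycle count, crack radius) pairs."""
    a, history = a0, [(0, a0)]
    for N in range(dN, n_cycles + 1, dN):
        dK = (2.0 / math.pi) * dsigma * math.sqrt(math.pi * a)
        a += C * dK ** m * dN  # forward-Euler step over a block of dN cycles
        history.append((N, a))
    return history

# Illustrative values (stress in MPa, lengths in m, assumed Paris constants):
hist = paris_law_growth(a0=1e-3, dsigma=100.0, C=1e-11, m=3.0, n_cycles=20000)
```

The crack radius grows monotonically with cycle count, which is the behavior the CZM damage variable is tuned to reproduce.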
NASA Astrophysics Data System (ADS)
Zhang, Na; Yao, Jun; Huang, Zhaoqin; Wang, Yueying
2013-06-01
Numerical simulation in naturally fractured media is challenging because porous media and fractures coexist on multiple scales that need to be coupled. We present a new approach to reservoir simulation that gives accurate resolution of both large-scale and fine-scale flow patterns. Multiscale methods are suitable for this type of modeling because they capture the large-scale behavior of the solution without resolving all the small-scale features. Dual-porosity models, in view of their strength and simplicity, are mainly used for sugar-cube representations of fractured media. In such a representation, the transfer function between the fracture and the matrix block can be readily calculated for water-wet media. For a mixed-wet system, the evaluation of the transfer function becomes complicated due to the effect of gravity. In this work, we use a multiscale finite element method (MsFEM) for two-phase flow in fractured media based on the discrete-fracture model. By combining MsFEM with the discrete-fracture model, we aim for a numerical scheme that facilitates fractured reservoir simulation without upscaling. MsFEM uses a standard Darcy model to approximate the pressure and saturation on a coarse grid, whereas fine-scale effects are captured through basis functions constructed by solving local flow problems with the discrete-fracture model. The accuracy and robustness of MsFEM are shown through several examples. In the first example, we consider several small fractures in a matrix and compare the results with those obtained by the finite element method. We then apply the MsFEM to more complex models. The results indicate that the MsFEM is a promising path toward direct simulation of highly resolved geomodels.
Numerical predictions and experimental results of a dry bay fire environment.
Suo-Anttila, Jill Marie; Gill, Walter; Black, Amalia Rebecca
2003-11-01
The primary objective of the Safety and Survivability of Aircraft Initiative is to improve the safety and survivability of systems by using validated computational models to predict the hazard posed by a fire. To meet this need, computational model predictions and experimental data have been obtained to provide insight into the thermal environment inside an aircraft dry bay. The calculations were performed using the Vulcan fire code, and the experiments were completed using a specially designed full-scale fixture. The focus of this report is to present comparisons of the Vulcan results with experimental data for a selected test scenario and to assess the capability of the Vulcan fire field model to accurately predict dry bay fire scenarios. Also included is an assessment of the sensitivity of the fire model predictions to boundary condition distribution and grid resolution. To facilitate the comparison with experimental results, a brief description of the dry bay fire test fixture and a detailed specification of the geometry and boundary conditions are included. Overall, the Vulcan fire field model has shown the capability to predict the thermal hazard posed by a sustained pool fire within a dry bay compartment of an aircraft, although more extensive experimental data and more rigorous comparisons are required for model validation.
NASA Astrophysics Data System (ADS)
Kavetski, D.; Clark, M. P.; Fenicia, F.
2011-12-01
Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large-scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc., are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
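The start-of-step versus end-of-step flux point can be made concrete with a toy linear reservoir dS/dt = -kS: evaluating the outflow flux at the start of the step gives explicit Euler, evaluating it at the end gives implicit Euler, and the two bracket the exact exponential decay. This store is a sketch for illustration, not one of the models discussed in the abstract.

```python
import math

def simulate_store(S0, k, dt, n_steps, end_of_step=False):
    """Linear reservoir dS/dt = -k*S, discretized with the outflow flux
    evaluated either at the start of the step (explicit Euler) or at the
    end of the step (implicit Euler)."""
    S, out = S0, []
    for _ in range(n_steps):
        S = S / (1.0 + k * dt) if end_of_step else S * (1.0 - k * dt)
        out.append(S)
    return out

S0, k, dt, n = 1.0, 0.5, 0.5, 10
explicit = simulate_store(S0, k, dt, n)                    # start-of-step flux
implicit = simulate_store(S0, k, dt, n, end_of_step=True)  # end-of-step flux
exact = S0 * math.exp(-k * dt * n)
```

For the same parameter k the two implementations produce systematically different outputs, so a calibration against the same data would infer different "best" parameter values — the seemingly trivial implementation choice leaks into inference.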
NASA Astrophysics Data System (ADS)
Duffy, C.; Leonard, L. N.; Ahalt, S.; Idaszak, R.; Tarboton, D.; Hooper, R. P.; Band, L. E.
2012-12-01
There is a clear national need to provide geoscience researchers with seamless and fast access to essential geo-spatial/geo-temporal data to support the physics-based numerical models necessary to understand, predict and manage the nation's surface and groundwater resources. Fundamental advances in science, such as the evaluation of ecosystem and watershed services and the detection and attribution of the impact of climatic change, are examples that will require high-resolution, spatially explicit assessments. In this paper we propose the concept of Essential Terrestrial Variables (ETVs), which we define as those variables that are nominally required to support watershed/catchment numerical prediction anywhere in the continental US and ultimately at the global scale. ETVs would represent a fundamental community resource necessary to build the products/parameters/forcings commonly used in distributed, fully coupled watershed and river basin models. We argue that there are at least three fundamental issues that must be resolved before implementing ETVs in support of a national water model: 1) data access and accessibility, 2) data scale and scalability, and 3) community provenance and data sustainability. At present, there is no unified data infrastructure for supporting watershed models, and the data resource itself (weather/climate reanalysis products, stream flow, groundwater, soils, land cover, satellite data products, etc.) resides on many federal servers with limited or poorly organized access, many data formats and no common geo-referencing. Beyond the problem of access to national data, the scale and scalability of computation for both data processing and model execution represent a major hurdle. This is especially true since a full-scale national strategy for numerical watershed prediction will require data resources to reside very close to the numerical model computation. Finally, model/data provenance should be sufficient to allow
Post audit of a numerical prediction of wellfield drawdown in a semiconfined aquifer system
Stewart, M.; Langevin, C.
1999-01-01
A numerical ground water flow model was created in 1978 and revised in 1981 to predict the drawdown effects of a proposed municipal wellfield permitted to withdraw 30 million gallons per day (mgd; 1.1 × 10⁵ m³/day) of water from the semiconfined Floridan Aquifer system. The predictions are based on the assumption that water levels in the semiconfined Floridan Aquifer reach a long-term, steady-state condition within a few days of initiation of pumping. Using this assumption, a 75 day simulation without water table recharge, pumping at the maximum permitted rates, was considered to represent a worst-case condition and the greatest drawdowns that could be experienced during wellfield operation. This method of predicting wellfield effects was accepted by the permitting agency. For this post audit, observed drawdowns were derived by taking the difference between pre-pumping and post-pumping potentiometric surface levels. Comparison of predicted and observed drawdowns suggests that actual drawdown over a 12 year period exceeds predicted drawdown by a factor of two or more. Analysis of the source of error in the 1981 predictions suggests that the values used for transmissivity, storativity, specific yield, and leakance are reasonable at the wellfield scale. Simulation using actual 1980-1992 pumping rates improves the agreement between predicted and observed drawdowns. The principal source of error is the assumption that water levels in a semiconfined aquifer achieve a steady-state condition after a few days or weeks of pumping. Simulations using a version of the 1981 model modified to include recharge and evapotranspiration suggest that it can take hundreds of days or several years for water levels in the linked Surficial and Floridan Aquifers to reach an apparent steady-state condition, and that slow declines in levels continue for years after the initiation of pumping. While the 1981 'impact' model can be used for reasonably predicting short-term, wellfield
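The core of the post-audit finding — that the system keeps drawing down long after "a few days" — can be illustrated with the classical Theis solution, evaluating the well function W(u) by its convergent series. Theis assumes a fully confined aquifer with no leakage, so it is only a simplification of the semiconfined case; the transmissivity, storativity and radius below are assumed round numbers, not the calibrated 1981 model values.

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u, terms=60):
    """Theis well function W(u) = -gamma - ln(u) + sum_{k>=1} (-1)**(k+1) * u**k / (k * k!),
    accurate for the small u values that arise at late time."""
    total = -EULER_GAMMA - math.log(u)
    fact, sign = 1.0, 1.0
    for k in range(1, terms + 1):
        fact *= k
        total += sign * u ** k / (k * fact)
        sign = -sign
    return total

def drawdown(Q, T, S, r, t):
    """Transient drawdown s = Q/(4*pi*T) * W(u), with u = r**2 * S / (4*T*t)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Assumed round-number parameters (m^3/day, m^2/day, -, m):
Q, T, S, r = 1.1e5, 5.0e3, 1.0e-3, 1000.0
s_75d = drawdown(Q, T, S, r, t=75.0)         # the "worst case" snapshot
s_12yr = drawdown(Q, T, S, r, t=12 * 365.0)  # the post-audit horizon
```

Because W(u) grows like -ln(u), the drawdown at 12 years is substantially larger than at 75 days, consistent with the audit's conclusion that the early steady-state assumption underpredicts long-term drawdown.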
Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code
Paolucci, Roberto; Stupazzini, Marco
2008-07-08
Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by the 1D seismic wave propagation modelling used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, which is hard to predict and to cast in a design format owing to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground motion in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularities. For this purpose, the software package GeoELSE, based on the spectral element method, is adopted. The numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.
Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction
NASA Technical Reports Server (NTRS)
Lee, Seongkyu; Brentner, Kenneth S.; Farassat, Fereidoun
2007-01-01
The scattering of rotor noise is an area that has received little attention over the years, yet the limited work that has been done has shown that both the directivity and intensity of the acoustic field may be significantly modified by the presence of scattering bodies. One of the inputs needed to compute the scattered acoustic field is the acoustic pressure gradient on a scattering surface. Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. These formulations are presented in this paper. The first formulation is derived by taking the gradient of Farassat's retarded-time Formulation 1A. Although this formulation is relatively simple, it requires numerical time differentiation of the acoustic integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. The acoustic pressure gradient predicted by these new formulations is validated through comparison with the acoustic pressure gradient determined by a purely numerical approach for two model rotors. The agreement between the analytic formulations and the numerical method is excellent for both stationary and moving observer cases.
Numerical prediction of blast-induced stress wave from large-scale underground explosion
NASA Astrophysics Data System (ADS)
Wu, Chengqing; Lu, Yong; Hao, Hong
2004-01-01
This paper presents a numerical model for predicting the dynamic response of rock mass subjected to large-scale underground explosion. The model is calibrated against data obtained from large-scale field tests. The Hugoniot equation of state for rock mass is adopted to calculate the pressure as a function of mass density. A piecewise linear Drucker-Prager strength criterion including the strain rate effect is employed to model the rock mass behaviour subjected to blast loading. A double scalar damage model accounting for both compression and tension damage is introduced to simulate the damage zone around the charge chamber caused by blast loading. The model is incorporated into Autodyn3D through its user subroutines. The numerical model is then used to predict the dynamic response of rock mass, in terms of the peak particle velocity (PPV) and peak particle acceleration (PPA) attenuation laws, the damage zone, and the particle velocity time histories and their frequency contents, for large-scale underground explosion tests. The computed results are found to be in good agreement with the field measured data; hence, the proposed model is adequate for simulating the dynamic response of rock mass subjected to large-scale underground explosion. Extended numerical analyses indicate that, apart from the charge loading density, the stress wave intensity is also affected, though to a lesser extent, by the charge weight and the charge chamber geometry for large-scale underground explosions.
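A piecewise linear, pressure-dependent strength criterion with a strain-rate enhancement amounts to a table lookup with linear interpolation, scaled by a dynamic increase factor. The pressure-strength table and rate factor below are illustrative assumptions, not the paper's calibrated rock-mass values.

```python
def yield_strength(p, pressure_pts, strength_pts, rate_factor=1.0):
    """Piecewise-linear pressure-dependent strength Y(p), scaled by a
    strain-rate enhancement factor; clamped outside the tabulated range."""
    if p <= pressure_pts[0]:
        return strength_pts[0] * rate_factor
    if p >= pressure_pts[-1]:
        return strength_pts[-1] * rate_factor
    for p0, p1, y0, y1 in zip(pressure_pts, pressure_pts[1:],
                              strength_pts, strength_pts[1:]):
        if p0 <= p <= p1:
            # linear interpolation within the bracketing segment
            return (y0 + (y1 - y0) * (p - p0) / (p1 - p0)) * rate_factor

# Assumed (illustrative) pressure-strength table, in MPa:
P_PTS = [0.0, 100.0, 400.0]
Y_PTS = [10.0, 150.0, 300.0]
y_static = yield_strength(50.0, P_PTS, Y_PTS)
y_dynamic = yield_strength(50.0, P_PTS, Y_PTS, rate_factor=1.2)
```

In a hydrocode this evaluation would be called per element per time step, with the rate factor computed from the local effective strain rate.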
On vortex loops and filaments: three examples of numerical predictions of flows containing vortices.
Krause, Egon
2003-01-01
Vortex motion plays a dominant role in many flow problems. This article aims at demonstrating some of the characteristic features of vortices with the aid of numerical solutions of the governing equations of fluid mechanics, the Navier-Stokes equations. Their discretized forms will first be reviewed briefly. Thereafter three problems of fluid flow involving vortex loops and filaments are discussed. In the first, the time-dependent motion and the mutual interaction of two colliding vortex rings are discussed, predicted in good agreement with experimental observations. The second example shows how vortex rings are generated, move, and interact with each other during the suction stroke in the cylinder of an automotive engine. The numerical results, validated with experimental data, suggest that vortex rings can be used to influence the spreading of the fuel droplets prior to ignition and reduce the fuel consumption. In the third example, it is shown that vortices can also occur in aerodynamic flows over delta wings at angle of attack as well as in pipe flows; of particular interest for technical applications of these flows is the situation in which the vortex cores are destroyed, usually referred to as vortex breakdown or bursting. Although reliable breakdown criteria have not yet been established, the numerical predictions obtained so far are found to agree well with the few experimental data available in the recent literature. PMID:12545239
Numerical Prediction For Channel Bed Changes Near Groyne In Experimental Flume
NASA Astrophysics Data System (ADS)
Ho, J.; Kim, W.; Choi, J.; Ahn, W.
2007-12-01
Numerical modeling of groynes in a rectangular flume was developed to predict channel bed changes and to investigate the best-performing groyne installation interval. Five different porous groynes were simulated in this study to evaluate hydraulic influences on the maximum scour depth induced by the groyne. Channel surface elevation and velocity changes near the groyne were measured using a surface topology digital imaging system and a three-dimensional acoustic Doppler velocimeter. Three-dimensional solutions governed by the Reynolds-averaged Navier-Stokes and continuity equations were calculated using a commercial computational fluid dynamics code based on the finite volume method. To represent turbulent open-channel flow in these computations, the k-ε model and the renormalization group (RNG) model were employed. The permeability of the groyne was reproduced by changing the gap between cylinders of 2 cm diameter. The approach water depths and approach velocity acquired from the physical model were treated as boundary conditions for the numerical model. A positive velocity boundary and a continuative boundary, which consists of zero normal derivatives at the boundary for a smooth continuation of the flow through the boundary, were set for the inflow and outflow of the domain (x-direction). An atmospheric pressure boundary and a no-slip wall condition were assigned at the top and bottom of the domain (y-direction), respectively. The computed maximum scour and deposition depths and channel bed changes near the groyne were compared with the physical model measurements to validate the numerical model. Calibration statistics, a mean error and a normalized root-mean-squared value, were provided with a 95% confidence interval plot. The numerical model computations showed good agreement with the physical model measurements. The relationship between the maximum scour depth and groyne porosity was generated. It was found that the numerical model could complement the
Numerical prediction of hydrodynamic forces on a ship passing through a lock
NASA Astrophysics Data System (ADS)
Wang, Hong-zhi; Zou, Zao-jian
2014-06-01
While passing through a lock, a ship usually undergoes steady forward motion at low speed. Owing to the size restriction of the lock chamber, the shallow-water and bank effects on the hydrodynamic forces acting on the ship may be significant, which can adversely affect navigation safety. However, the complicated hydrodynamics are not yet fully understood. This paper focuses on the hydrodynamic forces acting on a ship passing through a lock. The unsteady viscous flow and hydrodynamic forces are calculated by applying an unsteady RANS code with an RNG k-ε turbulence model. A user-defined function (UDF) is compiled to define the ship motion. Meanwhile, the grid regeneration is handled using the dynamic mesh method and a sliding interface technique. A numerical study is carried out for a bulk carrier passing through the Pierre Vandamme Lock in Zeebrugge at model scale. The proposed method is validated by comparing the numerical results with data from captive model tests. By analyzing the numerical results obtained at different speeds, water depths and eccentricities, the influences of speed, water depth and eccentricity on the hydrodynamic forces are illustrated. The numerical method proposed in this paper can qualitatively predict the ship-lock hydrodynamic interaction and can provide guidance on the manoeuvring and control of ships passing through a lock.
Numerical Prediction of Chevron Nozzle Noise Reduction using Wind-MGBK Methodology
NASA Technical Reports Server (NTRS)
Engblom, W.A.; Bridges, J.; Khavarant, A.
2005-01-01
Numerical predictions for single-stream chevron nozzle flow performance and farfield noise production are presented. Reynolds Averaged Navier Stokes (RANS) solutions, produced via the WIND flow solver, are provided as input to the MGBK code for prediction of farfield noise distributions. This methodology is applied to a set of sensitivity cases involving varying degrees of chevron inward bend angle relative to the core flow, for both cold and hot exhaust conditions. The sensitivity study results illustrate the effect of increased chevron bend angle and exhaust temperature on enhancement of fine-scale mixing, initiation of core breakdown, nozzle performance, and noise reduction. Direct comparisons with experimental data, including stagnation pressure and temperature rake data, PIV turbulent kinetic energy fields, and 90 degree observer farfield microphone data are provided. Although some deficiencies in the numerical predictions are evident, the correct farfield noise spectra trends are captured by the WIND-MGBK method, including the noise reduction benefit of chevrons. Implications of these results to future chevron design efforts are addressed.
Comparison of numerical models for predicting ground water rebound in abandoned deep mine systems
NASA Astrophysics Data System (ADS)
Choi, Y.; Baek, H.; Kim, D.
2012-12-01
Cessation of dewatering usually results in ground water rebound after a deep underground mine is closed, because the mine voids and surrounding strata flood up to the levels of decant points such as shafts and drifts. Several numerical models have been developed to predict the timing, magnitude and location of discharges resulting from ground water rebound. We compared numerical models such as the VSS-NET, GRAM and MODFLOW codes at different spatial and time scales. Based on the comparisons, a new strategy was established for developing a program for ground water rebound modeling in abandoned deep mine systems. This presentation describes the new strategy and its application to an abandoned underground mine in Korea.
Validation with experiments on simplified numerical prediction of hybrid rocket internal ballistics
NASA Astrophysics Data System (ADS)
Funami, Yuki; Shimada, Toru
2012-11-01
In order to design hybrid rocket engines, we have developed a numerical approach for predicting the internal ballistics. The key point is its cost performance; therefore simple but efficient models are required. The fluid dynamics and the thermal conduction in the solid fuel must be treated time-dependently, because their characteristic times are longer than those of the other phenomena. They are solved together with the energy-flux balance equation at the solid fuel surface to determine the regression rate. It is confirmed that the numerically evaluated time- and space-averaged regression rate is of the same order of magnitude as that in experiments. However, the exponent n in the power law relating the averaged regression rate to the averaged oxidizer mass flux, ṙ = a G_ox^n, differs between calculations and experiments.
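Comparing the exponent n between calculations and experiments amounts to a log-log least-squares fit of the power law ṙ = a G_ox^n. A minimal sketch, with made-up synthetic data in place of measured regression rates:

```python
import math

def fit_power_law(G_ox, r_dot):
    """Least-squares fit of log(r) = log(a) + n*log(G), i.e. r = a * G**n.
    Returns the pair (a, n)."""
    xs = [math.log(g) for g in G_ox]
    ys = [math.log(r) for r in r_dot]
    npts = len(xs)
    mx, my = sum(xs) / npts, sum(ys) / npts
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    n = sxy / sxx            # slope in log-log space = exponent n
    a = math.exp(my - n * mx)  # intercept gives the prefactor a
    return a, n

# Synthetic (made-up) data generated from a = 0.1, n = 0.6:
G = [10.0, 50.0, 100.0, 200.0]
r = [0.1 * g ** 0.6 for g in G]
a_fit, n_fit = fit_power_law(G, r)
```

Running the same fit on the simulated and on the experimental (G_ox, ṙ) pairs yields the two exponents whose mismatch the abstract reports.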
Dragna, Didier; Blanc-Benon, Philippe; Poisson, Franck
2014-03-01
Results from outdoor acoustic measurements performed at a railway site near Reims, France, in May 2010 are compared to those obtained from a finite-difference time-domain solver of the linearized Euler equations. During the experiments, the ground profile and the different ground surface impedances were determined. Meteorological measurements were also performed to deduce mean vertical profiles of wind and temperature. An alarm pistol was used as a source of impulse signals and three microphones were located along a propagation path. The various measured parameters are introduced as input data into the numerical solver. In the frequency domain, the numerical results are in good agreement with the measurements up to a frequency of 2 kHz. In the time domain, apart from a time shift, the predicted waveforms match the measured waveforms closely. PMID:24606253
NASA Astrophysics Data System (ADS)
Guo, Bingjie; Bitner-Gregersen, Elzbieta Maria; Sun, Hui; Block Helmers, Jens
2013-04-01
Earlier investigations have indicated that proper prediction of nonlinear loads and responses due to nonlinear waves is important for ship safety in extreme seas. However, nonlinear loads and responses in extreme seas have not been sufficiently investigated yet, particularly when rogue waves are considered. A question remains whether the existing linear codes can predict nonlinear loads and responses with satisfactory accuracy, and how large the deviations from linear predictions are. To address this question, response statistics have been studied based on model tests carried out with an LNG tanker in the towing tank of the Technical University of Berlin (TUB) and compared with statistics derived from numerical simulations using the DNV code WASIM, a potential-flow code for wave-ship interaction based on the 3D panel method that can perform both linear and nonlinear simulations. Numerical simulations with WASIM and model tests in extreme and rogue waves have been performed, and ship motions (heave and pitch) and bending moments are analyzed in both regular and irregular waves. The results from the linear and nonlinear simulations are compared with experimental data to indicate the impact of wave nonlinearity on load and response calculations when a code based on the Rankine panel method is used. The study shows that nonlinearities may have a significant effect on extreme motions and bending moments generated by strongly nonlinear waves. The effect of water depth on ship responses is also demonstrated using numerical simulations. Uncertainties related to the results are discussed, with particular attention to sampling variability.
Pollastri, Gianluca; Martin, Alberto JM; Mooney, Catherine; Vullo, Alessandro
2007-01-01
Background Structural properties of proteins such as secondary structure and solvent accessibility contribute to three-dimensional structure prediction, not only in the ab initio case but also when homology information to known structures is available. Structural properties are also routinely used in protein analysis even when homology is available, largely because homology modelling is lower throughput than, say, secondary structure prediction. Nonetheless, predictors of secondary structure and solvent accessibility are virtually always ab initio. Results Here we develop high-throughput machine learning systems for the prediction of protein secondary structure and solvent accessibility that exploit homology to proteins of known structure, where available, in the form of simple structural frequency profiles extracted from sets of PDB templates. We compare these systems to their state-of-the-art ab initio counterparts, and with a number of baselines in which secondary structures and solvent accessibilities are extracted directly from the templates. We show that structural information from templates greatly improves secondary structure and solvent accessibility prediction quality, and that, on average, the systems significantly enrich the information contained in the templates. For sequence similarity exceeding 30%, secondary structure prediction quality is approximately 90%, close to its theoretical maximum, and 2-class solvent accessibility prediction quality is roughly 85%. Gains are robust with respect to template selection noise, and significant for marginal sequence similarity and for short alignments, supporting the claim that these improved predictions may prove beneficial beyond the case in which clear homology is available. Conclusion The predictive systems are publicly available at the address . PMID:17570843
NASA Technical Reports Server (NTRS)
Tuccillo, J. J.
1984-01-01
Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the primitive equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, is discussed; it is fully vectorized and requires substantially less memory than other techniques such as the leapfrog or Adams-Bashforth schemes. The technique presented uses the Euler-backward time-marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms that use only hardware vector instructions.
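The memory argument can be seen in a minimal sketch of the Euler-backward (Matsuno) scheme on the oscillation equation du/dt = iωu, a standard test problem standing in for the full primitive equations: the predictor-corrector needs only the current time level, whereas leapfrog must carry two.

```python
def matsuno_step(u, omega, dt):
    """One Euler-backward (Matsuno) step for du/dt = i*omega*u: a forward
    predictor followed by a corrector that re-evaluates the tendency at the
    predicted state. Only the current time level must be stored."""
    tendency = lambda v: 1j * omega * v
    u_pred = u + dt * tendency(u)      # forward (predictor) step
    return u + dt * tendency(u_pred)   # backward (corrector) step

u, omega, dt = 1.0 + 0.0j, 1.0, 0.1
for _ in range(100):
    u = matsuno_step(u, omega, dt)
# The amplitude decays slowly: the scheme is stable and weakly damping
# for omega*dt < 1, at the cost of two tendency evaluations per step.
```

The per-step amplification factor is 1 + iωΔt − (ωΔt)², whose modulus is below one for ωΔt < 1; this mild damping is the trade made for halving the state that must be kept in memory.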
Simulation studies of the impact of advanced observing systems on numerical weather prediction
NASA Technical Reports Server (NTRS)
Atlas, R.; Kalnay, E.; Susskind, J.; Reuter, D.; Baker, W. E.; Halem, M.
1984-01-01
To study the potential impact of advanced passive sounders and lidar temperature, pressure, humidity, and wind observing systems on large-scale numerical weather prediction, a series of realistic simulation studies is being conducted jointly by the European Centre for Medium-Range Weather Forecasts, the National Meteorological Center, and the Goddard Laboratory for Atmospheric Sciences. The project attempts to avoid the unrealistic character of earlier simulation studies. The previous simulation studies and real-data impact tests are reviewed and the design of the current simulation system is described. Consideration is given to the simulation of observations of space-based sounding systems.
NASA Technical Reports Server (NTRS)
Davies, H. C.; Turner, R. E.
1977-01-01
A dynamical relaxation technique for updating prediction models is analyzed with the help of the linear and nonlinear barotropic primitive equations. It is assumed that a complete four-dimensional time history of some prescribed subset of the meteorological variables is known. The rate of adaptation of the flow variables toward the true state is determined for a linearized f-model, and for mid-latitude and equatorial beta-plane models. The results of the analysis are corroborated by numerical experiments with the nonlinear shallow-water equations.
A lateral boundary formulation for multi-level prediction models. [numerical weather forecasting
NASA Technical Reports Server (NTRS)
Davies, H. C.
1976-01-01
A method is proposed for treating the lateral boundaries of a limited-area weather prediction model. The method involves the relaxation of the interior flow in the vicinity of the boundary toward the external, fully prescribed flow. Analytical and numerical results obtained with a linearized multilevel model confirm the effectiveness of this computationally efficient method. The method is shown to give an adequate representation of outgoing gravity waves, with and without an ambient shear flow, and to allow the substantially undistorted transmission of geostrophically balanced flow out of the interior of the limited domain.
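A one-dimensional sketch of the relaxation idea: after each time step, blend the interior solution toward the prescribed external flow over a narrow boundary zone, with weights ramping from 1 at the boundary to 0 at the inner edge of the zone. The cosine-shaped weight profile below is an assumption for illustration; the paper specifies its own relaxation coefficients.

```python
import math

def relax_boundaries(interior, external, zone_width=4):
    """Relax a 1D interior field toward the fully prescribed external flow
    over a boundary zone: u <- (1 - alpha)*u + alpha*u_ext, where alpha
    ramps from 1 at the boundary to 0 at the inner edge of the zone."""
    u, n = list(interior), len(interior)
    for j in range(zone_width):
        alpha = 0.5 * (1.0 + math.cos(math.pi * j / zone_width))  # 1 -> 0
        u[j] = (1 - alpha) * u[j] + alpha * external[j]
        u[n - 1 - j] = (1 - alpha) * u[n - 1 - j] + alpha * external[n - 1 - j]
    return u

# Interior field of zeros driven by an external field of ones:
u = relax_boundaries([0.0] * 12, [1.0] * 12)
```

Because the blending weight decays smoothly into the domain, outgoing waves are absorbed in the zone rather than reflected at a hard boundary, which is the behavior the abstract's gravity-wave tests verify.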
Numerical prediction of energy consumption in buildings with controlled interior temperature
Jarošová, P.; Št’astník, S.
2015-03-10
New European directives impose strict requirements on the energy consumption of buildings and support renewable energy sources. Whereas for family houses and similar buildings this can lead to absurd consequences, for buildings with a controlled interior temperature the optimization of energy demand is genuinely needed. The paper demonstrates a system approach to modelling the thermal insulation and accumulation abilities of such objects, incorporating the significant influence of additional physical processes such as surface heat radiation and moisture-driven deterioration of insulation layers. An illustrative example shows the numerical prediction of the energy consumption of a freezing plant over one Central European climatic year.
NASA Technical Reports Server (NTRS)
Wahba, Grace; Deepak, A. (Editor)
1988-01-01
The problem of merging direct and remotely sensed (indirect) data with forecast data to get an estimate of the present state of the atmosphere for the purpose of numerical weather prediction is examined. To carry out this merging optimally, it is necessary to provide an estimate of the relative weights to be given to the observations and forecast. It is possible to do this dynamically from the information to be merged, if the correlation structure of the errors from the various sources is sufficiently different. Some new statistical approaches to doing this are described, and conditions quantified in which such estimates are likely to be good.
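In the scalar case, the optimal merge of forecast and observation reduces to inverse-variance weighting. The sketch below assumes independent, unbiased errors with known variances; the paper's point is precisely that in practice these relative weights must be estimated from the data themselves.

```python
# Minimum-variance combination of a forecast and an observation.

def merge(forecast, obs, var_f, var_o):
    """Analysis value given forecast/observation error variances.

    w is the weight given to the observation increment; w -> 1 when the
    forecast error dominates, w -> 0 when the observation error dominates.
    """
    w = var_f / (var_f + var_o)
    return forecast + w * (obs - forecast)
```

With equal error variances the analysis is the simple average of the two; as the forecast error variance grows relative to the observation's, the analysis moves toward the observation.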
NASA Astrophysics Data System (ADS)
Coulier, P.; Lombaert, G.; Degrande, G.
2014-06-01
The numerical prediction of vibrations in buildings due to railway traffic is a complicated problem in which wave propagation in the soil couples the source (railway tunnel or track) and the receiver (building). This through-soil coupling is often neglected in state-of-the-art numerical models in order to reduce the computational cost. In this paper, the effect of this simplifying assumption on the accuracy of numerical predictions is investigated. A coupled finite element-boundary element methodology is employed to analyze the interaction between a building and either a railway tunnel at depth or a ballasted track at the surface of a homogeneous halfspace. Three different soil types are considered. It is demonstrated that the dynamic axle loads can be calculated with reasonable accuracy using an uncoupled strategy in which through-soil coupling is disregarded. If the transfer functions from source to receiver are considered, however, large local variations in terms of vibration insertion gain are induced by source-receiver interaction, reaching 10 dB and higher, although the overall wave field is only moderately affected. A global quantification of the significance of through-soil coupling is made, based on the mean vibrational energy entering a building. This approach allows assessing the common assumption in seismic engineering that source-receiver interaction can be neglected if the distance between source and receiver is sufficiently large compared to the wavelength of waves in the soil. It is observed that the interaction between a source at depth and a receiver mainly affects the power flow distribution if the distance between source and receiver is smaller than the dilatational wavelength in the soil. Interaction effects for a railway track at grade are observed if the source-receiver distance is smaller than six Rayleigh wavelengths. A similar trend is revealed if the passage of a freight train is considered. The overall influence of dynamic
NASA Astrophysics Data System (ADS)
Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.
2013-12-01
The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where the solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except that we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3]; the simulations reported in this study were run in the stable regime and in the unstable wave regime, where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well-organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales
Li, Liqi; Cui, Xiang; Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi
2014-01-01
Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Amongst homology-based approaches, the accuracies of protein structural class prediction are sufficiently high for high-similarity datasets, but still far from satisfactory for low-similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high- and low-similarity datasets. This method is based on a Support Vector Machine (SVM) in conjunction with integrated features from the position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors by recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low-similarity datasets. PMID:24675610
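The recursive elimination loop at the heart of SVM-RFE can be sketched in plain Python. As an assumption for illustration, the |Pearson correlation| of each feature with the class label stands in for the SVM weight magnitudes that the real method uses as ranking scores.

```python
# Toy recursive feature elimination (RFE): repeatedly score the remaining
# features and drop the lowest-scoring one, recording elimination order.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def rfe_ranking(X, y):
    """Return feature indices ranked best-first (last eliminated = best)."""
    remaining = list(range(len(X[0])))
    eliminated = []
    while remaining:
        scores = {j: abs(pearson([row[j] for row in X], y))
                  for j in remaining}
        worst = min(remaining, key=scores.get)   # lowest-ranked feature
        remaining.remove(worst)
        eliminated.append(worst)
    return eliminated[::-1]
```

The top-ranked features returned by the loop are the ones fed to the final classifier; in the real SVM-RFE the scoring step retrains the SVM after each elimination.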
Operational numerical weather prediction on a GPU-accelerated cluster supercomputer
NASA Astrophysics Data System (ADS)
Lapillonne, Xavier; Fuhrer, Oliver; Spörri, Pascal; Osuna, Carlos; Walser, André; Arteaga, Andrea; Gysi, Tobias; Rüdisühli, Stefan; Osterried, Katherine; Schulthess, Thomas
2016-04-01
The local area weather prediction model COSMO is used at MeteoSwiss to provide high-resolution numerical weather predictions over the Alpine region. In order to benefit from the latest developments in computer technology, the model was optimized and adapted to run on Graphics Processing Units (GPUs). Thanks to these model adaptations and the acquisition of a dedicated hybrid supercomputer, a new set of operational applications has been introduced at MeteoSwiss: COSMO-1 (1 km deterministic), COSMO-E (2 km ensemble) and KENDA (data assimilation). These new applications correspond to a 40-fold increase in computational load compared to the previous operational setup. We present an overview of the approach used to port the COSMO model to GPUs, together with a detailed description of, and performance results on, the new hybrid Cray CS-Storm computer, Piz Kesch.
Defect reaction network in Si-doped InAs. Numerical predictions.
Schultz, Peter A.
2015-05-01
This report characterizes the defects in the defect reaction network in silicon-doped, n-type InAs predicted with first-principles density functional theory. The reaction network is deduced by following exothermic defect reactions, starting with the initially mobile interstitial defects reacting with common displacement damage defects in Si-doped InAs, until culminating in immobile reaction products. The defect reactions and reaction energies are tabulated, along with the properties of all the silicon-related defects in the reaction network. This report extends the results for the properties of intrinsic defects in bulk InAs, as collated in SAND2013-2477 (Simple intrinsic defects in InAs: Numerical predictions), to include Si-containing simple defects likely to be present in a radiation-induced defect reaction sequence.
Numerical prediction of the thermodynamic properties of ternary Al-Ni-Hf alloys
NASA Astrophysics Data System (ADS)
Romanowska, Jolanta; Kotowski, Sławomir; Zagula-Yavorska, Maryana
2014-10-01
Thermodynamic properties of the ternary Al-Ni-Hf system, such as exG, μAl, μNi and μHf at 1373 K, were predicted on the basis of the thermodynamic properties of the binary systems included in the investigated ternary system. Prediction of exG is treated as the calculation of excess Gibbs energy values inside a certain area (a Gibbs triangle), provided that all boundary conditions, that is, the values of exG on all legs of the triangle, are known. exG and the Lijk ternary interaction parameters in the Muggianu extension of the Redlich-Kister formalism are calculated numerically using Wolfram Mathematica 9 software.
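The Redlich-Kister polynomial and its Muggianu ternary extension are compact enough to sketch directly. A convenient property of this combination is that the symmetric Muggianu projection reduces to evaluating the binary terms at the ternary mole fractions; the interaction parameters below are placeholders, not fitted Al-Ni-Hf values.

```python
# Excess Gibbs energy: Redlich-Kister binaries + Muggianu ternary extension.

def redlich_kister(xa, xb, L):
    """exG of a binary a-b phase: xa*xb * sum_v L[v]*(xa - xb)**v."""
    return xa * xb * sum(Lv * (xa - xb) ** v for v, Lv in enumerate(L))

def muggianu_ternary(x, L_ab, L_bc, L_ac, L_abc=0.0):
    """Ternary exG from the three binary contributions plus a ternary
    interaction term L_abc * xa * xb * xc."""
    xa, xb, xc = x
    g = redlich_kister(xa, xb, L_ab)
    g += redlich_kister(xb, xc, L_bc)
    g += redlich_kister(xa, xc, L_ac)
    return g + L_abc * xa * xb * xc
```

For a regular solution (a single zeroth-order parameter per binary) the expression collapses to the familiar sum of xi*xj*L0 terms, which makes a quick sanity check easy.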
NASA Astrophysics Data System (ADS)
Sampath, S.; Ganesan, V.
1986-04-01
A method is offered for measuring turbulence levels in three directions in gas turbine combustion systems and high intensity industrial furnaces, using a hot wire anemometer. A detailed analysis of the turbulence in the flow is necessary to achieve optimum combustion conditions, and until now there has been no established method available for measuring turbulence in swirling and recirculating flows. The merit of the new method is the use of a single-wire probe rather than the X-probe. The method has been used to measure turbulence levels in swirling recirculating flows generated by vane swirlers. From the measured turbulence levels, the kinetic energy of turbulence has been calculated and the results are compared with a well-established numerical prediction method. Mean velocity measurements have also been made using a 3-hole Pitot probe. The agreement between the measured and predicted values is quite satisfactory.
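The kinetic energy of turbulence computed from the three measured rms fluctuation levels is the standard half-sum of squares:

```python
# Turbulence kinetic energy per unit mass from three rms velocity
# fluctuation components (e.g., from single-wire hot-wire measurements).

def turbulence_kinetic_energy(u_rms, v_rms, w_rms):
    """k = 0.5 * (u'^2 + v'^2 + w'^2)."""
    return 0.5 * (u_rms ** 2 + v_rms ** 2 + w_rms ** 2)
```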
Denlinger, R.P.; Iverson, R.M.
2001-01-01
Numerical solutions of the equations describing flow of variably fluidized Coulomb mixtures predict key features of dry granular avalanches and water-saturated debris flows measured in physical experiments. These features include time-dependent speeds, depths, and widths of flows as well as the geometry of resulting deposits. Three-dimensional (3-D) boundary surfaces strongly influence flow dynamics because transverse shearing and cross-stream momentum transport occur where topography obstructs or redirects motion. Consequent energy dissipation can cause local deceleration and deposition, even on steep slopes. Velocities of surge fronts and other discontinuities that develop as flows cross 3-D terrain are predicted accurately by using a Riemann solution algorithm. The algorithm employs a gravity wave speed that accounts for different intensities of lateral stress transfer in regions of extending and compressing flow and in regions with different degrees of fluidization. Field observations and experiments indicate that flows in which fluid plays a significant role typically have high-friction margins with weaker interiors partly fluidized by pore pressure. Interaction of the strong perimeter and weak interior produces relatively steep-sided, flat-topped deposits. To simulate these effects, we compute pore pressure distributions using an advection-diffusion model with enhanced diffusivity near flow margins. Although challenges remain in evaluating pore pressure distributions in diverse geophysical flows, Riemann solutions of the depth-averaged 3-D Coulomb mixture equations provide a powerful tool for interpreting and predicting flow behavior. They provide a means of modeling debris flows, rock avalanches, pyroclastic flows, and related phenomena without invoking and calibrating rheological parameters that have questionable physical significance.
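A Riemann-solver flux of the kind used for such depth-averaged equations can be illustrated with an HLL flux for the plain 1-D shallow-water system. This is a simplified stand-in: the actual solver also folds lateral stress coefficients and degrees of fluidization into the gravity wave speed, which are omitted here.

```python
# HLL-type Riemann flux for the 1-D shallow-water equations in
# conservative variables U = (h, h*u).

G = 9.81  # gravitational acceleration (m/s^2)

def hll_flux(hL, uL, hR, uR):
    """HLL numerical flux across an interface between left/right states."""
    def phys(h, u):                       # physical flux F(U)
        return (h * u, h * u * u + 0.5 * G * h * h)
    cL, cR = (G * hL) ** 0.5, (G * hR) ** 0.5
    sL = min(uL - cL, uR - cR)            # leftmost wave-speed estimate
    sR = max(uL + cL, uR + cR)            # rightmost wave-speed estimate
    FL, FR = phys(hL, uL), phys(hR, uR)
    if sL >= 0:                           # all waves move right
        return FL
    if sR <= 0:                           # all waves move left
        return FR
    UL, UR = (hL, hL * uL), (hR, hR * uR)
    return tuple((sR * fl - sL * fr + sL * sR * (ur - ul)) / (sR - sL)
                 for fl, fr, ul, ur in zip(FL, FR, UL, UR))
```

For identical left and right states the flux reduces to the physical flux, a quick consistency check on any approximate Riemann solver.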
NASA Technical Reports Server (NTRS)
Homicz, G. F.; Moselle, J. R.
1985-01-01
A hybrid numerical procedure is presented for the prediction of the aerodynamic and acoustic performance of advanced turboprops. A hybrid scheme is proposed which in principle leads to a consistent simultaneous prediction of both fields. In the inner flow a finite difference method, the Approximate-Factorization Alternating-Direction-Implicit (ADI) scheme, is used to solve the nonlinear Euler equations. In the outer flow the linearized acoustic equations are solved via a Boundary-Integral Equation (BIE) method. The two solutions are iteratively matched across a fictitious interface in the flow so as to maintain continuity. At convergence the resulting aerodynamic load prediction will automatically satisfy the appropriate free-field boundary conditions at the edge of the finite difference grid, while the acoustic predictions will reflect the back-reaction of the radiated field on the magnitude of the loading source terms, as well as refractive effects in the inner flow. The equations and logic needed to match the two solutions are developed and the computer program implementing the procedure is described. Unfortunately, no converged solutions were obtained, due to unexpectedly large running times. The reasons for this are discussed and several means to alleviate the situation are suggested.
A frequency domain numerical method for airfoil broadband self-noise prediction
NASA Astrophysics Data System (ADS)
Zhou, Qidou; Joseph, Phillip
2007-01-01
This paper describes a numerical approach, based in the frequency domain, for predicting the broadband self-noise radiation due to an airfoil situated in a smooth mean flow. Noise is generated by the interaction between the boundary layer turbulence on the airfoil surface and the airfoil trailing edge. Thin airfoil theory is used to deduce the unsteady blade loading. In this paper, the important difference with much of the previous work dealing with trailing edge noise is that the integration of the surface sources for computation of the radiated sound field is evaluated on the actual airfoil surface rather than in the mean-chord plane. The assumption of flat-plate geometry in the calculation of radiation is therefore avoided. Moreover, the solution is valid in both near and far fields and reduces to the analytic solution due to Amiet when the airfoil collapses to a flat plate with large span and the measurement point is taken to the far field. Predictions of the airfoil broadband self-noise radiation presented here are shown to be in reasonable agreement with the predictions obtained using the Brooks approach, which is based on a comprehensive database of experimental data. Also investigated in this paper is the effect on the broadband noise prediction of relaxing the 'frozen-gust' assumption, whereby the turbulence at each frequency comprises a continuous spectrum of streamwise wavenumber components. It is shown that making the frozen-gust assumption yields an under-prediction of the noise spectrum by approximately 2 dB compared with that obtained when this assumption is relaxed, with the largest differences occurring at high frequencies. This paper concludes with a comparison of the broadband noise directivity for a flat-plate, a NACA 0012 and a NACA 0024 airfoil at non-zero angle of attack. Differences of up to 20 dB are predicted, with the largest difference occurring at a radiation angle of zero degrees relative to the airfoil mean centre line.
NASA Technical Reports Server (NTRS)
Gee, Ken; Cummings, Russell M.; Schiff, Lewis B.
1990-01-01
The F3D thin-layer Navier-Stokes code is used to numerically investigate the three-dimensional separated flow about a prolate spheroid at high incidence, and to analyze the effect of different turbulence models on the flowfield solution and the characteristics of the predicted flow. The Johnson-King (1984) model is applied in order to evaluate the importance of modeling nonequilibrium effects in predicting flow about a slender body at high incidence; the computations in question are for steady-state, fully turbulent flow. Insight is gained into the effects of turbulence models on flow characteristics, and model effects on the accurate prediction of highly separated and vortical flows about a slender body are demonstrated.
NASA Astrophysics Data System (ADS)
Narayanareddy, V. V.; Chandrasekhar, N.; Vasudevan, M.; Muthukumaran, S.; Vasantharaja, P.
2016-02-01
In the present study, artificial neural network modeling has been employed for predicting welding-induced angular distortions in autogenous butt-welded 304L stainless steel plates. The input data for the neural network have been obtained from a series of three-dimensional finite element simulations of TIG welding for a wide range of plate dimensions. Thermo-elasto-plastic analysis was carried out for 304L stainless steel plates during autogenous TIG welding employing double ellipsoidal heat source. The simulated thermal cycles were validated by measuring thermal cycles using thermocouples at predetermined positions, and the simulated distortion values were validated by measuring distortion using vertical height gauge for three cases. There was a good agreement between the model predictions and the measured values. Then, a multilayer feed-forward back propagation neural network has been developed using the numerically simulated data. Artificial neural network model developed in the present study predicted the angular distortion accurately.
NASA Astrophysics Data System (ADS)
Morgut, M.; Jošt, D.; Nobile, E.; Škerlavaj, A.
2015-12-01
The numerical predictions of cavitating flow around a marine propeller working in non-uniform inflow and an axial turbine are presented. The cavitating flow is modelled using the homogeneous (mixture) model. Time-dependent simulations are performed for the marine propeller case using OpenFOAM. Three calibrated mass transfer models are alternatively used to model the mass transfer rate due to cavitation and the two-equation SST (Shear Stress Transport) turbulence model is employed to close the system of the governing equations. The predictions of the cavitating flow in an axial turbine are carried out with ANSYS-CFX, where only the native mass transfer model with tuned parameters is used. Steady-state simulations are performed in combination with the SST turbulence model, while time-dependent results are obtained with the more advanced SAS (Scale Adaptive Simulation) SST model. The numerical results agree well with the available experimental measurements, and the simulations performed with the three different calibrated mass transfer models are close to each other for the propeller flow. Regarding the axial turbine the effect of the cavitation on the machine efficiency is well reproduced only by the time dependent simulations.
NASA Astrophysics Data System (ADS)
Yuan, K. Y.; Yuan, W.; Ju, J. W.; Yang, J. M.; Kao, W.; Carlson, L.
2013-04-01
As asphalt pavements age and deteriorate, recurring pothole repair failures and propagating alligator cracks in the asphalt pavements have become a serious issue in daily life, resulting in high repair costs for pavements and vehicles. To solve this urgent issue, pothole repair materials with superior durability and long service life are needed. In the present work, revolutionary pothole patching materials with high toughness and high fatigue resistance, reinforced with nano-molecular resins, have been developed to enhance their resistance to traffic loads and the service life of repaired potholes. In particular, DCPD resin (dicyclopentadiene, C10H12) with a ruthenium-based catalyst is employed to develop controlled properties that are compatible with aggregates and asphalt binders. In this paper, a multi-level numerical micromechanics-based model is developed to predict the viscoelastic properties and dynamic moduli of these innovative nano-molecular resin reinforced pothole patching materials. Irregular coarse aggregates in the finite element analysis are modeled as randomly-dispersed multi-layer coated particles. The effective properties of asphalt mastic, which consists of fine aggregates, tar, cured DCPD and air voids, are theoretically estimated by the homogenization technique of micromechanics in conjunction with the elastic-viscoelastic correspondence principle. Numerical predictions of homogenized viscoelastic properties and dynamic moduli are demonstrated.
Development of numerical model for predicting heat generation and temperatures in MSW landfills.
Hanson, James L; Yeşiller, Nazli; Onnen, Michael T; Liu, Wei-Lien; Oettle, Nicolas K; Marinos, Janelle A
2013-10-01
A numerical modeling approach has been developed for predicting temperatures in municipal solid waste landfills. Model formulation and details of boundary conditions are described. Model performance was evaluated using field data from a landfill in Michigan, USA. The numerical approach was based on finite element analysis incorporating transient conductive heat transfer. Heat generation functions representing decomposition of wastes were empirically developed and incorporated into the formulation. Thermal properties of materials were determined using experimental testing, field observations, and data reported in the literature. The boundary conditions consisted of seasonal temperature cycles at the ground surface and constant temperatures at the far-field boundary. Heat generation functions were developed sequentially using varying degrees of conceptual complexity in modeling. First, a step function was developed to represent initial (aerobic) and residual (anaerobic) conditions. Second, an exponential growth-decay function was established. Third, the function was scaled for temperature dependency. Finally, an energy-expended function was developed to simulate heat generation with waste age as a function of temperature. Results are presented and compared to field data for the temperature-dependent growth-decay functions. The formulations developed can be used for prediction of temperatures within various components of landfill systems (liner, waste mass, cover, and surrounding subgrade), determination of frost depths, and determination of heat gain due to decomposition of wastes. PMID:23664656
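An exponential growth-decay heat generation function of the kind described can be sketched as a rise on one time scale followed by a decay on a longer one. The amplitude and time constants below are illustrative placeholders, not the calibrated values from the study.

```python
import math

# Illustrative growth-decay heat generation rate for waste of age t (days):
# rises as aerobic decomposition ramps up, then decays as substrate is
# consumed. peak, t_growth and t_decay are assumed placeholder values.

def heat_generation(t, peak=5.0, t_growth=200.0, t_decay=2000.0):
    """Volumetric heat generation rate (W/m^3) at waste age t."""
    return peak * (1.0 - math.exp(-t / t_growth)) * math.exp(-t / t_decay)
```

The function is zero for fresh waste, climbs to a maximum at intermediate ages, and decays toward zero for old waste, matching the qualitative growth-decay behavior the model formulation calls for.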
NASA Technical Reports Server (NTRS)
Duffy, D. G.
1981-01-01
The split explicit integration scheme for numerical weather prediction models is employed in a version of the regional numerical weather prediction model of the Japan Meteorological Agency. The finite-difference scheme of the model is designed in the manner proposed by Okamura (1975). The horizontal advection terms in the governing equations are integrated with a time step limited by the wind speed while the terms which describe inertial-gravity oscillations are integrated in a succession of shorter time steps. The physical processes included within the model are precipitation, small-scale convection, surface exchanges of sensible and latent heat, and radiative heating and cooling. An example of a surface pressure forecast over Europe is shown for initial data observed at 0000 GMT 29 December 1979. Quantitative precipitation forecasts over Europe and North America for the 24 h period beginning at 0000 GMT 30 December 1979 are also shown. It is concluded that the model is capable of realistically depicting the evolution of synoptic-scale systems.
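The split-explicit idea is that slow advective tendencies take one long step while fast inertia-gravity terms are substepped within it. The linear oscillator below is a stand-in for the gravity-wave terms (an assumption for illustration), advanced with a forward-backward scheme; it is not the JMA model's actual equation set.

```python
# Split-explicit time stepping sketch: long step for slow terms,
# n_sub short substeps for the fast oscillatory terms.

def split_explicit_step(u, h, dt, n_sub, slow_tendency, c=1.0):
    """Advance (u, h) by one long step dt.

    slow_tendency(u) plays the role of the advective terms; the coupled
    (u, h) updates mimic inertia-gravity oscillations with frequency c.
    """
    u = u + dt * slow_tendency(u)          # slow part: single long step
    dts = dt / n_sub
    for _ in range(n_sub):                 # fast part: short substeps
        u = u - dts * c * h
        h = h + dts * c * u                # forward-backward: updated u
    return u, h
```

The forward-backward substep is neutrally stable for the oscillation, so the fast waves are integrated accurately without forcing the whole model onto the short time step.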
Numerical Simulation of Screech Tones from Supersonic Jets: Physics and Prediction
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Zaman, Khairul Q. (Technical Monitor)
2002-01-01
The objectives of this project are to: (1) perform a numerical simulation of the jet screech phenomenon; and (2) use the data of the simulations to obtain a better understanding of the physics of jet screech. The original grant period was for three years. This was extended at no cost for an extra year to allow the principal investigator time to publish the results. We would like to report that our research work and results (supported by this grant) have fulfilled both objectives of the grant. The following is a summary of the important accomplishments: (1) We have now demonstrated that it is possible to perform accurate numerical simulations of the jet screech phenomenon. Both the axisymmetric case and the fully three-dimensional case were carried out successfully. It is worthwhile to note that this is the first time the screech tone phenomenon has been successfully simulated numerically; (2) All four screech modes were reproduced in the simulation. The computed screech frequencies and intensities were in good agreement with the NASA Langley Research Center data; (3) The staging phenomenon was reproduced in the simulation; (4) The effects of nozzle lip thickness and jet temperature were studied. Simulated tone frequencies at various nozzle lip thickness and jet temperature were found to agree well with experiments; (5) The simulated data were used to explain, for the first time, why there are two axisymmetric screech modes and two helical/flapping screech modes; (6) The simulated data were used to show that when two tones are observed, they co-exist rather than switching from one mode to the other, back and forth, as some previous investigators have suggested; and (7) Some resources of the grant were used to support the development of new computational aeroacoustics (CAA) methodology. (Our screech tone simulations have benefited because of the availability of these improved methods.)
NASA Astrophysics Data System (ADS)
Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo
2014-01-01
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows prediction of the PDFs of scalar concentration in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
NASA Technical Reports Server (NTRS)
Thomas, P. D.
1979-01-01
The theoretical foundation and formulation of a numerical method for predicting the viscous flowfield in and about isolated three dimensional nozzles of geometrically complex configuration are presented. High Reynolds number turbulent flows are of primary interest for any combination of subsonic, transonic, and supersonic flow conditions inside or outside the nozzle. An alternating-direction implicit (ADI) numerical technique is employed to integrate the unsteady Navier-Stokes equations until an asymptotic steady-state solution is reached. Boundary conditions are computed with an implicit technique compatible with the ADI technique employed at interior points of the flow region. The equations are formulated and solved in a boundary-conforming curvilinear coordinate system. The curvilinear coordinate system and computational grid is generated numerically as the solution to an elliptic boundary value problem. A method is developed that automatically adjusts the elliptic system so that the interior grid spacing is controlled directly by the a priori selection of the grid spacing on the boundaries of the flow region.
Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo
2014-01-15
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows prediction of the PDFs of scalar concentration in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
Wedi, Nils P
2014-06-28
The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium-Range Weather Forecasts may be substantially altered with emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular when convective-scale motions start to be resolved. Blunt increases in model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to adjust proven numerical techniques accordingly. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty, for each part of the model, and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in spectral and physical space. One path is to reduce the formal accuracy with which the spectral transforms are computed. The other pathway explores the importance of the ratio of the horizontal resolution in gridpoint space to the number of wavenumbers in spectral space. This is relevant for both high-resolution simulations and ensemble-based uncertainty estimation. PMID:24842035
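The gridpoint-versus-spectral resolution ratio mentioned above can be expressed through the standard anti-aliasing conditions: for triangular truncation T, a "linear" grid needs at least 2T+1 longitudes, a "quadratic" grid 3T+1, and a "cubic" grid 4T+1. This is the textbook relation, not ECMWF's exact operational grid definitions.

```python
# Minimum number of longitude points for a given spectral truncation T
# under the usual linear/quadratic/cubic anti-aliasing conditions.

def min_longitudes(T, order="quadratic"):
    """Smallest longitude count avoiding aliasing of order-n products."""
    factor = {"linear": 2, "quadratic": 3, "cubic": 4}[order]
    return factor * T + 1
```

Moving from a quadratic to a linear grid at fixed gridpoint count allows a higher truncation T for the same transform cost, which is one of the efficiency trade-offs the paper explores.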
Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results
Kujawska, Tamara; Wojcik, Janusz; Nowicki, Andrzej
2010-03-09
Recent research has shown that beneficial therapeutic effects in soft tissues can be induced by low-power ultrasound (LPUS). For example, increased cell immunity to stresses (including thermal stress) can be obtained through the enhanced heat shock protein (Hsp) expression induced by low-intensity ultrasound. The ability to control ultrasound-stimulated Hsp expression enhancement in soft tissues in vivo could form a potential new therapeutic approach to neurodegenerative diseases, exploiting the known capacity of cells to increase their immunity to stresses through enhanced Hsp expression. Controlling the Hsp expression enhancement by adjusting the exposure to ultrasound energy would allow the efficiency of ultrasound-mediated treatment to be evaluated and optimized. Ultrasonic regimes are controlled by adjusting the intensity, frequency, duration, duty cycle, and exposure time of the pulsed ultrasound waves. Our objective was to develop a numerical model capable of predicting, in space and time, the temperature fields induced by a circular focused transducer generating tone bursts in multilayer nonlinear attenuating media, and to compare the numerically calculated results with experimental data in vitro. The acoustic pressure field in multilayer biological media was calculated using our original numerical solver. For prediction of temperature fields, the Pennes bio-heat transfer equation was employed. Temperature field measurements in vitro were carried out in a fresh rat liver using a transducer of 15 mm diameter, 25 mm focal length, and 2 MHz central frequency, generating tone bursts with spatial-peak temporal-average acoustic intensity varied between 0.325 and 1.95 W/cm², duration varied from 20 to 500 cycles at the same 20% duty cycle, and exposure time varied up to 20 minutes. The measurement data were compared with numerical simulation results obtained under experimental boundary conditions. Good agreement between
NASA Astrophysics Data System (ADS)
Smith, Grant D.; Jaffe, Richard L.; Yoon, Do Y.
1998-06-01
High-level ab initio quantum chemistry calculations are shown to predict conformer populations of 1,2-dimethoxypropane and 5-methoxy-1,3-dioxane that are consistent with gas-phase NMR vicinal coupling constant measurements. The conformational energies of the cyclic ether 5-methoxy-1,3-dioxane are found to be consistent with those predicted by a rotational isomeric state (RIS) model based upon the acyclic analog 1,2-dimethoxypropane. The quantum chemistry and RIS calculations indicate the presence of strong attractive 1,5 C(H3)⋯O electrostatic interactions in these molecules, similar to those found in 1,2-dimethoxyethane.
NASA Astrophysics Data System (ADS)
Williams, Kevin Vaughan
Rapid growth in the use of composite materials in structural applications drives the need for a more detailed understanding of damage-tolerant and damage-resistant design. Current analytical techniques provide sufficient understanding and predictive capability for application in preliminary design, but numerical models applicable to composites are few and far between, and their development into well-tested, rigorous material models is currently one of the most challenging fields in composite materials. The present work focuses on the development, implementation, and verification of a plane-stress continuum damage mechanics based model for composite materials. A physical treatment of damage growth based on the extensive body of experimental literature on the subject is combined with the mathematical rigour of a continuum damage mechanics description to form the foundation of the model. The model has been implemented in the LS-DYNA3D commercial finite element hydrocode and the results of the application of the model are shown to be physically meaningful and accurate. Furthermore, it is demonstrated that the material characterization parameters can be extracted from the results of standard test methodologies for which a large body of published data already exists for many materials. Two case studies are undertaken to verify the model by comparison with measured experimental data. The first series of analyses demonstrates the ability of the model to predict the extent and growth of damage in T800/3900-2 carbon fibre reinforced polymer (CFRP) plates subjected to normal impacts over a range of impact energy levels. The predicted force-time and force-displacement responses of the panels compare well with experimental measurements. The damage growth and stiffness reduction properties of the T800/3900-2 CFRP are derived using published data from a variety of sources without the need for parametric studies. To further demonstrate the physical nature of the model, a IM6
Conjeevaram Selvakumar, Praveen Kumar; Maksimak, Brian; Hanouneh, Ibrahim; Youssef, Dalia H; Lopez, Rocio; Alkhouri, Naim
2016-09-01
SOFT and BAR scores utilize recipient, donor, and graft factors to predict the 3-month survival after LT in adults (≥18 years). Recently, Pedi-SOFT score was developed to predict 3-month survival after LT in young children (≤12 years). These scoring systems have not been studied in adolescent patients (13-17 years). We evaluated the accuracy of these scoring systems in predicting the 3-month post-LT survival in adolescents through a retrospective analysis of data from UNOS of patients aged 13-17 years who received LT between 03/01/2002 and 12/31/2012. Recipients of combined organ transplants, donation after cardiac death, or living donor graft were excluded. A total of 711 adolescent LT recipients were included with a mean age of 15.2±1.4 years. A total of 100 patients died post-LT including 33 within 3 months. SOFT, BAR, and Pedi-SOFT scores were all found to be good predictors of 3-month post-transplant survival outcome with areas under the ROC curve of 0.81, 0.80, and 0.81, respectively. All three scores provided good accuracy for predicting 3-month survival post-LT in adolescents and may help clinical decision making to optimize survival rate and organ utilization. PMID:27478012
Issa, Naiem T; Peters, Oakland J; Byers, Stephen W; Dakshanamurthy, Sivanesan
2015-01-01
We describe here RepurposeVS for the reliable prediction of drug-target signatures using X-ray protein crystal structures. RepurposeVS is a virtual screening method that incorporates docking, drug-centric and protein-centric 2D/3D fingerprints with a rigorous mathematical normalization procedure to account for the variability in units and provide high-resolution contextual information for drug-target binding. Validity was confirmed by the following: (1) providing the greatest enrichment of known drug binders for multiple protein targets in virtual screening experiments, (2) determining that similarly shaped protein target pockets are predicted to bind drugs of similar 3D shapes when RepurposeVS is applied to 2,335 human protein targets, and (3) determining true biological associations in vitro for mebendazole (MBZ) across many predicted kinase targets for potential cancer repurposing. Since RepurposeVS is a drug repurposing-focused method, benchmarking was conducted on a set of 3,671 FDA approved and experimental drugs rather than the Database of Useful Decoys (DUDE) so as to streamline downstream repurposing experiments. We further apply RepurposeVS to explore the overall potential drug repurposing space for currently approved drugs. RepurposeVS is not computationally intensive and increases performance accuracy, thus serving as an efficient and powerful in silico tool to predict drug-target associations in drug repurposing. PMID:26234515
Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve
2015-01-01
For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have the up-to-date data on health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and timely updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separated validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease. PMID:25938675
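The validation strategy above (fit a model on derivation states, then correlate its predictions with observations in held-out states) can be sketched as follows. This is a minimal illustration with synthetic data; the study's actual features, outcomes, and model are not reproduced.

```python
# Sketch (not the authors' code): predict a regional health outcome from
# socio-demographic features and score the prediction by correlation with
# held-out observations. All data are synthetic; columns are hypothetical
# stand-ins for census variables (e.g. median age, income, education).
import numpy as np

rng = np.random.default_rng(0)

# 50 synthetic "states", 3 socio-demographic features each.
X = rng.normal(size=(50, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = X @ true_w + rng.normal(scale=0.1, size=50)  # e.g. diabetes prevalence

# Fit on 40 "derivation" states, validate on 10 excluded states.
X_train, y_train = X[:40], y[:40]
X_test, y_test = X[40:], y[40:]

w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
y_pred = X_test @ w

# External validity is judged by the correlation between predicted and
# observed prevalence in states excluded from model development.
r = np.corrcoef(y_pred, y_test)[0, 1]
print(f"hold-out correlation: {r:.2f}")
```

In the paper the analogous correlations (median 0.88 in-sample, 0.85 out-of-sample) are what support the claim of external validity.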
A Maximal Graded Exercise Test to Accurately Predict VO2max in 18-65-Year-Old Adults
ERIC Educational Resources Information Center
George, James D.; Bradshaw, Danielle I.; Hyde, Annette; Vehrs, Pat R.; Hager, Ronald L.; Yanowitz, Frank G.
2007-01-01
The purpose of this study was to develop an age-generalized regression model to predict maximal oxygen uptake (VO2max) based on a maximal treadmill graded exercise test (GXT; George, 1996). Participants (N = 100), ages 18-65 years, reached a maximal level of exertion (mean plus or minus standard deviation [SD]; maximal heart rate [HR sub…
A Case Study of the Impact of AIRS Temperature Retrievals on Numerical Weather Prediction
NASA Technical Reports Server (NTRS)
Reale, O.; Atlas, R.; Jusem, J. C.
2004-01-01
Large errors in numerical weather prediction are often associated with explosive cyclogenesis. Most studies focus on the under-forecasting error, i.e., cases of rapidly developing cyclones which are poorly predicted in numerical models. However, the over-forecasting error (i.e., predicting an explosively developing cyclone which does not occur in reality) is a very common error that severely impacts the forecasting skill of all models and may also carry economic costs if associated with operational forecasting. Unnecessary precautions taken by marine activities can result in severe economic loss. Moreover, frequent over-forecasting can undermine confidence in operational weather forecasting. Therefore, it is important to understand and reduce predictions of extreme weather associated with explosive cyclones which do not actually develop. In this study, we choose a very prominent case of over-forecasting error in the northwestern Pacific. A 960 hPa cyclone develops in less than 24 hours in the 5-day forecast, with a deepening rate of about 30 hPa in one day. The cyclone is not present in the analyses and is thus a case of severe over-forecasting. By assimilating AIRS data, the error is largely eliminated. By following the propagation of the anomaly that generates the spurious cyclone, it is found that a small mid-tropospheric negative geopotential height anomaly over the northern part of the Indian subcontinent in the initial conditions propagates westward, is amplified by orography, and generates a very intense jet streak in the subtropical jet stream, with consequent explosive cyclogenesis over the Pacific. The AIRS assimilation eliminates this anomaly, which may have been caused by erroneous upper-air data, and represents the jet stream more correctly. The energy associated with the jet is distributed over a much broader area and, as a consequence, a multiple but much more moderate cyclogenesis is observed.
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Chou, Shih-Hung; Jedlovec, Gary
2012-01-01
Improvements to global and regional numerical weather prediction (NWP) have been demonstrated through assimilation of data from NASA's Atmospheric Infrared Sounder (AIRS). Current operational data assimilation systems use AIRS radiances, but the impact on regional forecasts has been much smaller than for global forecasts. Retrieved profiles from AIRS contain much of the information that is contained in the radiances and may be able to reveal reasons for this reduced impact. Assimilating AIRS retrieved profiles in an identical analysis configuration to the radiances, tracking the quantity and quality of the assimilated data in each technique, and examining analysis increments and forecast impact from each data type can yield clues as to the reasons for the reduced impact. By doing this with regional-scale models, individual synoptic features (and the impact of AIRS on these features) can be more easily tracked. This project examines the assimilation of hyperspectral sounder data used in operational numerical weather prediction by comparing operational techniques used for AIRS radiances and research techniques used for AIRS retrieved profiles. Parallel versions of a configuration of the Weather Research and Forecasting (WRF) model with Gridpoint Statistical Interpolation (GSI) that mimics the analysis methodology, domain, and observational datasets of the regional North American Mesoscale (NAM) model run at the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC) are run to examine the impact of each type of AIRS data set. The first configuration will assimilate the AIRS radiance data along with other conventional and satellite data using techniques implemented within the operational system; the second configuration will assimilate AIRS retrieved profiles instead of AIRS radiances in the same manner. Preliminary results of this study will be presented, focusing on the analysis impact of the radiances and profiles for selected cases.
A PBL-radiation model for application to regional numerical weather prediction
NASA Technical Reports Server (NTRS)
Chang, Chia-Bo
1989-01-01
Often in short-range limited-area numerical weather prediction (NWP) of extratropical weather systems, the effects of planetary boundary layer (PBL) processes are considered of secondary importance. However, this may not be the case for regional NWP of mesoscale convective systems over the arid and semi-arid highlands of the southwestern and south-central United States in late spring and summer. Over these dry regions, the PBL can grow quite high, up into the lower middle troposphere (600 mb), due to very effective solar heating, and hence a vigorous air-land thermal interaction can occur. This interaction, a major heat source for regional dynamical systems, cannot be ignored. A one-dimensional PBL-radiation model was developed. The model PBL consists of a constant-flux surface layer superposed with a well-mixed (Ekman) layer. The vertical eddy mixing coefficients for heat and momentum in the surface layer are determined according to surface similarity theory, while their vertical profiles in the Ekman layer are specified with a cubic polynomial. Prognostic equations are used for predicting the height of the nonneutral PBL. The atmospheric radiation is parameterized to define the surface heat source/sink for the growth and decay of the PBL. A series of real-data numerical experiments has been carried out to obtain a physical understanding of how the model performs under various atmospheric and surface conditions. This one-dimensional model will eventually be incorporated into a mesoscale prediction system. The ultimate goal of this research is to improve the NWP of mesoscale convective storms over land.
NASA Astrophysics Data System (ADS)
Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning
2016-02-01
Topography and the biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with an O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from those obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
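A heavily simplified sketch of the POD-mapping idea is given below: learn a joint POD basis over stacked [coarse; fine] training snapshots, then infer the fine-resolution part of a new snapshot from its coarse part alone. The grids, fields, and training data are synthetic stand-ins, not PODMM as implemented by the authors.

```python
# Assumed simplification of a POD mapping method: 1-D "fields" replace the
# watershed model's outputs; block-averaging stands in for the 7 km vs
# 220 m grids mentioned above.
import numpy as np

rng = np.random.default_rng(1)
n_coarse, n_fine, n_train = 20, 200, 30

# Synthetic fine-resolution training snapshots built from 4 smooth modes;
# coarse snapshots are block averages of the fine ones.
x = np.linspace(0, 1, n_fine)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(4)])
coeffs = rng.normal(size=(n_train, 4))
fine = coeffs @ modes                                       # (30, 200)
coarse = fine.reshape(n_train, n_coarse, -1).mean(axis=2)   # (30, 20)

# Joint POD basis from the stacked (coarse, fine) snapshots.
snap = np.hstack([coarse, fine])
mean = snap.mean(axis=0)
_, _, Vt = np.linalg.svd(snap - mean, full_matrices=False)
basis = Vt[:4]                                  # retained POD modes
B_c, B_f = basis[:, :n_coarse], basis[:, n_coarse:]

# New coarse solution -> POD coefficients (least squares on the coarse
# block) -> approximate fine-resolution solution.
c_new = rng.normal(size=4)
fine_true = c_new @ modes
coarse_new = fine_true.reshape(n_coarse, -1).mean(axis=1)

a, *_ = np.linalg.lstsq(B_c.T, coarse_new - mean[:n_coarse], rcond=None)
fine_approx = mean[n_coarse:] + a @ B_f

err = np.linalg.norm(fine_approx - fine_true) / np.linalg.norm(fine_true)
print(f"relative downscaling error: {err:.3f}")
```

Because the synthetic fields live exactly in the span of the retained modes, the reconstruction here is near-exact; in practice the truncation error is what the paper's error estimator quantifies.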
Numerical prediction of transition of the F-16 wing at supersonic speeds
NASA Technical Reports Server (NTRS)
Cummings, Russell M.; Garcia, Joseph A.
1993-01-01
A parametric study is being conducted as an effort to numerically predict the extent of natural laminar flow (NLF) on finite swept wings at supersonic speeds. This study is one aspect of a High Speed Research Program (HSRP) to gain an understanding of the technical requirements for high-speed aircraft flight. The parameters that are being addressed in this study are Reynolds number, angle of attack, and leading-edge wing sweep. These parameters were analyzed through the use of an advanced Computational Fluid Dynamics (CFD) flow solver, specifically the ARC 3-D Compressible Navier-Stokes (CNS) flow solver. From the CNS code, pressure coefficients (Cp) are obtained for the various cases. These Cp's are then used to compute the boundary-layer profiles through the use of the 'Kaups and Cebeci' compressible 2-D boundary layer code. Finally, the boundary-layer parameters are processed into a 3-D compressible boundary layer stability code (COSAL) to predict transition. The parametric study then consisted of four geometries which addressed the effects of sweep, and three angles of attack from zero to ten degrees to yield a total of 12 cases. The above process was substantially automated through a procedure that was developed by the work conducted under this study. This automation procedure then yields a 3-D graphical measure of the extent of laminar flow by predicting the transition location of laminar to turbulent flow.
Development of a 3D numerical methodology for fast prediction of gun blast induced loading
NASA Astrophysics Data System (ADS)
Costa, E.; Lagasco, F.
2014-05-01
In this paper, the development of a methodology based on semi-empirical models from the literature to carry out 3D prediction of pressure loading on surfaces adjacent to a weapon system during firing is presented. This loading results from the impact of the blast wave generated by the projectile exiting the muzzle bore. When it exceeds a threshold pressure level, the loading is potentially capable of inducing unwanted damage to nearby hard structures as well as to frangible panels or electronic equipment. The implemented model shows the ability to quickly predict the distribution of the blast wave parameters over three-dimensional complex geometry surfaces when the weapon design and emplacement data as well as propellant and projectile characteristics are available. Given these capabilities, the proposed methodology is envisaged for use in the preliminary design phase of the combat system to predict adverse effects and enable identification of the most appropriate countermeasures. By providing a preliminary but sensitive estimate of the operative environmental loading, this numerical means represents a good alternative to more powerful but time-consuming advanced computational fluid dynamics tools, whose use can thus be limited to the final phase of the design.
NASA Astrophysics Data System (ADS)
Dementyeva, S. O.; Ilin, N. V.; Mareev, E. A.
2015-03-01
Modern methods for predicting thunderstorms and lightning with the use of high-resolution numerical models are considered. An analysis of the Lightning Potential Index (LPI) is performed for various microphysics parameterizations with the use of the Weather Research and Forecasting (WRF) model. The maximum index values are shown to depend significantly on the type of parameterization. This makes it impossible to specify a single threshold LPI for various parameterizations as a criterion for the occurrence of lightning flashes. The topographic LPI maps underestimate the sizes of regions of likely thunderstorm-hazard events. Calculating the electric field under the assumption that ice and graupel are the main charge carriers is considered as a new algorithm for lightning prediction. The model shows that the potential difference (between the ground and a cloud layer at a given altitude) sufficient to generate a discharge is retained in a larger region than is predicted by the LPI. The main features of the spatial distribution of the electric field and potential agree with observations.
NASA Astrophysics Data System (ADS)
Cole, S. J.; Moore, R. J.; Roberts, N.
2007-12-01
Forecast rainfall from Numerical Weather Prediction (NWP) and/or nowcasting systems is a major source of uncertainty for short-term flood forecasting. One approach for reducing and estimating this uncertainty is to use high resolution NWP models that should provide better rainfall predictions. The potential benefit of running the Met Office Unified Model (UM) with a grid spacing of 4 and 1 km compared to the current operational resolution of 12 km is assessed using the January 2005 Carlisle flood in northwest England. These NWP rainfall forecasts, and forecasts from the Nimrod nowcasting system, were fed into the lumped Probability Distributed Model (PDM) and the distributed Grid-to-Grid model to predict river flow at the outlets of two catchments important for flood warning. The results show the benefit of increased resolution in the UM, the benefit of coupling the high-resolution rainfall forecasts to hydrological models and the improvement in timeliness of flood warning that might have been possible. Ongoing work aims to employ these NWP rainfall forecasts in ensemble form as part of a procedure for estimating the uncertainty of flood forecasts.
Coyle, Whitney L; Guillemain, Philippe; Kergomard, Jean; Dalmont, Jean-Pierre
2015-11-01
When designing a wind instrument such as a clarinet, it can be useful to be able to predict the playing frequencies. This paper presents an analytical method to deduce these playing frequencies using the input impedance curve. Specifically, there are two control parameters with a significant influence on the playing frequency: the blowing pressure and the reed opening. Four effects are known to alter the playing frequency and are examined separately: the flow rate due to the reed motion, the reed dynamics, the inharmonicity of the resonator, and the temperature gradient within the clarinet. The resulting playing frequencies for the first register of a particular professional-level clarinet are found using the analytical formulas presented in this paper. The analytical predictions are then compared to numerically simulated results to validate the prediction accuracy. The main conclusion is that, in general, the playing frequency decreases above the oscillation threshold because of inharmonicity, then increases above the beating-reed regime threshold because of the decrease of the flow rate effect. PMID:26627753
NASA Astrophysics Data System (ADS)
Bolot, Rodolphe; Seichepine, Jean-Louis; Qiao, Jiang Hao; Coddet, Christian
2011-01-01
The final target of this study is to achieve a better understanding of the behavior of thermally sprayed abradable seals such as AlSi/polyester composites. These coatings are used as seals between the static and rotating parts in aero-engines. The machinability of the composite coatings during the friction of the blades depends on their effective mechanical and thermal properties. In order to predict these properties from micrographs, numerical studies were performed with different software packages, such as OOF, developed by NIST, and TS2C, developed at the UTBM. In 2008, differences were reported concerning predictions of effective thermal conductivities obtained with the two codes. In this article, particular attention was paid to the mathematical formulation of the problem. In particular, results obtained with a finite difference method using a cell-centered approach or a nodal formulation explain the discrepancies previously noticed. A comparison of the predictions of the computed effective thermal conductivities is thus proposed. This study is part of the NEWAC project, funded by the European Commission within the 6th RTD Framework Programme (FP6).
2014-01-01
Background Locating the protein-coding genes in novel genomes is essential to understanding and exploiting the genomic information but it is still difficult to accurately predict all the genes. The recent availability of detailed information about transcript structure from high-throughput sequencing of messenger RNA (RNA-Seq) delineates many expressed genes and promises increased accuracy in gene prediction. Computational gene predictors have been intensively developed for and tested in well-studied animal genomes. Hundreds of fungal genomes are now or will soon be sequenced. The differences of fungal genomes from animal genomes and the phylogenetic sparsity of well-studied fungi call for gene-prediction tools tailored to them. Results SnowyOwl is a new gene prediction pipeline that uses RNA-Seq data to train and provide hints for the generation of Hidden Markov Model (HMM)-based gene predictions and to evaluate the resulting models. The pipeline has been developed and streamlined by comparing its predictions to manually curated gene models in three fungal genomes and validated against the high-quality gene annotation of Neurospora crassa; SnowyOwl predicted N. crassa genes with 83% sensitivity and 65% specificity. SnowyOwl gains sensitivity by repeatedly running the HMM gene predictor Augustus with varied input parameters and selectivity by choosing the models with best homology to known proteins and best agreement with the RNA-Seq data. Conclusions SnowyOwl efficiently uses RNA-Seq data to produce accurate gene models in both well-studied and novel fungal genomes. The source code for the SnowyOwl pipeline (in Python) and a web interface (in PHP) is freely available from http://sourceforge.net/projects/snowyowl/. PMID:24980894
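For reference, the sensitivity and specificity figures quoted above are computed as is customary in gene-prediction benchmarking; a minimal sketch follows, with invented counts (the paper reports the percentages, not the underlying tallies). Note that "specificity" in gene prediction is conventionally the precision-style ratio TP/(TP+FP), which is the assumption made here.

```python
# Hedged sketch of the evaluation metrics; counts are hypothetical.
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of reference gene models recovered by the predictions."""
    return tp / (tp + fn)

def specificity(tp: int, fp: int) -> float:
    """Fraction of predicted gene models matching the reference
    (precision-style 'specificity', as commonly used in gene prediction)."""
    return tp / (tp + fp)

tp, fp, fn = 830, 447, 170  # invented counts giving ~83% / ~65%
print(f"sensitivity={sensitivity(tp, fn):.2f} "
      f"specificity={specificity(tp, fp):.2f}")
```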
Michalkova, A; Gorb, L; Hill, F; Leszczynski, J
2011-03-24
This study presents new insight into the prediction of partitioning of organic compounds between a carbon surface (soot) and water, and it also sheds light on the sluggish desorption of interacting molecules from activated and nonactivated carbon surfaces. This paper provides details about the structure and interactions of benzene, polycyclic aromatic hydrocarbons, and aromatic nitrocompounds with a carbon surface modeled by coronene using a density functional theory approach along with the M05-2X functional. The adsorption was studied in vacuum and from water solution. The molecules studied are physisorbed on the carbon surface. While the intermolecular interactions of benzene and hydrocarbons are governed by dispersion forces, nitrocompounds are adsorbed also due to quite strong electrostatic interactions with all types of carbon surfaces. On the basis of these results, we conclude that the method of prediction presented in this study allows one to approach the experimental level of accuracy in predicting thermodynamic parameters of adsorption on a carbon surface from the gas phase. The empirical modification of the polarized continuum model leads also to a quantitative agreement with the experimental data for the Gibbs free energy values of the adsorption from water solution. PMID:21361266
Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming
2015-12-01
Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets. PMID:26460680
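The ACC transform that turns a variable-length PSSM into a fixed-length feature vector can be sketched as below. Details such as normalization, centring, and the lag range are assumptions and may differ from the paper's exact implementation; the resulting vector is what would then feed SVM-RFE and the SVM classifier.

```python
# Hedged sketch of an auto cross covariance (ACC) transform of a PSSM.
import numpy as np

def acc_features(pssm: np.ndarray, max_lag: int) -> np.ndarray:
    """Map an (L, 20) PSSM to a fixed-length ACC feature vector.

    For each lag lg = 1..max_lag and each ordered pair of amino-acid
    columns (j1, j2), average the product of mean-centred scores at
    positions i and i+lg. j1 == j2 gives auto covariance, j1 != j2
    cross covariance: 20*20*max_lag features regardless of length L.
    """
    L, n = pssm.shape
    centred = pssm - pssm.mean(axis=0)
    feats = []
    for lg in range(1, max_lag + 1):
        # (L-lg, n).T @ (L-lg, n) -> (n, n) covariance block at this lag
        cov = centred[:-lg].T @ centred[lg:] / (L - lg)
        feats.append(cov.ravel())
    return np.concatenate(feats)

# Toy example: a random "PSSM" for a sequence of length 50.
rng = np.random.default_rng(2)
pssm = rng.normal(size=(50, 20))
v = acc_features(pssm, max_lag=3)
print(v.shape)  # (1200,)
```

The fixed output length is the point: sequences of any length become comparable vectors, which is what makes SVM training on low-similarity datasets possible.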
Wang, Ting; He, Quanze; Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong
2016-01-01
Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random read mapping, may result in a higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCA detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively, and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large-scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing. PMID:27441628
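The filtering idea can be illustrated compactly: exclude reads falling in regions of chromosome Y that attract spurious mappings, then compute the chromosome-Y read fraction from the remainder. The region coordinates below are placeholders, not the six regions identified in the paper.

```python
def chrY_fraction(reads, blacklist):
    """Fraction of mapped reads on chrY after removing reads that fall
    in regions prone to random (spurious) mapping.  `reads` is a list
    of (chrom, pos); `blacklist` maps chrom -> list of (start, end)
    half-open intervals to exclude."""
    def excluded(chrom, pos):
        return any(s <= pos < e for s, e in blacklist.get(chrom, []))
    kept = [(c, p) for c, p in reads if not excluded(c, p)]
    y = sum(1 for c, _ in kept if c == "chrY")
    return y / len(kept) if kept else 0.0

reads = [("chr1", 100), ("chrY", 5000), ("chrY", 10500), ("chr2", 42)]
blacklist = {"chrY": [(10000, 11000)]}   # one illustrative spurious-mapping region
frac = chrY_fraction(reads, blacklist)   # 1 of the 3 kept reads is on chrY
```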
NASA Astrophysics Data System (ADS)
Chen, Hanghui; Millis, Andrew J.
2016-05-01
We systematically compare predictions of various exchange correlation functionals for the structural and magnetic properties of perovskite Sr1-xBaxMnO3 (0 ≤ x ≤ 1), a representative class of multiferroic oxides. The local spin density approximation (LSDA) and the spin-dependent generalized gradient approximation with Perdew-Burke-Ernzerhof parametrization (sPBE) make substantially different predictions for ferroelectric atomic distortions, tetragonality, and ground state magnetic ordering. Neither approximation quantitatively reproduces all the measured structural and magnetic properties of perovskite Sr0.5Ba0.5MnO3. The spin-dependent generalized gradient approximation with Perdew-Burke-Ernzerhof revised for solids parametrization (sPBEsol) and the charge-only Perdew-Burke-Ernzerhof parametrized generalized gradient approximation with Hubbard U and Hund's J extensions both provide overall better agreement with measured structural and magnetic properties of Sr0.5Ba0.5MnO3, compared to LSDA and sPBE. Using these two methods, we find that, in contrast to previous predictions, perovskite BaMnO3 has large Mn off-center displacements and is close to a ferromagnetic-to-antiferromagnetic phase boundary, making it a promising candidate to induce effective giant magnetoelectric effects and to achieve cross-field control of polarization and magnetism.
Hashem, Somaya; Esmat, Gamal; Elakel, Wafaa; Habashy, Shahira; Abdel Raouf, Safaa; Darweesh, Samar; Soliman, Mohamad; Elhefnawi, Mohamed; El-Adawy, Mohamed; ElHefnawi, Mahmoud
2016-01-01
Background/Aim. Given the worldwide prevalence of chronic hepatitis C, the use of noninvasive methods for staging chronic liver diseases, which avoids the drawbacks of biopsy, is increasing significantly. The aim of this study is to combine serum biomarkers and clinical information to develop a classification model that can predict advanced liver fibrosis. Methods. 39,567 patients with chronic hepatitis C were included and randomly divided into two separate sets. Liver fibrosis was assessed via METAVIR score; patients were categorized as mild to moderate (F0-F2) or advanced (F3-F4) fibrosis stages. Two models were developed using the alternating decision tree algorithm. Model 1 uses six parameters, while model 2 uses four, which are similar to the FIB-4 features except with alpha-fetoprotein instead of alanine aminotransferase. Sensitivity and the receiver operating characteristic curve were used to evaluate the performance of the proposed models. Results. The best model achieved an 86.2% negative predictive value and 0.78 area under the ROC curve with 84.8% accuracy, which is better than FIB-4. Conclusions. The risk of advanced liver fibrosis due to chronic hepatitis C could be predicted with high accuracy using a decision tree learning algorithm, which could reduce the need for liver biopsy. PMID:26880886
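For reference, the FIB-4 comparison baseline can be made concrete. This is the standard FIB-4 index only; the paper's model 2 substitutes alpha-fetoprotein for ALT, and that variant is not reproduced here.

```python
import math

def fib4(age_years, ast, platelets_1e9_per_L, alt):
    """Standard FIB-4 index: (age x AST) / (platelets x sqrt(ALT)).
    AST and ALT in U/L, platelets in 10^9/L."""
    return (age_years * ast) / (platelets_1e9_per_L * math.sqrt(alt))

score = fib4(age_years=55, ast=80, platelets_1e9_per_L=150, alt=64)
# 55*80 / (150*8) = 4400/1200, i.e. about 3.67, above the commonly
# used 3.25 cutoff suggesting advanced fibrosis
```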
Kong, Liang; Zhang, Lichao; Lv, Jinfeng
2014-03-01
Extracting a good representation from a protein sequence is fundamental for protein structural class prediction tasks. In this paper, we propose a novel and powerful method to predict protein structural classes based on predicted secondary structure information. At the feature extraction stage, a 13-dimensional feature vector is extracted to characterize the general contents and spatial arrangements of the secondary structural elements of a given protein sequence. Specifically, four segment-level features are designed to improve the discriminative ability for proteins from the α/β and α+β classes. After the features are extracted, a multi-class non-linear support vector machine classifier is used to perform protein structural class prediction. We report extensive experiments comparing the proposed method to the state-of-the-art in protein structural class prediction on three widely used low-similarity benchmark datasets: FC699, 1189 and 640. Our method achieves competitive prediction accuracies, and the overall prediction accuracies exceed the best reported results on all three datasets. PMID:24316044
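Content and arrangement features of this kind are straightforward to compute from a predicted secondary structure string. The features below (state composition, segment counts, longest runs) are illustrative of the general approach, not the paper's exact 13-dimensional definition.

```python
def ss_content_features(ss):
    """Simple content/arrangement features from a secondary structure
    string over H (helix), E (strand), C (coil)."""
    n = len(ss)
    comp = {s: ss.count(s) / n for s in "HEC"}          # state composition
    segs = []                                           # maximal runs of one state
    for ch in ss:
        if not segs or segs[-1][0] != ch:
            segs.append([ch, 0])
        segs[-1][1] += 1
    seg_count = {s: sum(1 for c, _ in segs if c == s) for s in "HEC"}
    max_run = {s: max([l for c, l in segs if c == s], default=0) for s in "HEC"}
    return comp, seg_count, max_run

comp, seg_count, max_run = ss_content_features("HHHHCCEEECCHHH")
```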
Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond
2015-01-01
activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.
Numerical Weather Prediction Models on Linux Boxes as tools in meteorological education in Hungary
NASA Astrophysics Data System (ADS)
Gyongyosi, A. Z.; Andre, K.; Salavec, P.; Horanyi, A.; Szepszo, G.; Mille, M.; Tasnadi, P.; Weidiger, T.
2012-04-01
The education of meteorologists in Hungary, in accordance with the Bologna Process, has three stages: BSc, MSc and PhD, and students graduating at each stage receive the respective degree. The three-year BSc course in Meteorology can be chosen by undergraduate students in the fields of Geosciences, Environmental Sciences and Physics. Fundamentals in Mathematics (Calculus), (General and Theoretical) Physics and Informatics are emphasized during their elementary education. The two-year MSc course, to which about 15 to 25 students are admitted each year, can be studied only at the Eötvös Loránd University in our country. Our aim is to give a basic education in all fields of Meteorology: Climatology, Atmospheric Physics, Atmospheric Chemistry, Dynamic and Synoptic Meteorology, Numerical Weather Prediction, Modeling of Surface-Atmosphere Interactions and Climate Change. Education is performed in two branches: Climate Researcher and Forecaster.
Zhu, Hongjun; Feng, Guang; Wang, Qijun
2014-01-01
Accurate prediction of erosion thickness is essential for pipe engineering. The objective of the present paper is to study the temperature distribution in an eroded bend pipe and find a new method to predict the erosion-reduced thickness. Computational fluid dynamics (CFD) simulations with FLUENT software are carried out to investigate the temperature field, and the effects of oil inlet rate, oil inlet temperature, and erosion-reduced thickness are examined. The presence of an erosion pit produces a marked fluctuation of the temperature drop along the extrados of the bend, and the minimum temperature drop occurs at the most severe erosion point. A low inlet temperature or a large inlet velocity leads to a small temperature drop, while a shallow erosion pit causes a large temperature drop. The dimensionless minimum temperature drop is analyzed and a fitting formula is obtained. Using the formula, the erosion-reduced thickness can be calculated by monitoring only the outer surface temperature of the bend pipe. This new method can provide useful guidance for pipeline monitoring and replacement. PMID:24719576
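The inversion step, from a measured dimensionless temperature drop back to an erosion-reduced thickness, can be sketched with a monotonic calibration table. The numbers below are placeholders for illustration; the paper derives its actual fitting formula from the CFD runs.

```python
import numpy as np

# Hypothetical calibration: dimensionless minimum temperature drop vs
# erosion-reduced wall thickness (mm).  A shallower pit gives a larger
# temperature drop, so the relation is monotonically decreasing.
drop = np.array([0.90, 0.80, 0.70, 0.60, 0.50])
thickness = np.array([0.5, 1.0, 2.0, 3.5, 5.0])

def thickness_from_drop(measured_drop):
    """Invert the calibration by linear interpolation (np.interp needs
    an increasing abscissa, so interpolate on the reversed arrays)."""
    return float(np.interp(measured_drop, drop[::-1], thickness[::-1]))

t = thickness_from_drop(0.75)   # midway between the 0.80 and 0.70 rows
```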
Huang, Gui-Qian; Zhu, Gui-Qi; Liu, Yan-Long; Wang, Li-Ren; Braddock, Martin; Zheng, Ming-Hua; Zhou, Meng-Tao
2016-01-01
Objectives Neutrophil lymphocyte ratio (NLR) has been shown to predict the prognosis of cancers in several studies. This study was designed to evaluate the impact of stratified NLR in patients who have received curative liver resection (CLR) for hepatocellular carcinoma (HCC). Methods A total of 1659 patients who underwent CLR for suspected HCC between 2007 and 2014 were reviewed. The preoperative NLR was categorized into quartiles based on the size of the study population and the distribution of NLR. Hazard ratios (HRs) and 95% confidence intervals (CIs) for overall survival (OS) were derived by Cox proportional hazard regression analyses. Univariate and multivariate Cox proportional hazard regression analyses were used to evaluate the association of all independent parameters with disease prognosis. Results Multivariable Cox proportional hazards models showed that the level of NLR (HR = 1.031, 95%CI: 1.002-1.060, P = 0.033), number of nodules (HR = 1.679, 95%CI: 1.285-2.194, P<0.001), portal vein thrombosis (HR = 4.329, 95%CI: 1.968-9.521, P<0.001), microvascular invasion (HR = 2.527, 95%CI: 1.726-3.700, P<0.001) and CTP score (HR = 1.675, 95%CI: 1.153-2.433, P = 0.007) were significant predictors of mortality. In the Kaplan-Meier analysis, each NLR quartile showed progressively worse OS with apparent separation (log-rank P=0.008). The highest 5-year OS rate following CLR (60%) was observed in quartile 1. In contrast, the lowest 5-year OS rate (27%) was obtained in quartile 4. Conclusions Stratified NLR may predict outcomes and strengthen the predictive power for patient responses to therapeutic intervention. PMID:26716411
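The Kaplan-Meier curves used for the quartile comparison come from a simple product-limit estimator, which can be implemented directly. This sketch is generic, not the paper's analysis; the toy data are invented (1 marks a death, 0 a censored observation).

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.  Returns a list
    of (event_time, S(t)) pairs, one per distinct time with deaths."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s = 1.0
    out = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = removed = 0
        while i < len(pairs) and pairs[i][0] == t:   # group tied times
            deaths += pairs[i][1]
            removed += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk            # survival drops at deaths only
            out.append((t, s))
        n_at_risk -= removed                          # censored subjects leave the risk set
    return out

curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])     # one censoring at t=3
```

Computing one such curve per NLR quartile and comparing them (e.g. with a log-rank test) reproduces the shape of the analysis described above.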
NASA Astrophysics Data System (ADS)
Yusman, W.; Viridi, S.; Rachmat, S.
2016-01-01
Non-discharging geothermal wells have been a major problem in geothermal development, and well discharge stimulation is required to initiate a flow. Air compression stimulation is one of the methods used to trigger fluid flow from the geothermal reservoir. The result of this process can be predicted using the Af/Ac method, but this method sometimes gives uncertain results for several geothermal wells, and it does not take into account the time needed for geothermal fluid to discharge after opening the well head. This paper presents a simulation of non-discharging wells under air compression stimulation to predict well behavior and the time required. The model's inputs consist of geothermal well data recorded during the heating-up process, such as pressure, temperature and mass flow in the water column, and the main feed zone level. The one-dimensional transient numerical model is run using the Single Fluid Volume Element (SFVE) method. According to the simulation results, the prediction of geothermal well behavior after air compression stimulation is valid under two specific circumstances: a single-phase fluid density between 1 and 28 kg/m3, or above 28.5 kg/m3. The first condition corresponds to successful well discharge and the second to failed well discharge after air compression stimulation (for two wells' data only). The comparison of pf values between simulation and field observation shows differing results for the successfully discharged well. The time required for flow to occur, as observed at the well head, differs between the SFVE method and the actual field condition. The model needs to be improved by incorporating more geothermal well data and a modified fluid-phase condition inside the wellbore.
Huang, Z J; Merkle, C L; Abdallah, S; Tarbell, J M
1994-04-01
Heart valves induce flow disturbances which play a role in blood cell activation and damage, but questions of the magnitude and spatial distribution of fluid stresses (wall shear stress and turbulent stress) cannot be readily addressed with current experimental techniques. Therefore, a numerical simulation procedure for flow through artificial heart valves is presented. The algorithm employed is based on the Navier-Stokes equations in generalized curvilinear coordinates with artificial compressibility for coupling of velocity and pressure. The algorithm applies a finite-difference technique on a body-conforming composite grid around the heart valve disk on which the numerical simulations are performed. Steady laminar flow over a backward-facing step and unsteady laminar flow inside a square driven cavity are computed to validate the algorithm. Two-dimensional, time-accurate simulation of flow through a tilting disk valve with a steady upstream Reynolds number as high as 1000 reveals the complex behavior of 'vortex shedding'. By scaling the results at the Reynolds number of 1000 to peak systolic flow conditions, the maximum value of shear stress on the valve disk is estimated to be 770 dyn cm-2. The 'apparent' Reynolds stress associated with vortex shedding is estimated to be as high as 3900 dyn cm-2 with a vortex shedding frequency of about 26 Hz. The 'apparent' Reynolds stress value is of similar magnitude as reported in experiments but would not be expected to damage blood cells because the spatial scales associated with vortex shedding are much larger than blood cell dimensions. PMID:8188720
A numerical tool for reproducing driver behaviour: experiments and predictive simulations.
Casucci, M; Marchitto, M; Cacciabue, P C
2010-03-01
This paper presents the simulation tool called SDDRIVE (Simple Simulation of Driver performance), which is the numerical computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions, contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a Virtual Reality full scale simulator for validating the simulation. Then the predictive potentiality of the tool is shown by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions. PMID:19249745
Numerical prediction of pressure fluctuations in a prototype pump turbine based on PANS methods
NASA Astrophysics Data System (ADS)
Liu, J. T.; Li, Y.; Gao, Y.; Hu, Q.; Wu, Y. L.
2016-05-01
Unsteady flow and pressure fluctuations within a prototype pump turbine are numerically studied using a nonlinear Partially Averaged Navier-Stokes (PANS) model. The pump turbine is simulated at different operating conditions with a guide vane opening angle of 6°. Results reveal that the performance and relative peak-to-peak amplitudes predicted by the PANS approach agree well with the experimental data. The amplitude of the pressure fluctuation in the vaneless space at turbine mode on an "S" curve increases as the flow rate decreases, and it reaches its maximum when the machine operates close to the runaway line at turbine braking mode. The amplitude of the pressure fluctuation in the vaneless space at turbine braking mode on an "S" curve decreases with decreasing flow rate. Such high pressure fluctuations should be avoided in the design of pump turbines, especially those operating at high-head conditions.
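The quantity compared against experiment, the relative peak-to-peak pressure fluctuation amplitude, is simple to compute from a monitored pressure signal. The signal below is synthetic (mean pressure plus one blade-passing harmonic), not model output.

```python
import numpy as np

def relative_peak_to_peak(pressure, head_pa):
    """Relative peak-to-peak pressure fluctuation amplitude:
    (max - min) of the signal, normalized by the working head in Pa."""
    p = np.asarray(pressure, dtype=float)
    return (p.max() - p.min()) / head_pa

# synthetic vaneless-space signal: 5 bar mean plus a 15 Hz harmonic
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
signal = 5.0e5 + 2.0e4 * np.sin(2 * np.pi * 15 * t)
# working head of 100 m of water, expressed in Pa (rho * g * H)
amp = relative_peak_to_peak(signal, head_pa=1000.0 * 9.81 * 100.0)
```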
An improved technique for global solar radiation estimation using numerical weather prediction
NASA Astrophysics Data System (ADS)
Shamim, M. A.; Remesan, R.; Bray, M.; Han, D.
2015-07-01
Global solar radiation is the driving force of the hydrological cycle, especially for evapotranspiration (ET), yet it is measured quite infrequently. This has led to reliance on indirect estimation techniques for data-scarce regions. This study presents an improved technique that uses information from a numerical weather prediction (NWP) model (the National Center for Atmospheric Research's Mesoscale Meteorological model version 5, MM5) for the determination of a cloud cover index (CI), a major factor in the attenuation of incident solar radiation. The cloud cover index (CI), together with the atmospheric transmission factor (KT) and output from a global clear-sky solar radiation model, was then used to estimate global solar radiation for the Brue catchment located in the southwest of England. The results clearly show an improvement in the estimated global solar radiation in comparison to the prevailing approaches.
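The combination of the three ingredients can be sketched as a simple attenuation of the clear-sky estimate. The linear attenuation form below is an illustrative assumption, not the paper's exact model.

```python
def global_radiation(clear_sky, cloud_index, kt):
    """Attenuate a clear-sky global solar radiation estimate (W/m^2)
    by a cloud cover index CI in [0, 1] and an atmospheric
    transmission factor KT.  Linear attenuation is assumed here."""
    return clear_sky * kt * (1.0 - cloud_index)

# 900 W/m^2 clear-sky value, 40% cloud cover, KT = 0.75
g = global_radiation(clear_sky=900.0, cloud_index=0.4, kt=0.75)
```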
NASA Technical Reports Server (NTRS)
Cohn, S. E.
1982-01-01
Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; the optimal combined data assimilation-initialization method is a modified version of the KB filter.
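The sequential filter described above alternates a model forecast with an observation update. A minimal discrete Kalman filter step, on a scalar toy system rather than an NWP model, looks like this:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the discrete Kalman filter:
    forecast the state with model F, then correct with observation z."""
    # forecast step
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # update step: the gain K weighs forecast vs observation error
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# scalar toy system: persistence model, direct observation of the state
x, P = np.array([0.0]), np.array([[1.0]])
F = H = np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[1.0]])
x, P = kalman_step(x, P, np.array([1.0]), F, H, Q, R)
```

After the update the analysis lies between forecast (0) and observation (1), and the error variance shrinks, which is the essential behavior the KB filter brings to data assimilation.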
NASA Astrophysics Data System (ADS)
Hales, Joel M.; Khachatrian, Ani; Roche, Nicolas J.; Buchner, Stephen; Warner, Jeffrey; McMorrow, Dale
2016-05-01
Two numerical approaches for determining the charge generated in semiconductors via two-photon absorption (2PA) under conditions relevant for laser-based single-event effects (SEE) experiments are presented. The first approach uses a simple analytical expression incorporating a small number of experimental/material parameters while the second approach employs a comprehensive beam propagation method that accounts for all the complex nonlinear optical (NLO) interactions present. The impact of the excitation conditions, device geometry, and specific NLO interactions on the resulting collected charge in silicon devices is also discussed. These approaches can provide value to the radiation-effects community by predicting the impacts that varying experimental parameters will have on 2PA SEE measurements.
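The first, analytical approach can be caricatured by a back-of-the-envelope estimate: the carrier generation rate from 2PA scales with intensity squared, so the collected charge is proportional to the time- and volume-integral of I². All constants and the geometry below are illustrative placeholders, not the paper's parameters.

```python
import numpy as np

# Rough 2PA charge estimate: pair generation rate density = beta2*I^2/(2*E_photon),
# integrated over a Gaussian pulse and a small sensitive volume.
beta2 = 1.5e-11          # 2PA coefficient, m/W (order of magnitude for Si, ~1.26 um)
e_photon = 1.58e-19      # photon energy, J (~0.99 eV)
q_e = 1.602e-19          # elementary charge, C

t = np.linspace(-200e-15, 200e-15, 4001)           # time axis, s
I0 = 1e15                                          # peak intensity, W/m^2 (placeholder)
pulse = I0 * np.exp(-(t / 60e-15) ** 2)            # Gaussian pulse envelope
volume = (2e-6) ** 3                               # sensitive volume, m^3 (placeholder)

pair_rate_density = beta2 * pulse ** 2 / (2 * e_photon)     # pairs / (m^3 s)
dt = t[1] - t[0]
charge = q_e * volume * np.sum(pair_rate_density) * dt      # collected charge, C
```

The second, beam-propagation approach would additionally account for depletion, free-carrier absorption and other nonlinear optical interactions along the beam path, which this simple integral ignores.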
Defect reaction network in Si-doped InP: numerical predictions.
Schultz, Peter Andrew
2013-10-01
This Report characterizes the defects in the defect reaction network in silicon-doped, n-type InP deduced from first principles density functional theory. The reaction network is deduced by following exothermic defect reactions starting with the initially mobile interstitial defects reacting with common displacement damage defects in Si-doped InP until culminating in immobile reaction products. The defect reactions and reaction energies are tabulated, along with the properties of all the silicon-related defects in the reaction network. This Report serves to extend the results for intrinsic defects in SAND 2012-3313, "Simple intrinsic defects in InP: Numerical predictions", to include Si-containing simple defects likely to be present in a radiation-induced defect reaction sequence.
Harrison, Thomas; Ruiz, Jaime; Sloan, Daniel B.; Ben-Hur, Asa; Boucher, Christina
2016-01-01
Pentatricopeptide repeat containing proteins (PPRs) bind to RNA transcripts originating from mitochondria and plastids. There are two classes of PPR proteins. The P class contains tandem P-type motif sequences, and the PLS class contains alternating P, L and S type sequences. In this paper, we describe a novel tool that predicts PPR-RNA interaction; specifically, our method, which we call aPPRove, determines where and how a PLS-class PPR protein will bind to RNA when given a PPR and one or more RNA transcripts by using a combinatorial binding code for site specificity proposed by Barkan et al. Our results demonstrate that aPPRove successfully locates how and where a PPR protein belonging to the PLS class can bind to RNA. For each binding event it outputs the binding site, the amino-acid-nucleotide interaction, and its statistical significance. Furthermore, we show that our method can be used to predict binding events for PLS-class proteins using a known edit site and the statistical significance of aligning the PPR protein to that site. In particular, we use our method to make a conjecture regarding an interaction between CLB19 and the second intronic region of ycf3. The aPPRove web server can be found at www.cs.colostate.edu/~approve. PMID:27560805
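The combinatorial binding code lends itself to a simple scoring sketch: map the amino acids at the two specificity-determining motif positions to a preferred nucleotide, then slide the motif array along the transcript. The four pairings below are the commonly cited core of the Barkan et al. code, used here purely for illustration; they are not aPPRove's statistical model.

```python
# core code: (amino acids at two key motif positions) -> preferred base
CODE = {("T", "D"): "G", ("T", "N"): "A", ("N", "D"): "U", ("N", "N"): "C"}

def score_site(motif_keys, rna):
    """Count motifs whose coded nucleotide matches the aligned base."""
    return sum(1 for key, base in zip(motif_keys, rna)
               if CODE.get(key) == base)

motifs = [("T", "D"), ("T", "N"), ("N", "D"), ("N", "N")]   # codes for "GAUC"
rna = "AGAUCG"
best = max(range(len(rna) - len(motifs) + 1),
           key=lambda i: score_site(motifs, rna[i:i + len(motifs)]))
```

aPPRove extends this idea by attaching a statistical significance to each alignment rather than a raw match count.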
NASA Astrophysics Data System (ADS)
Wu, Chengqing; Hao, Hong
2006-03-01
Accidental detonations in an underground ammunition storage chamber inside a rock mass may cause severe damage to the rock mass around the chamber, adjacent tunnels and chambers, and the ground surface, and in the worst case cause sympathetic detonation of explosives in adjacent storage chambers. To prevent such damage, underground ammunition storage chambers are often situated at a minimum depth below the ground surface, and spaced at a minimum distance from each other, so that damage, should it occur, is limited to the accidental chamber. Different codes and regulations specify minimum embedment depth and separation distance for underground ammunition storage chambers. They are usually given in terms of the rock mass properties and the weight of explosive stored in the chambers. Some empirical formulae, usually based on the peak particle velocity of the stress wave or the maximum strain of the rock mass, are also available to estimate the damage zones in the rock mass from an explosion. All these empirical methods neglect the effects of explosion details, such as the loading density, chamber geometry and explosive distribution. In this paper, a previously calibrated numerical model is used to estimate the damage zones in a granite mass resulting from an accidental explosion in an underground ammunition storage chamber. Effects of various explosion conditions on rock mass damage are investigated. On the basis of the numerical results, some empirical formulae are derived to predict damage zones around the explosion chamber, as well as the safe embedment depth of the storage chamber and the safe separation distance between adjacent chambers. The numerical results are also compared with available empirical formulae and code specifications. It should be noted that the characteristics of stress wave propagation around an ammunition storage chamber have been presented in a preceding paper (Int. J. Blast. Fragm. 5:57-90, 2001).
NASA Astrophysics Data System (ADS)
Ochrymiuk, Tomasz
2016-06-01
Numerical simulations were performed to predict the film cooling effectiveness on a flat plate with a three-dimensional discrete-hole film cooling arrangement. The effects of the basic geometrical characteristics of the holes, i.e. diameter D, length L and pitch S/D, were studied. Different turbulent heat transfer models based on constant and variable turbulent Prandtl number approaches were considered. The variability of the turbulent Prandtl number Pr_t in the energy equation was assumed using an algebraic relation proposed by Kays and Crawford, or employing the Abe, Kondoh and Nagano eddy heat diffusivity closure with two differential transport equations for the temperature variance k_θ and its destruction rate ε_θ. The obtained numerical results were directly compared with data from an experiment based on the Transient Liquid Crystal methodology. All implemented models for turbulent heat transfer performed sufficiently well for the considered case. It was confirmed, however, that the two-equation closure can give a detailed look into film cooling problems without using any time-consuming and inherently unsteady models.
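The Kays-Crawford relation referred to above is an algebraic model for Pr_t as a function of the turbulent Peclet number. The constants below (Pr_t,∞ = 0.85, C = 0.3) follow one common statement of the correlation; they vary slightly between sources, so check against your reference before use.

```python
import math

def prandtl_t_kays_crawford(nu_t_over_nu, pr, pr_t_inf=0.85, c=0.3):
    """Kays-Crawford algebraic model for the turbulent Prandtl number.
    Pe_t = (nu_t/nu) * Pr is the turbulent Peclet number.  Recovers
    Pr_t ~ 1.7 near the wall (Pe_t -> 0) and Pr_t -> pr_t_inf far
    from it (Pe_t -> infinity)."""
    pe_t = nu_t_over_nu * pr
    inv = (1.0 / (2.0 * pr_t_inf)
           + c * pe_t / math.sqrt(pr_t_inf)
           - (c * pe_t) ** 2
           * (1.0 - math.exp(-1.0 / (c * pe_t * math.sqrt(pr_t_inf)))))
    return 1.0 / inv

pr_t_wall = prandtl_t_kays_crawford(1e-6, 0.7)   # near-wall limit
pr_t_core = prandtl_t_kays_crawford(1e4, 0.7)    # free-turbulence limit
```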
Verification of Numerical Weather Prediction Model Results for Energy Applications in Latvia
NASA Astrophysics Data System (ADS)
Sīle, Tija; Cepite-Frisfelde, Daiga; Sennikovs, Juris; Bethers, Uldis
2014-05-01
EU governments have resolved to increase the production and consumption of renewable energy. Most of the renewable energy in Latvia is produced by hydroelectric power plants (HPP), followed by bio-gas, wind power and bio-mass energy production. Wind and HPP power production is sensitive to meteorological conditions. Currently the basis of weather forecasting is Numerical Weather Prediction (NWP) models. There are numerous methodologies for evaluating the quality of NWP results (Wilks 2011), and their application can be conditional on the forecast end user. The goal of this study is to evaluate the performance of a Weather Research and Forecasting (WRF) model (Skamarock 2008) implementation over the territory of Latvia, focusing on wind speed forecasts and quantitative precipitation forecasts. The target spatial resolution is 3 km. Observational data from the Latvian Environment, Geology and Meteorology Centre are used. A number of standard verification metrics are calculated. The sensitivity to the model output interpretation (spatial interpolation of the output versus nearest gridpoint) is investigated. For the precipitation verification the dichotomous verification metrics are used, and sensitivity to different precipitation accumulation intervals is examined. Skamarock, William C. and Klemp, Joseph B. A time-split nonhydrostatic atmospheric model for weather research and forecasting applications. Journal of Computational Physics. 227, 2008, pp. 3465-3485. Wilks, Daniel S. Statistical Methods in the Atmospheric Sciences. Third Edition. Academic Press, 2011.
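The dichotomous verification metrics mentioned above are derived from a 2x2 contingency table of forecast versus observed yes/no events. A minimal sketch of three standard scores:

```python
def dichotomous_scores(hits, false_alarms, misses):
    """Standard dichotomous (yes/no) verification scores from a 2x2
    contingency table: probability of detection (POD), false alarm
    ratio (FAR) and critical success index (CSI)."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + false_alarms + misses)
    return pod, far, csi

# e.g. 30 correctly forecast rain events, 10 false alarms, 20 misses
pod, far, csi = dichotomous_scores(hits=30, false_alarms=10, misses=20)
# POD = 30/50 = 0.6, FAR = 10/40 = 0.25, CSI = 30/60 = 0.5
```

Computing these for several precipitation accumulation intervals is exactly the sensitivity study described in the abstract.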
Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin
2013-01-01
Previous research has found a relationship between individual differences in children’s precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the present study we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of two years. Additionally, at the last time point, we tested children’s informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3; Ginsburg & Baroody, 2003). We found that children’s numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned, non-symbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction. PMID:24076381
2015-01-01
Background Accurately predicting the binding affinities of large sets of protein-ligand complexes is a key challenge in computational biomolecular science, with applications in drug discovery, chemical biology, and structural biology. Since a scoring function (SF) is used to score, rank, and identify drug leads, the fidelity with which it predicts the affinity of a ligand candidate for a protein's binding site has a significant bearing on the accuracy of virtual screening. Despite intense efforts in developing conventional SFs, which are either force-field based, knowledge-based, or empirical, their limited predictive power has been a major roadblock toward cost-effective drug discovery. Therefore, in this work, we present novel SFs employing a large ensemble of neural networks (NN) in conjunction with a diverse set of physicochemical and geometrical features characterizing protein-ligand complexes to predict binding affinity. Results We assess the scoring accuracies of two new ensemble NN SFs based on bagging (BgN-Score) and boosting (BsN-Score), as well as those of conventional SFs in the context of the 2007 PDBbind benchmark that encompasses a diverse set of high-quality protein families. We find that BgN-Score and BsN-Score have more than 25% better Pearson's correlation coefficient (0.804 and 0.816 vs. 0.644) between predicted and measured binding affinities compared to that achieved by a state-of-the-art conventional SF. In addition, these ensemble NN SFs are also at least 19% more accurate (0.804 and 0.816 vs. 0.675) than SFs based on a single neural network that has been traditionally used in drug discovery applications. We further find that ensemble models based on NNs surpass SFs based on the decision-tree ensemble technique Random Forests. Conclusions Ensemble neural networks SFs, BgN-Score and BsN-Score, are the most accurate in predicting binding affinity of protein-ligand complexes among the considered SFs. Moreover, their accuracies are even higher
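The bagging scheme behind BgN-Score can be sketched as bootstrap-resampling the training complexes, fitting one model per resample, and averaging the predictions. The sketch below substitutes a ridge-regression base learner for the paper's neural networks, and the descriptors and affinities are synthetic; it illustrates only the ensemble mechanics, not the published scoring function:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_base(X, y):
    # Stand-in base learner (ridge regression); the paper uses neural networks.
    lam = 1e-3
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def bagged_fit(X, y, n_models=50):
    """Bagging: fit one model per bootstrap resample of the training complexes."""
    n = len(y)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, n, n)      # bootstrap resample with replacement
        models.append(fit_base(X[idx], y[idx]))
    return models

def bagged_predict(models, X):
    # Ensemble prediction = mean of the individual model predictions.
    return np.mean([X @ w for w in models], axis=0)

# Hypothetical physicochemical descriptors -> binding affinity
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=200)
models = bagged_fit(X, y)
pred = bagged_predict(models, X)
r = np.corrcoef(pred, y)[0, 1]
```

Boosting (BsN-Score) differs in that each successive model is fit to the residual errors of the ensemble so far rather than to an independent resample.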
NASA Astrophysics Data System (ADS)
Ostoich, Christopher Mark
due to a dome-induced horseshoe vortex scouring the panel's surface. Comparisons with reduced-order models of heat transfer indicate that they perform with varying levels of accuracy around some portions of the geometry while completely failing to predict significant heat loads in regions where the dome-influenced flow impacts the ceramic panel. Cumulative effects of flow-thermal coupling at later simulation times on the reduction of panel drag and surface heat transfer are quantified. The second fluid-structure study investigates the interaction between a thin metallic panel and a Mach 2.25 turbulent boundary layer with an initial momentum thickness Reynolds number of 1200. A transient, non-linear, large-deformation, 3D finite element solver is developed to compute the dynamic response of the panel. The solver is coupled at the fluid-structure interface with the compressible Navier-Stokes solver, the latter of which is used for a direct numerical simulation of the turbulent boundary layer. In this approach, no simplifying assumptions regarding the structural solution or turbulence modeling are made, in order to obtain detailed solution data. It is found that the thin panel state evolves into a flutter-type response characterized by high-amplitude, high-frequency oscillations into the flow. The oscillating panel disturbs the supersonic flow by introducing compression waves, modifying the turbulence, and generating fluctuations in the power exiting the top of the flow domain. The work in this thesis serves as a step forward in structural response prediction in high-speed flows. The results demonstrate the ability of high-fidelity numerical approaches to serve as a guide for reduced-order model improvement as well as to provide accurate and detailed solution data in scenarios where experimental approaches are difficult or impossible.
Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz
2014-01-01
Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412
Numerical prediction of flow induced fibers orientation in injection molded polymer composites
NASA Astrophysics Data System (ADS)
Oumer, A. N.; Hamidi, N. M.; Mat Sahat, I.
2015-12-01
Since the filling stage of the injection molding process has an important effect on determining the orientation state of the fibers, accurate analysis of the flow field for the mold filling stage becomes a necessity. The aim of this paper is to characterize the flow-induced orientation state of short fibers in injection molding cavities. A dog-bone shaped model is considered for the simulation and experiment. The numerical model for determining fiber orientation during the mold-filling stage of the injection molding process was solved using the Computational Fluid Dynamics (CFD) software MoldFlow. Both the simulation and experimental results showed that two different regions (or three layers of orientation structures) across the thickness of the specimen could be found: a shell region near the mold cavity wall, and a core region at the middle of the cross section. The simulation results support the experimental observations that for thin plates the probability of fiber alignment with the flow direction is high near the mold cavity walls but low in the core region. The results of this study could assist in decisions regarding short fiber reinforced polymer composites.
Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad
2012-10-30
Most benzoic acid derivatives are toxic and may cause serious public health and environmental problems. Two novel, simple, and reliable models are introduced for desk calculation of the oral LD(50) toxicity of benzoic acid compounds in mice, with as much reliance to be placed on their answers as on more complex outputs. They require only the elemental composition and molecular fragments, without using any computer codes. The first model is based only on the number of carbon and hydrogen atoms; it can be improved by several molecular fragments in the second model. For 57 benzoic compounds, for which computed quantitative structure-toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good predictive ability of the second model. PMID:22959133
Samudrala, Ram; Heffron, Fred; McDermott, Jason E.
2009-04-24
The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
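Two of the sequence-based features described above, the amino acid composition of the N-terminal 30 residues and the G+C content, are straightforward to compute. A minimal sketch (the example protein sequence is a made-up placeholder, not a known effector):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def nterm_composition(seq, n=30):
    """Amino-acid composition of the N-terminal n residues (fraction per type)."""
    window = seq[:n].upper()
    return [window.count(aa) / len(window) for aa in AMINO_ACIDS]

def gc_content(dna):
    """G+C fraction of a coding sequence."""
    dna = dna.upper()
    return (dna.count("G") + dna.count("C")) / len(dna)

# Hypothetical N-terminal sequence of a candidate effector
feat = nterm_composition("MSKITLSPQNFRIQKQETTLLKEKSTEKNSLAKSILAVKNHF")
```

A classifier would concatenate such feature vectors with the evolutionary measures before training.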
Numerical prediction of radiation heat transfer in optoelectronics hermetic packaging process
NASA Astrophysics Data System (ADS)
Saha, Chinmoy P.; Zhang, Daming; Liu, Sheng
2005-03-01
Hermetic packaging of high-speed optoelectronic devices is important not only for robustness but also to protect the device from adverse operational environments and ensure reliable communications. We have developed a complete hermetic packaging assembly process for a 10.0 Gbps photonic Mini-DIL module. We have developed and simulated the step-by-step fluxless reflow soldering process (pick and place) of the whole mini-module package and, finally, the hermetic sealing by Finite Element Analysis (FEA) simulation. A commercially available, general-purpose finite element program, ABAQUS, has been used along with Altair HyperWorks as pre- and post-processor for this numerical simulation. The actual 3-D model has been simplified to a 2-D model for the hermetic sealing and radiation heat transfer prediction to reduce computational complexity. During the sealing process at high temperature, there is a possibility of considerable heat transfer from the module's top sealing cap to the temperature-susceptible laser diode (LD). At a critical temperature the LD may malfunction and eventually be destroyed. Radiation along with the conduction heat transfer mechanism has been modeled for this sealing to predict the temperature variation resulting from heat transfer from the weld spots to the LD. Various cavity radiation issues, such as the effect of the radiation view factor, surface blocking and surface emissivity, have been considered and the results discussed. The convection mechanism has been neglected considering the hermeticity of the sealing.
Hu, C.R. (Dept. of Physics)
1998-12-20
A fundamental topological consequence of the unconventional (i.e., non-s-wave) pairing symmetry of high-Tc superconductors (HTSCs) is the existence of midgap (quasi-particle) states (MSs) bound to surfaces, interfaces and other locations. This prediction by the author has most likely solved a decade-old puzzle, viz., the ubiquitous observation of a zero-bias conductance peak (ZBCP) in tunneling experiments performed on HTSCs. There are also numerous other novel consequences of these MSs, predicted by various researchers, including a new Josephson critical current term; an (already observed) low-temperature splitting of the ZBCP, due possibly to a spontaneous breaking of time-reversal symmetry at a sample surface; a new explanation of the paramagnetic Meissner effect; and a giant magnetic moment. Here the author reviews the physical origin of the MSs, several extensions of the original idea, and the many novel consequences of these MSs, some of which have been investigated quantitatively and others deduced only in qualitative terms so far.
Numerical Prediction of the Thermodynamic Properties of Ternary Al-Ni-Pd Alloys
NASA Astrophysics Data System (ADS)
Zagula-Yavorska, Maryana; Romanowska, Jolanta; Kotowski, Sławomir; Sieniawski, Jan
2016-01-01
Thermodynamic properties of the ternary Al-Ni-Pd system, such as exGAlNiPd, µAl(AlNiPd), µNi(AlNiPd) and µPd(AlNiPd) at 1,373 K, were predicted on the basis of the thermodynamic properties of the binary systems included in the investigated ternary system. Predicting exGAlNiPd values was treated as calculating values of the exG function inside a certain area (a Gibbs triangle) once all boundary conditions, that is, the values of exG on all legs of the triangle (exGAlNi, exGAlPd, exGNiPd), are known. This approach is contrary to finding a function value outside a certain area when the function value inside this area is known. The exG and LAl,Ni,Pd ternary interaction parameters in the Muggianu extension of the Redlich-Kister formalism were calculated numerically using Excel and its Solver. The accepted values of the third component xx ranged from 0.01 to 0.1 mole fraction. Values of the LAlNiPd parameters in the Redlich-Kister formula differ for different xx values, but the values of the thermodynamic functions exGAlNiPd, µAl(AlNiPd), µNi(AlNiPd) and µPd(AlNiPd) do not differ significantly. The choice of xx value therefore does not influence the accuracy of the calculations.
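The binary Redlich-Kister expansion and its Muggianu-style ternary extension can be sketched as follows. The interaction parameters in the usage below are illustrative placeholders, not the fitted Al-Ni-Pd values:

```python
def excess_g_binary(xi, xj, L):
    """Redlich-Kister binary excess Gibbs energy: xi*xj * sum_k L[k]*(xi-xj)**k."""
    return xi * xj * sum(Lk * (xi - xj) ** k for k, Lk in enumerate(L))

def excess_g_ternary(x, L_AB, L_AC, L_BC, L_tern=(0.0, 0.0, 0.0)):
    """Sum of the three binary R-K terms plus a composition-weighted
    ternary interaction term x_A*x_B*x_C*(x_A*L0 + x_B*L1 + x_C*L2)."""
    xA, xB, xC = x
    g = (excess_g_binary(xA, xB, L_AB)
         + excess_g_binary(xA, xC, L_AC)
         + excess_g_binary(xB, xC, L_BC))
    g += xA * xB * xC * (xA * L_tern[0] + xB * L_tern[1] + xC * L_tern[2])
    return g
```

On a leg of the Gibbs triangle (one mole fraction equal to zero) the ternary expression reduces exactly to the corresponding binary one, which is the boundary-condition property the abstract describes.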
ERIC Educational Resources Information Center
Powell, Erica Dion
2013-01-01
This study presents a survey developed to measure the skills of entering college freshmen in the areas of responsibility, motivation, study habits, literacy, and stress management, and explores the predictive power of this survey as a measure of academic performance during the first semester of college. The survey was completed by 334 incoming…
Fortin, Élise; Platt, Robert W.; Fontela, Patricia S.; Buckeridge, David L.; Quach, Caroline
2015-01-01
Objective The optimal way to measure antimicrobial use in hospital populations, as a complement to surveillance of resistance, is still unclear. Using respiratory isolates and antimicrobial prescriptions from nine intensive care units (ICUs), this study aimed to identify the indicator of antimicrobial use that predicted prevalence and incidence rates of resistance with the best accuracy. Methods Retrospective cohort study including all patients admitted to three neonatal (NICU), two pediatric (PICU) and four adult ICUs between April 2006 and March 2010. Ten different resistance/antimicrobial use combinations were studied. After adjustment for ICU type, indicators of antimicrobial use were successively tested in regression models to predict resistance prevalence and incidence rates, per 4-week time period, per ICU. Binomial regression and Poisson regression were used to model prevalence and incidence rates, respectively. Multiplicative and additive models were tested, as well as no time lag and a one 4-week-period time lag. For each model, the mean absolute error (MAE) in prediction of resistance was computed. The most accurate indicator was compared to other indicators using t-tests. Results Results for all indicators were equivalent, except in 1 of the 20 scenarios studied. In this scenario, where prevalence of carbapenem-resistant Pseudomonas sp. was predicted with carbapenem use, recommended daily doses per 100 admissions were less accurate than courses per 100 patient-days (p = 0.0006). Conclusions A single best indicator to predict antimicrobial resistance might not exist. Feasibility considerations such as ease of computation or potential external comparisons could be decisive in the choice of an indicator for surveillance of healthcare antimicrobial use. PMID:26710322
Sediment pulses in mountain rivers: 2. Comparison between experiments and numerical predictions
NASA Astrophysics Data System (ADS)
Cui, Yantao; Parker, Gary; Pizzuto, James; Lisle, Thomas E.
2003-09-01
Mountain rivers in particular are prone to sediment input in the form of pulses rather than a more continuous supply. These pulses often enter in the form of landslides from adjacent hillslopes or debris flows from steeper tributaries. The activities of humans such as timber harvesting, road building, and urban development can increase the frequency of sediment pulses. The question as to how mountain rivers accommodate pulses of sediment thus becomes of practical as well as academic significance. In part 1 [, 2003], the results of three laboratory experiments on sediment pulses are reported. It was found there that the pulses were eliminated from the flume predominantly by dispersion of the topographic high. Significant translation was observed only when the pulse material was substantially finer than the ambient load in the river. Here the laboratory data are used to test a numerical model originally devised for predicting the evolution of sediment pulses in field-scale gravel bed streams. The model successfully reproduces the predominantly dispersive deformation of the experimental pulses. Rates of dispersion are generally underestimated, largely because bed load transport rates are underestimated by the transport equation used in the model. The model reproduces the experimental data best when the pulse is significantly coarser than the ambient sediment. In this case, the model successfully predicts the formation and downstream progradation of a delta that formed in the backwater zone of the pulse in run 3. The performance of the model is less successful when the pulse is composed primarily of sand. This is likely because the bed load equation used in the study is specifically designed for gravel. When the model is adapted to conditions characteristic of large, sand bed rivers with low Froude numbers, it predicts substantial translation of pulses as well as dispersion.
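The predominantly dispersive behavior described above can be illustrated, though this is not the authors' sediment-routing model, by a linear-diffusion Exner equation in which a topographic high spreads in place rather than translating downstream:

```python
import numpy as np

def evolve_bed(eta, dx, dt, nu, steps):
    """Linear-diffusion Exner sketch: d(eta)/dt = nu * d2(eta)/dx2,
    with zero-flux (reflecting) boundaries so sediment mass is conserved."""
    eta = eta.copy()
    for _ in range(steps):
        lap = np.zeros_like(eta)
        lap[1:-1] = (eta[2:] - 2 * eta[1:-1] + eta[:-2]) / dx**2
        lap[0] = (eta[1] - eta[0]) / dx**2
        lap[-1] = (eta[-2] - eta[-1]) / dx**2
        eta += dt * nu * lap
    return eta

x = np.linspace(0.0, 10.0, 201)
dx = x[1] - x[0]
pulse = np.exp(-((x - 5.0) ** 2))      # initial topographic high (sediment pulse)
final = evolve_bed(pulse, dx, dt=0.2 * dx**2, nu=1.0, steps=2000)
```

Under pure diffusion the pulse volume is conserved, the peak decays, and the centroid stays put; translation of the pulse, as seen for fine material or low-Froude-number sand bed rivers, requires an advective term this sketch omits.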
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.; Conrad, Joy
1996-01-01
The geomagnetic spatial power spectrum R(sub n)(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum (R(sub nc)(c)) is inversely proportional to (2n + 1) for 1 less than n less than or equal to N(sub E). McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R(sub n) for 3 less than or = n less than or = 12. Extrapolation to the degree 1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 x 10(exp 22) Am(exp 2) rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 (mu)T rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution chi(exp 2) with 2n+1 degrees of freedom is assigned to (2n + 1)R(sub nc)/(R(sub nc)). Extending this to the dipole implies that an exceptionally weak absolute dipole moment (less than or = 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration for such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field
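The core-radius estimate rests on combining McLeod's Rule with upward continuation of the spectrum from the core surface to radius a: R_n(a) = R_nc(c) (c/a)^(2n+4), so log[(2n+1) R_n(a)] is linear in n with slope 2 ln(c/a). A self-consistency sketch (the amplitude K is arbitrary, and the synthetic spectrum stands in for real field-model values):

```python
import numpy as np

a = 6371.2       # Earth's mean radius, km
c_true = 3480.0  # seismologic core radius, km
K = 1.0e9        # arbitrary overall spectral amplitude

# McLeod's Rule at the core surface, continued up to Earth's surface,
# over the calibration degrees 3 <= n <= 12 used in the abstract.
n = np.arange(3, 13)
R_a = K / (2 * n + 1) * (c_true / a) ** (2 * n + 4)

# Recover the core radius from the slope of log[(2n+1) R_n(a)] versus n.
slope = np.polyfit(n, np.log((2 * n + 1) * R_a), 1)[0]
c_est = a * np.exp(slope / 2)
```

With noise-free synthetic input the fit returns the input radius exactly; with real field models the recovered radius differs slightly (3485 km versus 3480 km above).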
Vlaming, M L H; van Duijn, E; Dillingh, M R; Brands, R; Windhorst, A D; Hendrikse, N H; Bosgra, S; Burggraaf, J; de Koning, M C; Fidder, A; Mocking, J A J; Sandman, H; de Ligt, R A F; Fabriek, B O; Pasman, W J; Seinen, W; Alves, T; Carrondo, M; Peixoto, C; Peeters, P A M; Vaes, W H J
2015-08-01
Preclinical development of new biological entities (NBEs), such as human protein therapeutics, requires considerable expenditure of time and costs. Poor prediction of pharmacokinetics in humans further reduces net efficiency. In this study, we show for the first time that pharmacokinetic data of NBEs in humans can be successfully obtained early in the drug development process by the use of microdosing in a small group of healthy subjects combined with ultrasensitive accelerator mass spectrometry (AMS). After only minimal preclinical testing, we performed a first-in-human phase 0/phase 1 trial with a human recombinant therapeutic protein (RESCuing Alkaline Phosphatase, human recombinant placental alkaline phosphatase [hRESCAP]) to assess its safety and kinetics. Pharmacokinetic analysis showed dose linearity from microdose (53 μg) [(14) C]-hRESCAP to therapeutic doses (up to 5.3 mg) of the protein in healthy volunteers. This study demonstrates the value of a microdosing approach in a very small cohort for accelerating the clinical development of NBEs. PMID:25869840
NASA Astrophysics Data System (ADS)
Delahaye, Thibault; Nikitin, Andrei; Rey, Michaël; Szalay, Péter G.; Tyuterev, Vladimir G.
2014-09-01
In this paper we report a new ground state potential energy surface for ethylene (ethene) C2H4 obtained from extended ab initio calculations. The coupled-cluster approach with the perturbative inclusion of connected triple excitations, CCSD(T), and the correlation-consistent polarized valence basis set cc-pVQZ were employed for computations of electronic ground state energies. The fit of the surface included 82 542 nuclear configurations using a sixth-order expansion in curvilinear symmetry-adapted coordinates involving 2236 parameters. Good convergence for variationally computed vibrational levels of the C2H4 molecule was obtained, with an RMS(Obs.-Calc.) deviation of 2.7 cm-1 for fundamental band centers and 5.9 cm-1 for vibrational bands up to 7800 cm-1. Large-scale vibrational and rotational calculations for the 12C2H4, 13C2H4, and 12C2D4 isotopologues were performed using this new surface. Energy levels for J = 20 up to 6000 cm-1 are in good agreement with observations. This represents a considerable improvement with respect to available global predictions of vibrational levels of 13C2H4 and 12C2D4 and rovibrational levels of 12C2H4.
NASA Astrophysics Data System (ADS)
Byun, Jaeseung; Bodony, Daniel; Pantano, Carlos
2014-11-01
Improved order-of-accuracy discretizations often require careful consideration of their numerical stability. We report on new high-order finite difference schemes using Summation-By-Parts (SBP) operators along with the Simultaneous-Approximation-Term (SAT) boundary condition treatment for first and second-order spatial derivatives with variable coefficients. In particular, we present a highly accurate operator for SBP-SAT-based approximations of second-order derivatives with variable coefficients for Dirichlet and Neumann boundary conditions. These terms are responsible for approximating the physical dissipation of kinetic and thermal energy in a simulation, and contain grid metrics when the grid is curvilinear. Analysis using the Laplace transform method shows that strong stability is ensured with Dirichlet boundary conditions, while weaker stability is obtained for Neumann boundary conditions. Furthermore, the benefits of the scheme are shown in the direct numerical simulation (DNS) of a Mach 1.5 compressible turbulent supersonic jet using curvilinear grids and skew-symmetric discretization. In particular, we show that the improved methods allow minimization of the numerical filter often employed in these simulations, and we discuss the qualities of the simulation.
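The summation-by-parts property these schemes rely on can be illustrated with the classical second-order SBP first-derivative operator: D1 = H^(-1) Q with Q + Q^T = B = diag(-1, 0, ..., 0, 1), which mimics integration by parts discretely. This is the standard low-order operator, not the high-order variable-coefficient operators of the paper:

```python
import numpy as np

def sbp_d1(n, h):
    """Second-order SBP first derivative on n points with spacing h.
    Returns D (the derivative operator) and H (the diagonal norm matrix)."""
    H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
    D = np.zeros((n, n))
    D[0, :2] = [-1.0 / h, 1.0 / h]          # one-sided boundary stencils
    D[-1, -2:] = [-1.0 / h, 1.0 / h]
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h   # interior central difference
    return D, H

n, h = 11, 0.1
D, H = sbp_d1(n, h)
Q = H @ D                                    # should satisfy Q + Q.T = B
B = np.zeros((n, n)); B[0, 0] = -1.0; B[-1, -1] = 1.0
x = np.linspace(0.0, 1.0, n)                 # grid with spacing h
```

The SBP identity is what makes energy estimates, and hence the Laplace-transform stability analysis with SAT boundary terms, carry over from the continuous problem to the discrete one.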
Ryan, Natalia; Chorley, Brian; Tice, Raymond R; Judson, Richard; Corton, J Christopher
2016-05-01
Microarray profiling of chemical-induced effects is being increasingly used in medium- and high-throughput formats. Computational methods are described here to identify molecular targets from whole-genome microarray data using as an example the estrogen receptor α (ERα), often modulated by potential endocrine disrupting chemicals. ERα biomarker genes were identified by their consistent expression after exposure to 7 structurally diverse ERα agonists and 3 ERα antagonists in ERα-positive MCF-7 cells. Most of the biomarker genes were shown to be directly regulated by ERα as determined by ESR1 gene knockdown using siRNA as well as through chromatin immunoprecipitation coupled with DNA sequencing analysis of ERα-DNA interactions. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression datasets from experiments using MCF-7 cells, including those evaluating the transcriptional effects of hormones and chemicals. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% and 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) ER reference chemicals including "very weak" agonists. Importantly, the biomarker predictions accurately replicated predictions based on 18 in vitro high-throughput screening assays that queried different steps in ERα signaling. For 114 chemicals, the balanced accuracies were 95% and 98% for activation or suppression, respectively. These results demonstrate that the ERα gene expression biomarker can accurately identify ERα modulators in large collections of microarray data derived from MCF-7 cells. PMID:26865669
Narin, B; Ozyörük, Y; Ulas, A
2014-05-30
This paper describes a two-dimensional code developed for analyzing the two-phase deflagration-to-detonation transition (DDT) phenomenon in granular, energetic, solid explosive ingredients. The two-dimensional model is fully two-phase and is based on a highly coupled system of partial differential equations involving basic flow conservation equations and constitutive relations borrowed from one-dimensional studies in the open literature. The whole system is solved using an optimized high-order accurate, explicit, central-difference scheme with a selective-filtering/shock-capturing (SF-SC) technique to augment central differencing and prevent excessive dispersion. The source terms describing particle-gas interactions in terms of momentum and energy transfer make the equation system quite stiff, and hence its explicit integration difficult. To ease the difficulties, a time-split approach is used, allowing larger time steps. In the paper, the physical model for the sources of the equation system is given for a typical explosive, and several numerical calculations are carried out to assess the developed code. Microscale intergranular and/or intragranular effects, including pore collapse, sublimation and pyrolysis, are not taken into account for ignition and growth, and a basic temperature switch is applied in the calculations to control ignition in the explosive domain. Results for the one-dimensional DDT phenomenon are in good agreement with experimental and computational results available in the literature. A typical shaped-charge wave-shaper case study is also performed to test the two-dimensional features of the code, and the results are in good agreement with those of commercial software. PMID:24721693
Wang, Shiyao; Deng, Zhidong; Yin, Gang
2016-01-01
A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively perform precise error correction over a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108
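The predict-then-gate idea, comparing each new GPS fix against an ARMA-style prediction and rejecting fixes that fail the consensus check, can be sketched with a constant-velocity AR(2) predictor. The coefficients, gate width, and noise level below are illustrative placeholders, not the paper's fitted values:

```python
def ar2_predict(history, phi=(2.0, -1.0)):
    """One-step AR(2) prediction; phi = (2, -1) is a constant-velocity model:
    x_hat[t] = 2*x[t-1] - x[t-2]."""
    return phi[0] * history[-1] + phi[1] * history[-2]

def fuse(history, measurement, gate=3.0, sigma=0.5):
    """Consensus check: a fix farther than gate*sigma from the model
    prediction is treated as a multipath outlier and discarded; otherwise
    the prediction and the fix are averaged (a stand-in for the paper's
    maximum-likelihood fusion)."""
    pred = ar2_predict(history)
    if abs(measurement - pred) > gate * sigma:
        return pred                      # outlier rejected: coast on the model
    return 0.5 * (pred + measurement)    # fix accepted: fuse model and fix
```

For a track passing through positions 9.0 and 10.0, the model predicts 11.0; a fix at 50.0 is rejected as multipath, while a fix at 11.2 is fused to 11.1.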
Volpato, Viola; Alshomrani, Badr; Pollastri, Gianluca
2015-01-01
Intrinsically-disordered regions lack a well-defined 3D structure but play key roles in determining the function of many proteins. Although predictors of disorder have been shown to achieve relatively high rates of correct classification of these segments, improvements over the years have been slow, and accurate methods are needed that can accommodate the ever-increasing amount of structurally determined protein sequences in order to boost predictive performance. In this paper, we propose a predictor for short disordered regions based on bidirectional recurrent neural networks, tested by rigorous five-fold cross-validation on a large, non-redundant dataset collected from MobiDB, a new comprehensive source of protein disorder annotations. The system exploits sequence and structural information in the form of frequency profiles, predicted secondary structure, solvent accessibility, and direct disorder annotations from homologous protein structures (templates) deposited in the Protein Data Bank. The contributions of sequence, structure, and homology information result in large improvements in predictive accuracy. Additionally, the large scale of the training set leads to low false positive rates, making our system a robust and efficient way to address high-throughput disorder prediction. PMID:26307973
Numerical prediction of transition of the F-16 wing at supersonic speeds
NASA Technical Reports Server (NTRS)
Cummings, Russell M.
1993-01-01
This work is part of the high-speed research program currently underway at NASA, which has the goal of understanding the technical requirements for supersonic-hypersonic flight. Specifically, this research is part of a continuing project to study laminar flow over swept wings at high speeds and involves the numerical prediction of the flow about the F-16XL wing. The research uses the CNS/ARC3D codes and the resulting crossflow velocity components to estimate transition locations on the wing. The effect of angle of attack on the extent of laminar flow was found to be minimal. This result can be attributed to the fact that a laminar flow airfoil was used in this study, which has a continuous favorable pressure gradient over approximately the first 20 percent of the chord for angles of attack up to 10 degrees. It should also be noted that even after 20 percent chord the pressure gradient continued to increase slowly, and never decreased before 90 percent chord, except in the more highly swept cases when separation occurs. Angles of attack greater than 10 degrees were not considered, since this study assumes natural laminar flow at normal supersonic cruise flight conditions.
NASA Astrophysics Data System (ADS)
Seo, B. C.; Bradley, A.; Krajewski, W. F.
2015-12-01
The recent upgrade of the NEXRAD radars with dual polarization has helped improve the characterization of microphysical processes in precipitation and thus has enabled precipitation estimation based on identified precipitation types. While this polarimetric capability promises enhanced accuracy in quantitative precipitation estimation (QPE), recent studies show that the polarimetric estimates are still affected by uncertainties arising from the radar beam geometry/sampling space associated with the vertical variability of precipitation. The authors first focus on evaluating the NEXRAD hydrometeor classification product using ground reference data (e.g., ASOS) that provide simple categories of the observed precipitation types (e.g., rain, snow, and freezing rain). They also investigate classification uncertainty features caused by the variability of precipitation between the ground and the altitudes at which the radar samples. Since this variability is closely related to atmospheric conditions (e.g., temperature) near the surface, useful information that is not available in radar observations (e.g., critical thickness and temperature profile) is retrieved from numerical weather prediction (NWP) model data such as Rapid Refresh (RAP)/High Resolution Rapid Refresh (HRRR). The NWP-retrieved information and polarimetric radar data are used together to improve the accuracy of precipitation type identification near the surface. The authors highlight major improvements and discuss limitations in the real-time application.
Ramos, A; Duarte, R J; Mesnard, M
2015-05-01
The fixation of a commercial temporomandibular joint (TMJ) implant is accomplished using screws, which, in some cases, can lead to loosening of the implant. The aim of this study was to predict the evolution of fixation success of a TMJ implant. Numerical models using a Christensen TMJ implant were developed to analyze strain distributions in the adjacent mandibular bone. The geometry of a human mandible was developed based on computed tomography (CT) scans of a cadaveric mandible on which a TMJ implant was subsequently placed. The five most important muscle forces were applied and the anatomical conditions replicated. The evolution of fixation was defined according to a bone response methodology focused on the strain distribution around the screws. Strains and micromotions were analyzed to evaluate implant stability, and the evolution process was examined at three stages: an initial stage with all nine screws in place; a middle stage with three screws removed; and a final stage with only three screws in place. With regard to loosening, successful implant fixation changed the strains in the bone by between 21% and 30% at the final stage. The most important screw positions were #1, #7, and #9. It was observed that, although the commercial Christensen TMJ implant provides nine screw positions for fixation, only three screws were necessary to ensure implant stability and fixation success. PMID:25819477
NASA Astrophysics Data System (ADS)
Dobslaw, H.
2016-07-01
Global surface pressure grids from 14.5 years of 6-hourly analyses from both the operational ECMWF weather prediction model and ERA-Interim are mapped to a common reference orography by means of ECMWF's mean sea-level pressure diagnostic. The approach reduces both relative biases and residual variability by about one order of magnitude and thereby achieves consistency between the two data sets at the level of about 1 hPa. Remaining differences reflect temperature biases and resolution limitations of the reanalysis data set, but are no longer related to the local roughness of the orography or to changes in the spatial resolution of the operational model. The presented reduction method therefore makes it possible to obtain surface pressure time series that combine the long-term consistency of a reanalysis with the much higher resolution and much shorter latency of an operational numerical weather model, making the results suitable for geodetic near-real-time applications that require continuously updated time series that are homogeneous over many years.
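The core of mapping surface pressure between orographies is a hypsometric-style height correction. The sketch below is an illustrative stand-in for ECMWF's mean sea-level pressure diagnostic (which is a more careful variant of the same idea); the station values and the layer-mean temperature `t_mean` are invented for the example.

```python
from math import exp

# Hedged sketch of reducing surface pressure from the model orography to a
# common reference height via the hypsometric equation. The abstract's method
# (ECMWF's MSLP diagnostic) refines this; values here are illustrative.

G = 9.80665      # gravitational acceleration, m s^-2
R_DRY = 287.05   # dry-air gas constant, J kg^-1 K^-1

def reduce_pressure(p_surf, z_model, z_ref, t_mean):
    """Map surface pressure from model orography z_model (m) to reference
    height z_ref (m), assuming a layer-mean virtual temperature t_mean (K)."""
    return p_surf * exp(G * (z_model - z_ref) / (R_DRY * t_mean))

# A grid point 300 m above the reference orography at 950 hPa, 280 K layer mean:
p_ref = reduce_pressure(950.0, z_model=300.0, z_ref=0.0, t_mean=280.0)
# Reducing to the lower reference level raises the pressure to ~985 hPa.
```

The sensitivity of the result to `t_mean` is exactly why the abstract notes that remaining differences between the two mapped data sets reflect temperature biases.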
A case study of GOES-15 imager bias characterization with a numerical weather prediction model
NASA Astrophysics Data System (ADS)
Ren, Lu
2016-09-01
The infrared imager onboard the Geostationary Operational Environmental Satellite 15 (GOES-15) provides temporally continuous observations over a limited spatial domain. To quantify bias of the GOES-15 imager, observations from four infrared channels (2, 3, 4, and 6) are compared with simulations from the numerical weather prediction model and radiative transfer model. One-day clear-sky infrared observations from the GOES-15 imager over an oceanic domain during nighttime are selected. Two datasets, Global Forecast System (GFS) analysis and ERA-Interim reanalysis, are used as inputs to the radiative transfer model. The results show that magnitudes of biases for the GOES-15 surface channels are approximately 1 K using two datasets, whereas the magnitude of bias for the GOES-15 water vapor channel can reach 5.5 K using the GFS dataset and 2.5 K using the ERA dataset. The GOES-15 surface channels show positive dependencies on scene temperature, whereas the water vapor channel has a weak dependence on scene temperature. The strong dependence of bias on sensor zenith angle for the GOES-15 water vapor channel using GFS analysis implies large biases might exist in GFS water vapor profiles.
NASA Astrophysics Data System (ADS)
Plant, N. G.; Long, J.; Dalyander, S.; Thompson, D.; Miselis, J. L.
2013-12-01
Natural resource and hazard management of barrier islands requires an understanding of geomorphic changes associated with long-term processes and storms. Uncertainty exists in understanding how long-term processes interact with the geomorphic changes caused by storms and the resulting perturbations of the long-term evolution trajectories. We use high-resolution data sets to initialize and correct high-fidelity numerical simulations of oceanographic forcing and resulting barrier island evolution. We simulate two years of observed storms to determine the individual and cumulative impacts of these events. Results are separated into cross-shore and alongshore components of sediment transport and compared with observed topographic and bathymetric changes during these time periods. The discrete island change induced by these storms is integrated with previous knowledge of long-term net alongshore sediment transport to project island evolution. The approach has been developed and tested using data collected at the Chandeleur Island chain off the coast of Louisiana (USA). The simulation time period included impacts from tropical and winter storms, as well as a human-induced perturbation associated with construction of a sand berm along the island shoreline. The predictions and observations indicated that storm and long-term processes both contribute to the migration, lowering, and disintegration of the artificial berm and natural island. Further analysis will determine the relative importance of cross-shore and alongshore sediment transport processes and the dominant time scales that drive each of these processes and subsequent island morphologic response.
NASA Astrophysics Data System (ADS)
Shrestha, D. L.; Robertson, D.; Bennett, J.; Ward, P.; Wang, Q. J.
2012-12-01
Through the Water Information Research and Development Alliance (WIRADA), CSIRO is conducting research to improve the flood and short-term streamflow forecasting services delivered by the Australian Bureau of Meteorology. WIRADA aims to build and test systems that generate ensemble flood and short-term streamflow forecasts with lead times of up to 10 days by integrating rainfall forecasts from Numerical Weather Prediction (NWP) models with hydrological modelling. Here we present an overview of the latest progress towards developing this system. Rainfall during the forecast period is a major source of uncertainty in streamflow forecasting, and ensemble rainfall forecasts are used to characterise this uncertainty. In Australia, NWP models provide forecasts of rainfall and other weather conditions for lead times of up to 10 days. However, rainfall forecasts from Australian NWP models are deterministic and often contain systematic errors. We use a simplified Bayesian joint probability (BJP) method to post-process rainfall forecasts from the latest generation of Australian NWP models. The BJP method generates reliable and skilful ensemble rainfall forecasts. The post-processed rainfall ensembles are then used to force a semi-distributed conceptual rainfall-runoff model to produce ensemble streamflow forecasts. The performance of the ensemble streamflow forecasts is evaluated on a number of Australian catchments, and the benefits of using post-processed rainfall forecasts are demonstrated.
Numerical Prediction of Surface Heat Flux During Multiple Jets Firing for Missile Control
NASA Astrophysics Data System (ADS)
Saha, S.; Sinha, P. K.; Chakraborty, D.
2013-01-01
Numerical simulations are carried out to obtain the flowfield and heat flux arising from the interactions of different control thrusters (vernier, pitch, yaw, roll, and divert) among themselves and with the free stream at different operating altitudes. Three critical points on a typical missile trajectory are chosen and combinations of the thrusters operating at those conditions are considered. Simulations have also been performed for DT motor interaction with the free stream at different altitudes. The interaction of the different motor flowfields with the free stream produces a very complex flowfield. Flow gradients are very high close to the nozzle exit because of the high-altitude operation of the motors. The three-dimensional RANS equations are solved along with the k-ɛ turbulence model on an unstructured tetrahedral grid using commercial CFD software. The flow properties, along with the surface heat flux distribution for four isothermal wall temperatures (350, 450, 550, and 650 K), are computed and provided for surface temperature prediction. It is observed that the high-heat-flux region shrinks with increasing altitude and that the heat transfer coefficient is independent of wall temperature.
A case study of GOES-15 imager bias characterization with a numerical weather prediction model
NASA Astrophysics Data System (ADS)
Ren, Lu
2016-04-01
The infrared imager onboard the Geostationary Operational Environmental Satellite 15 (GOES-15) provides temporally continuous observations over a limited spatial domain. To quantify bias of the GOES-15 imager, observations from four infrared channels (2, 3, 4, and 6) are compared with simulations from the numerical weather prediction model and radiative transfer model. One-day clear-sky infrared observations from the GOES-15 imager over an oceanic domain during nighttime are selected. Two datasets, Global Forecast System (GFS) analysis and ERA-Interim reanalysis, are used as inputs to the radiative transfer model. The results show that magnitudes of biases for the GOES-15 surface channels are approximately 1 K using two datasets, whereas the magnitude of bias for the GOES-15 water vapor channel can reach 5.5 K using the GFS dataset and 2.5 K using the ERA dataset. The GOES-15 surface channels show positive dependencies on scene temperature, whereas the water vapor channel has a weak dependence on scene temperature. The strong dependence of bias on sensor zenith angle for the GOES-15 water vapor channel using GFS analysis implies large biases might exist in GFS water vapor profiles.
Alfvenic Turbulence from the Sun to 65 Solar Radii: Numerical predictions.
NASA Astrophysics Data System (ADS)
Perez, J. C.; Chandran, B. D. G.
2015-12-01
The upcoming NASA Solar Probe Plus (SPP) mission will fly to within 9 solar radii of the solar surface, about 7 times closer to the Sun than any previous spacecraft. This historic mission will gather unprecedented remote-sensing data and the first in-situ measurements of the plasma in the solar atmosphere, which will revolutionize our knowledge and understanding of turbulence and the other processes that heat the solar corona and accelerate the solar wind. This close to the Sun, the background solar-wind properties are highly inhomogeneous. As a result, outward-propagating Alfven waves (AWs) arising from the random motions of the photospheric magnetic-field footpoints undergo strong non-WKB reflections and trigger a vigorous turbulent cascade. In this talk I will discuss recent progress in the understanding of reflection-driven Alfven turbulence in this scenario by means of high-resolution numerical simulations, with the goal of predicting the detailed nature of the velocity and magnetic field fluctuations that the SPP mission will measure. In particular, I will place special emphasis on relating the simulations to the relevant physical mechanisms that might govern the radial evolution of the turbulence spectra of outward/inward-propagating fluctuations, and discuss the conditions that lead to universal power laws.
NASA Astrophysics Data System (ADS)
Mulcahy, J. P.; Walters, D. N.; Bellouin, N.; Milton, S. F.
2014-05-01
The inclusion of the direct and indirect radiative effects of aerosols in high-resolution global numerical weather prediction (NWP) models is increasingly recognised as important for the accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations is evaluated, including three-dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme and, finally, initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important in cases of large sporadic aerosol events such as major dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing long-wave radiation over West Africa due to a better representation of dust. However, uncertainties in dust optical properties propagate to its direct effect and the subsequent model response. Inclusion of the indirect aerosol effects improves surface radiation biases at the North Slope of Alaska ARM site due to lower cloud amounts in high-latitude clean-air regions. This leads to improved temperature and height forecasts in this region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short-range forecasts. However, the indirect aerosol effect leads to a strengthening of the low-level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. Regional impacts on the African Easterly Jet (AEJ) are also presented, with the large dust loading in the aerosol climatology enhancing the heat low over West Africa and weakening the AEJ. This study highlights the
Olmedilla, Luis; Pérez-Peña, José María; Ripoll, Cristina; Garutti, Ignacio; de Diego, Roberto; Salcedo, Magdalena; Jiménez, Consuelo; Bañares, Rafael
2009-10-01
Early diagnosis of graft dysfunction in liver transplantation is essential for taking appropriate action. Indocyanine green clearance is closely related to liver function and can be measured noninvasively by spectrophotometry. The objectives of this study were to prospectively analyze the relationship between the indocyanine green plasma disappearance rate (ICGPDR) and early graft function after liver transplantation and to evaluate the role of ICGPDR in the prediction of severe graft dysfunction (SGD). One hundred seventy-two liver transplants from deceased donors were analyzed. Ten patients had SGD: 6 were retransplanted, and 4 died while waiting for a new graft. The plasma disappearance rate was measured 1 hour (PDRr60) and within the first 24 hours (PDR1) after reperfusion, and it was significantly lower in the SGD group. PDRr60 and PDR1 were excellent predictors of SGD. A threshold PDRr60 value of 10.8%/minute and a PDR1 value of 10%/minute accurately predicted SGD, with areas under the receiver operating characteristic curve of 0.94 (95% confidence interval, 0.89-0.97) and 0.96 (95% confidence interval, 0.92-0.98), respectively. In addition, survival was significantly lower in patients with PDRr60 values below 10.8%/minute (53%, 47%, and 47% versus 95%, 94%, and 90% at 3, 6, and 12 months, respectively) and with PDR1 values below 10%/minute (62%, 62%, and 62% versus 94%, 92%, and 88%). In conclusion, very early noninvasive measurement of ICGPDR can accurately predict early severe graft dysfunction and mortality after liver transplantation. PMID:19790138
Johnson, B M; Guan, X; Gammie, C F
2008-06-24
The descriptions of some of the numerical tests in our original paper are incomplete, making reproduction of the results difficult. We provide the missing details here. The relevant tests are described in section 4 of the original paper (Figures 8-11).
NASA Astrophysics Data System (ADS)
Mialon, Bruno; Khrabrov, Alex; Khelil, Saloua Ben; Huebner, Andreas; Da Ronch, Andrea; Badcock, Ken; Cavagna, Luca; Eliasson, Peter; Zhang, Mengmeng; Ricci, Sergio; Jouhaud, Jean-Christophe; Rogé, Gilbert; Hitzel, Stephan; Lahuta, Martin
2011-11-01
The dynamic derivatives are widely used in linear aerodynamic models to determine the flying qualities of an aircraft: the ability to predict them reliably, quickly, and sufficiently early in the design process is vital in order to avoid late and costly component redesigns. This paper describes experimental and computational research on the determination of dynamic derivatives carried out within the FP6 European project SimSAC. Numerical and experimental results are compared for two aircraft configurations: a generic civil transport aircraft with a wing-fuselage-tail configuration, called the DLR-F12, and a generic Transonic CRuiser (TCR), which is a canard configuration. Static and dynamic wind tunnel tests have been carried out for both configurations and are briefly described in this paper. The data generated for both the DLR-F12 and TCR configurations include force and pressure coefficients obtained during small-amplitude pitch, roll, and yaw oscillations, while the data for the TCR configuration also include large-amplitude oscillations, in order to investigate dynamic effects on nonlinear aerodynamic characteristics. In addition, dynamic derivatives have been determined for both configurations with a wide range of tools, from linear aerodynamics (vortex lattice methods) to CFD. This work confirms that an increase in fidelity level enables the dynamic derivatives to be calculated more accurately. Linear aerodynamics tools are shown to give satisfactory results but are very sensitive to the geometry/mesh input data. Although all the quasi-steady CFD approaches give comparable results (robustness) for steady dynamic derivatives, they do not allow prediction of the unsteady components of the dynamic derivatives (angular derivatives with respect to time): this can be done either with a fully unsteady approach, i.e. with a time-marching scheme, or with frequency-domain solvers, both of which provide comparable results for the DLR-F12 test case. As far as
Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild
2013-08-01
This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software, with movement variables derived from differences between subsequent video frames used for quantitative analysis. Sensitivities, specificities, and areas under the curve were estimated for the first recording, the second recording, and the mean of both. FMs were classified based on the Prechtl approach of general movement assessment, and CP status was reported at 2 years. Nine children developed CP, and all of their recordings had absent FMs. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than using only one recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies. PMID:23343036
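A generic frame-differencing motion measure in the spirit of this abstract can be sketched as follows. This is an illustrative stand-in, not the software used in the study: the toy "clip" and the pooling of row/column standard deviations are invented choices.

```python
import numpy as np

# Sketch of a frame-differencing motion measure: the centroid of motion is
# the intensity-weighted centre of |frame_t - frame_{t-1}|, and its
# variability over a recording is summarised by a standard deviation.
# Generic illustration only, not the study's actual software.

def motion_centroids(frames):
    """Centroid (row, col) of |frame_t - frame_{t-1}| for each frame pair."""
    centroids = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        diff = np.abs(cur.astype(float) - prev.astype(float))
        total = diff.sum()
        if total == 0:                      # no motion between these frames
            continue
        rows, cols = np.indices(diff.shape)
        centroids.append((float((rows * diff).sum() / total),
                          float((cols * diff).sum() / total)))
    return np.array(centroids)

def centroid_variability(frames):
    """Pooled standard deviation of the motion centroid over the recording."""
    c = motion_centroids(frames)
    return float(c.std(axis=0).mean())

# Toy clip: a single bright pixel (a moving "limb") sliding along one row.
frames = [np.zeros((8, 8)) for _ in range(4)]
for t, f in enumerate(frames):
    f[4, 2 + t] = 1.0
```

For this toy clip the centroid drifts only along the column axis, so the pooled variability is driven entirely by the horizontal motion, which is the kind of quantity the CSD summarises over a whole recording.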
NASA Astrophysics Data System (ADS)
Ansari, R.; Mirnezhad, M.; Sahmani, S.
2015-04-01
Molecular mechanics theory has been widely used to investigate the mechanical properties of nanostructures analytically. However, only a limited number of studies have utilized a molecular mechanics model to predict the elastic properties of boron nitride nanotubes (BNNTs). In the current study, the mechanical properties of chiral single-walled BNNTs are predicted analytically based on an accurate molecular mechanics model. For this purpose, the Perdew-Burke-Ernzerhof exchange-correlation functional, within density functional theory (DFT) in the generalized gradient approximation (GGA), is adopted to evaluate the force constants used in the molecular mechanics model. Explicit expressions based on the principles of molecular mechanics are then given to calculate the surface Young's modulus and Poisson's ratio of single-walled BNNTs for different values of tube diameter and types of chirality. Moreover, the values of surface Young's modulus, Poisson's ratio, and bending stiffness of boron nitride sheets are obtained via DFT as byproducts. The results predicted by the present model are in reasonable agreement with those reported by other models in the literature.
NASA Astrophysics Data System (ADS)
Hansen-Goos, Hendrik
2016-04-01
We derive an analytical equation of state for the hard-sphere fluid that is within 0.01% of computer simulations for the whole range of the stable fluid phase. In contrast, the commonly used Carnahan-Starling equation of state deviates by up to 0.3% from simulations. The derivation uses the functional form of the isothermal compressibility from the Percus-Yevick closure of the Ornstein-Zernike relation as a starting point. Two additional degrees of freedom are introduced, which are constrained by requiring the equation of state to (i) recover the exact fourth virial coefficient B4 and (ii) involve only integer coefficients on the level of the ideal gas, while providing best possible agreement with the numerical result for B5. Virial coefficients B6 to B10 obtained from the equation of state are within 0.5% of numerical computations, and coefficients B11 and B12 are within the error of numerical results. We conjecture that even higher virial coefficients are reliably predicted.
Hansen-Goos, Hendrik
2016-04-28
We derive an analytical equation of state for the hard-sphere fluid that is within 0.01% of computer simulations for the whole range of the stable fluid phase. In contrast, the commonly used Carnahan-Starling equation of state deviates by up to 0.3% from simulations. The derivation uses the functional form of the isothermal compressibility from the Percus-Yevick closure of the Ornstein-Zernike relation as a starting point. Two additional degrees of freedom are introduced, which are constrained by requiring the equation of state to (i) recover the exact fourth virial coefficient B4 and (ii) involve only integer coefficients on the level of the ideal gas, while providing best possible agreement with the numerical result for B5. Virial coefficients B6 to B10 obtained from the equation of state are within 0.5% of numerical computations, and coefficients B11 and B12 are within the error of numerical results. We conjecture that even higher virial coefficients are reliably predicted. PMID:27131556
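The Carnahan-Starling benchmark discussed above is easy to verify numerically: expanding Z(eta) = (1 + eta + eta^2 - eta^3)/(1 - eta)^3 in powers of eta yields reduced virial coefficients 4, 10, 18, 28, ..., and the value 18 misses the exact B4 of about 18.365 by roughly 2%, which is why the paper constrains its new equation of state to recover B4 exactly. A short check, independent of the paper's own derivation:

```python
from math import comb

# Series coefficients of the Carnahan-Starling compressibility factor
#   Z(eta) = (1 + eta + eta^2 - eta^3) / (1 - eta)^3.
# Using 1/(1-eta)^3 = sum_k C(k+2, 2) eta^k, the coefficient of eta^n in Z
# is the reduced virial coefficient B_{n+1} implied by CS. (Illustrative
# check of the standard equation of state, not the paper's new one.)

def cs_virial_coefficients(nmax):
    numer = [1, 1, 1, -1]                       # 1 + eta + eta^2 - eta^3
    coeffs = []
    for n in range(nmax):
        # Cauchy product: coefficient of eta^n in numer * (1 - eta)^(-3)
        c = sum(numer[j] * comb(n - j + 2, 2) for j in range(min(n, 3) + 1))
        coeffs.append(c)
    return coeffs

cs = cs_virial_coefficients(5)   # [1, 4, 10, 18, 28]
```

The pattern is n^2 + 3n for the coefficient of eta^n, so CS gives integer virial coefficients at every order; the exact reduced B4 is irrational, so no equation of this exact form can match it.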
The assimilation of hyperspectral satellite radiances in Global Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Jung, James Alan
Hyperspectral infrared radiance data present opportunities for significant improvements in data assimilation and Numerical Weather Prediction (NWP). The increase in spectral resolution available from the Atmospheric Infrared Sounder (AIRS) sensor, for example, will make it possible to improve the accuracy of temperature and moisture fields. Improved accuracy of the NWP analyses and forecasts should result. In this thesis we incorporate these hyperspectral data, using new assimilation methods, into the National Centers for Environmental Prediction's (NCEP) operational Global Data Assimilation System/Global Forecast System (GDAS/GFS) and investigate their impact on the weather analysis and forecasts. The spatial and spectral resolution of AIRS data used by NWP centers was initially based on theoretical calculations. Synthetic data were used to determine channel selection and spatial density for real time data assimilation. Several problems were previously not fully addressed. These areas include: cloud contamination, surface related issues, dust, and temperature inversions. In this study, several improvements were made to the methods used for assimilation. Spatial resolution was increased to examine every field of view, instead of one in nine or eighteen fields of view. Improved selection criteria were developed to find the best profile for assimilation from a larger sample. New cloud and inversion tests were used to help identify the best profiles to be assimilated in the analysis. The spectral resolution was also increased from 152 to 251 channels. The channels added were mainly near the surface, in the water vapor absorption band, and in the shortwave region. The GFS was run at or near operational resolution and contained all observations available to the operational system. For each experiment the operational version of the GFS was used during that time. The use of full spatial and enhanced spectral resolution data resulted in the first demonstration of
NASA Astrophysics Data System (ADS)
Pike, A.; Danner, E.; Lindley, S.; Melton, F. S.; Nemani, R. R.; Hashimoto, H.; Rajagopalan, B.; Caldwell, R. J.
2009-12-01
In the Central Valley of California, stream temperature is a critical indicator of habitat quality for endangered salmonid species and affects the re-licensing of major water projects and dam operations worth billions of dollars. However, many water resource-related decisions in regulated rivers rely upon models using a daily-to-monthly mean temperature standard, and current water temperature models are limited by the lack of spatially detailed meteorological forecasts. To address this issue, we utilize the coupled TOPS-WRF (Terrestrial Observation and Prediction System - Weather Research and Forecasting) framework, a high-resolution (15 min, 1 km) assimilation of satellite-derived meteorological observations and numerical weather forecasts, to improve the spatial and temporal resolution of stream temperature predictions. In this study, we developed a high-resolution mechanistic one-dimensional stream temperature model (sub-hourly time step, sub-kilometer spatial resolution) for the Upper Sacramento River in northern California. The model uses a heat budget approach to calculate the rate of heat transfer to and from the river, with the atmospheric inputs for the heat budget formulation provided by the TOPS-WRF model. The hydrodynamics of the river (flow velocity and channel geometry) are characterized using densely spaced channel cross-sections and flow data. Water temperatures are calculated by considering the hydrologic and thermal characteristics of the river and solving the advection-diffusion equation in a mixed Eulerian-Lagrangian framework. Modeled hindcast temperatures for a test period (May - November 2008) substantially improve upon the existing daily-to-monthly mean temperature standards, closely approximating both the magnitude and the phase of measured water temperatures. Furthermore, our model results reveal important longitudinal patterns in diel temperature variation that are unique to regulated rivers, and may be critical to
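The heat-budget advection step described in the abstract can be illustrated with a minimal Eulerian sketch. The paper itself uses a mixed Eulerian-Lagrangian scheme; this first-order upwind version, and all the numbers in it (velocity, depth, heat flux), are invented for the example.

```python
import numpy as np

# Minimal Eulerian sketch of a stream-temperature heat-budget step:
#   dT/dt + u dT/dx = H / (rho * c_p * d),
# where H is the net surface heat flux (W m^-2) and d the water depth (m).
# Illustrative only; the study's model is mixed Eulerian-Lagrangian.

RHO_CP = 1000.0 * 4184.0   # water density * specific heat, J m^-3 K^-1

def step(T, u, dx, dt, heat_flux, depth):
    """Advance stream temperature one time step (upwind advection + source)."""
    T = np.asarray(T, dtype=float)
    Tnew = T.copy()
    # Upwind difference for u > 0: information travels downstream.
    Tnew[1:] -= u * dt / dx * (T[1:] - T[:-1])
    # Surface heat exchange warms/cools a well-mixed column of depth `depth`.
    Tnew += heat_flux * dt / (RHO_CP * depth)
    return Tnew

T0 = np.full(50, 12.0)   # uniform 12 C initial field
T1 = step(T0, u=0.5, dx=100.0, dt=60.0, heat_flux=400.0, depth=1.0)
# With a uniform field the advection term vanishes, and the 400 W/m^2 flux
# warms a 1 m column by 400*60/(4.184e6) K in one minute.
```

The explicit upwind step is stable here because the Courant number u*dt/dx = 0.3 is below 1; the mixed Eulerian-Lagrangian treatment in the paper relaxes exactly this kind of time-step restriction.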
A hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments.
NASA Astrophysics Data System (ADS)
Shamim, M. A.; Bray, M.; Ishak, A. M.; Remesan, R.; Han, D.
2009-09-01
The importance of solar radiation at the earth's surface is reflected in its wide range of applications in meteorology, agricultural science, engineering, hydrology, crop water requirements, climate change, and energy assessment. It is quite random in nature, as it undergoes various processes of assimilation and dispersion on its way to earth. Compared to other meteorological parameters, solar radiation is measured quite infrequently; for example, the worldwide ratio of stations collecting solar radiation to those collecting temperature is 1:500 (Badescu, 2008). Researchers therefore have to rely on indirect estimation techniques, including nonlinear models, artificial intelligence (e.g. neural networks), remote sensing, and numerical weather prediction (NWP). This study proposes a hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments. It uses the PSU/NCAR Mesoscale Modelling system (MM5) (Grell et al., 1995) to parameterise the cloud effect on extraterrestrial radiation by dividing the atmosphere into four layers of very high (6-12 km), high (3-6 km), medium (1.5-3 km), and low (0-1.5 km) altitudes above the earth. Various cloud forms are assumed to exist within each of these layers. An hourly time series of upper-air pressure and relative humidity data sets corresponding to all of these layers is determined for the Brue catchment, southwest UK, using MM5. The cloud index (CI) for each layer i is then determined following Yang and Koike (2002):

c_i = 1/(p_bi - p_ti) * integral from p_ti to p_bi of max[0.0, (Rh - Rh_cri)/(1 - Rh_cri)] dp

where p_bi and p_ti represent the air pressure at the bottom and top of each layer and Rh_cri is the critical value of relative humidity at which a certain cloud type is formed. Output from a global clear-sky solar radiation model (MRM v-5) (Kambezidis and Psiloglou, 2008) is used along with meteorological data sets of temperature and precipitation and astronomical information. The analysis is aided by the
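The layer cloud-index integral can be evaluated from a discrete pressure/relative-humidity profile by simple trapezoidal quadrature. The profile values and Rh_cri below are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of the layer cloud-index integral of Yang and Koike (2002):
#   c_i = 1/(p_b - p_t) * integral_{p_t}^{p_b} max(0, (Rh - Rh_cri)/(1 - Rh_cri)) dp
# evaluated by trapezoidal quadrature. Profile values are illustrative.

def cloud_index(p, rh, rh_cri):
    """Cloud index for one layer from a pressure/relative-humidity profile.

    p      -- pressure levels (hPa), ordered from layer top to layer bottom
    rh     -- relative humidity (0..1) at each level
    rh_cri -- critical relative humidity for cloud formation in this layer
    """
    p = np.asarray(p, dtype=float)
    rh = np.asarray(rh, dtype=float)
    integrand = np.maximum(0.0, (rh - rh_cri) / (1.0 - rh_cri))
    # Trapezoidal integration over pressure, normalised by layer thickness.
    integral = 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(p))
    return float(integral / (p[-1] - p[0]))

# Example: a layer that is saturated only in its lower half.
p_levels = [500.0, 600.0, 700.0]   # hPa, top -> bottom
rh_profile = [0.4, 0.8, 1.0]
ci = cloud_index(p_levels, rh_profile, rh_cri=0.8)
```

The index is 0 for a layer entirely drier than Rh_cri and 1 for a fully saturated layer, so it acts as a pressure-weighted cloud-cover fraction for each of the four altitude bands.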
Ertan, H.B.
1999-09-01
For prediction of the static and dynamic performance of doubly-salient motors, it is essential to know their flux linkage-position-excitation characteristics and also the static torque characteristics. At the design stage, determination of these characteristics presents difficulties because of the highly nonlinear behavior of the magnetic circuit. It is possible to use a numerical field solution of the complete motor to obtain this information. This, however, requires expertise with a professional program and may be expensive if used to search for the best design. This paper shows that a reduced model can be used to obtain the desired information accurately. It is also shown that obtaining field solutions for just a pair of teeth is in fact enough for accurately predicting the flux linkage and torque characteristics of a motor. The approach introduced here makes it possible to search for an optimum design (even on a PC) for maximizing average torque or reducing noise and vibration problems, since the effort of producing the model and the computation time are greatly reduced.
NASA Astrophysics Data System (ADS)
Limaye, A. B. S.; Lamb, M. P.
2014-12-01
Terraces eroded into sediment (cut-fill) and bedrock (strath) preserve a geomorphic record of river activity. River terraces are often thought to form when a river switches from a period of low vertical incision rates and valley widening to high vertical incision rates and terrace abandonment. Consequently, terraces are frequently interpreted to reflect landscape response to changing external drivers, including tectonics, sea-level, and most commonly, climate. In contrast, unsteady lateral migration in meandering rivers may generate river terraces even under constant vertical incision and without changes in external forcing. To explore this latter mechanism, we use a numerical model and an automated terrace detection algorithm to simulate landscape evolution by a vertically incising, meandering river and isolate the age and geometric fingerprints of intrinsically generated river terraces. Simulations indicate that terraces form for a wide range of lateral and vertical incision rates, and the time interval between unique terrace levels is limited by a characteristic timescale for relief generation. Surprisingly, intrinsically generated terraces are commonly paired, an attribute that is thought to be diagnostic of climate change. For low ratios of vertical-to-lateral erosion rates, modeled terraces are longitudinally extensive and typically dip toward the valley center, and terrace slope is proportional to the ratio of vertical to lateral erosion. Evolving, spatial differences in bank strength between bedrock and sediment reduce terrace formation frequency and length, and can explain sub-linear terrace margins at valley boundaries. Comparison of model predictions to natural river terraces indicates that terrace length is the most reliable indicator of terrace formation by pulses of vertical incision, and may contain the imprint of past climate change on landscapes.
NASA Astrophysics Data System (ADS)
Shrestha, D. L.; Robertson, D. E.; Wang, Q. J.; Pagano, T. C.; Hapuarachchi, H. A. P.
2013-05-01
The quality of precipitation forecasts from four Numerical Weather Prediction (NWP) models is evaluated over the Ovens catchment in Southeast Australia. Precipitation forecasts are compared with observed precipitation at point and catchment scales and at different temporal resolutions. The four models evaluated are from the Australian Community Climate Earth-System Simulator (ACCESS) suite: ACCESS-G with an 80 km resolution, ACCESS-R at 37.5 km, ACCESS-A at 12 km, and ACCESS-VT at 5 km. The skill of the NWP precipitation forecasts varies considerably between rain gauging stations. In general, the high spatial resolution (ACCESS-A and ACCESS-VT) and regional (ACCESS-R) NWP models overestimate precipitation in dry, low-elevation areas and underestimate it in wet, high-elevation areas. The global model (ACCESS-G) consistently underestimates precipitation at all stations, and the bias increases with station elevation. The skill varies with forecast lead time and, in general, decreases as lead time increases. When evaluated at finer spatial and temporal resolution (e.g. 5 km, hourly), the precipitation forecasts appear to have very little skill. There is moderate skill at short lead times when the forecasts are averaged up to daily and/or catchment scale. The precipitation forecasts fail to reproduce the diurnal cycle seen in observed precipitation. Significant sampling uncertainty in the skill scores suggests that more data are required for a reliable evaluation of the forecasts. The non-smooth decay of skill with forecast lead time can be attributed to the diurnal cycle in the observations and to sampling uncertainty. Future work is planned to assess the benefits of using the NWP rainfall forecasts for short-term streamflow forecasting. Our findings suggest that it is necessary to remove the systematic biases in rainfall forecasts, particularly those from low resolution models, before the rainfall forecasts can be used for streamflow forecasting.
NASA Astrophysics Data System (ADS)
Shrestha, D. L.; Robertson, D. E.; Wang, Q. J.; Pagano, T. C.; Hapuarachchi, P.
2012-11-01
The quality of precipitation forecasts from four Numerical Weather Prediction (NWP) models is evaluated over the Ovens catchment in southeast Australia. Precipitation forecasts are compared with observed precipitation at point and catchment scales and at different temporal resolutions. The four models evaluated are from the Australian Community Climate Earth-System Simulator (ACCESS) suite: ACCESS-G with an 80 km resolution, ACCESS-R at 37.5 km, ACCESS-A at 12 km, and ACCESS-VT at 5 km. The high spatial resolution NWP models (ACCESS-A and ACCESS-VT) appear to be relatively free of bias (i.e. <30%) for 24 h total precipitation forecasts. The low resolution models (ACCESS-R and ACCESS-G) have widespread systematic biases as large as 70%. When evaluated at finer spatial and temporal resolution (e.g. 5 km, hourly) against station observations, the precipitation forecasts appear to have very little skill. There is moderate skill at short lead times when the forecasts are averaged up to daily and/or catchment scale. The skill decreases with increasing lead time, and the global model ACCESS-G shows no significant skill beyond 7 days. The precipitation forecasts fail to reproduce the diurnal cycle seen in observed precipitation. Significant sampling uncertainty in the skill scores suggests that more data are required for a reliable evaluation of the forecasts. Future work is planned to assess the benefits of using the NWP rainfall forecasts for short-term streamflow forecasting. Our findings suggest that it is necessary to remove the systematic biases in rainfall forecasts, particularly those from low resolution models, before the rainfall forecasts can be used for streamflow forecasting.
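A systematic bias of the kind quantified above (e.g. "<30%" vs "as large as 70%") is commonly expressed as the relative difference between accumulated forecast and observed precipitation. The snippet below is a generic illustration of such a percent-bias metric, not necessarily the exact score used in the study:

```python
import numpy as np

def percent_bias(forecast, observed):
    """Relative bias of accumulated precipitation, in percent.
    Positive values mean the model overestimates precipitation,
    negative values mean it underestimates.
    """
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 100.0 * (forecast.sum() - observed.sum()) / observed.sum()
```

For example, a model that forecasts 9 mm over a period when 6 mm fell has a percent bias of +50%.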
NASA Astrophysics Data System (ADS)
Kaufmann, P.; Schubiger, F.; Binder, P.
The Swiss Model, a hydrostatic numerical weather prediction model, was used at MeteoSwiss for operational forecasting at the meso-beta scale (mesh size 14 km) from 1994 until 2001. The quality of its quantitative precipitation forecasts is evaluated for the eight years of operation. The seasonal precipitation over Switzerland and its dependence on altitude are examined for both model forecasts and observations, using the Swiss rain gauge network, which samples daily precipitation at over 400 stations, for verification. The mean diurnal cycle of precipitation is verified against the automatic surface observation network on the basis of hourly recordings. In winter, there is no diurnal forcing of precipitation and the modelled precipitation agrees with the observed values. In summer, convection in the model starts too early, overestimates the amount of precipitation and is too short-lived. Skill scores calculated for six-hourly precipitation sums show a constant level of performance over the model's life cycle. Dry and wet seasons influence the model performance more than the model changes during its operational period. The comprehensive verification of the model precipitation is complemented by a discussion of a number of heavy rain events investigated during the RAPHAEL project. The sensitivities to a number of model components are illustrated, namely the driving boundary fields, the internal partitioning of parameterised and grid-scale precipitation, the advection scheme and the vertical resolution. While only a small impact of the advection scheme was expected, the increasing overprediction of rain with increasing vertical resolution in the RAPHAEL case studies was larger than previously thought. Frequent updating of the boundary conditions improves the positioning of the rain in the model.
Evaluation of numerical weather predictions performed in the context of the project DAPHNE
NASA Astrophysics Data System (ADS)
Tegoulias, Ioannis; Pytharoulis, Ioannis; Bampzelis, Dimitris; Karacostas, Theodore
2014-05-01
The region of Thessaly in central Greece is one of the country's main areas of agricultural production. Severe weather phenomena affect the agricultural production in this region, with adverse effects for farmers and the national economy. For this reason, the project DAPHNE aims at tackling the problem of drought by means of weather modification, through the development of the tools necessary to support the application of a rainfall enhancement program. In the present study the numerical weather prediction system WRF-ARW is used in order to assess its ability to represent extreme weather phenomena in the region of Thessaly. WRF is integrated over three domains covering Europe, the Eastern Mediterranean and Central-Northern Greece (Thessaly and a large part of Macedonia) using telescoping nesting with grid spacings of 15 km, 5 km and 1.667 km, respectively. The cases examined span the transitional and warm period (April to September) of the years 2008 to 2013, including days with thunderstorm activity. Model results are evaluated against all available surface observations and radar products, taking into account the spatial characteristics and intensity of the storms. Preliminary results indicate a good level of agreement between the simulated and observed fields as far as the standard parameters (such as temperature, humidity and precipitation) are concerned. Moreover, the model generally exhibits the potential to represent the occurrence of convective activity, but not its exact spatiotemporal characteristics. Acknowledgements: This research work has been co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
Technology Transfer Automated Retrieval System (TEKTRAN)
Numerical weather prediction (NWP) models developed by various weather centers produce estimates of the soil temperature state. In this study in situ data collected over the state of Oklahoma is used to assess and compare three NWP surface (soil) temperature products. These are 1) the integrated for...
NASA Technical Reports Server (NTRS)
Atlas, R.
1981-01-01
Descriptive results from a study of cyclone evolution in the DST-6 forecast case from 0000 GMT 19 February 1976 are presented. The effects of satellite data, orography and diabatic processes on the numerical prediction of cyclone development and displacement are assessed.
Mohaghegh, S.; Balan, B.; Ameri, S.
1995-12-31
The ultimate test for any technique that claims to predict permeability from well log data is accurate and verifiable prediction of permeability for wells from which only the well log data are available. So far, all the available models and techniques have been tried on data that include both well logs and the corresponding permeability values. This approach is at best nothing more than linear or nonlinear curve fitting. The objective of this paper is to test the capability of the most promising of these techniques in independent prediction of permeability (where the corresponding permeability values are not available or have not been used in development of the model) in a heterogeneous formation. These techniques are "Multiple Regression" and "Virtual Measurements using Artificial Neural Networks." For the purposes of this study, several wells from a heterogeneous formation in West Virginia were selected, for which well log data and corresponding permeability values were available. One well was set aside, the techniques were applied to the remaining data, and a permeability model for the field was developed. The model was then applied to the well that had been separated from the rest of the data, and the results were compared. This approach tests the generalization power of each technique. The results show that although Multiple Regression provides acceptable results for wells that were used during model development (good curve fitting), it lacks consistent generalization capability, meaning that it does not perform as well on data it has not been exposed to (the data from the well that was set aside). On the other hand, the Virtual Measurement technique provides steady generalization power. This technique is able to perform the permeability prediction task even for entire wells with no prior exposure to their permeability profiles.
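The evaluation protocol described above — hold out one well entirely, fit on the rest, then score predictions on the held-out well — can be sketched as follows for the multiple-regression baseline. The data, feature names and coefficients here are synthetic and purely illustrative; the paper's actual logs and models are not reproduced.

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Ordinary least squares with an intercept: y ~ X @ w + b.
    An illustrative stand-in for the paper's 'Multiple Regression' baseline."""
    A = np.column_stack([X, np.ones(len(X))])  # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(X, coef):
    A = np.column_stack([X, np.ones(len(X))])
    return A @ coef

# Leave-one-well-out evaluation on hypothetical synthetic data:
# 60 depth samples with 3 log curves, the first 10 belonging to the
# held-out well that the model never sees during fitting.
rng = np.random.default_rng(0)
logs = rng.normal(size=(60, 3))
perm = logs @ np.array([1.5, -0.7, 0.3]) + 2.0  # synthetic "permeability"
held_out, train = slice(0, 10), slice(10, None)

coef = fit_multiple_regression(logs[train], perm[train])
pred = predict(logs[held_out], coef)
rmse = np.sqrt(np.mean((pred - perm[held_out]) ** 2))
```

Because the held-out well contributes nothing to the fit, `rmse` measures generalization rather than curve-fitting quality, which is exactly the distinction the abstract draws between the two techniques.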
NASA Astrophysics Data System (ADS)
Yost, Charles
Although often hard to forecast correctly, mesoscale convective systems (MCSs) are responsible for a majority of warm-season, localized extreme rain events. This study investigates displacement errors often observed by forecasters and researchers in the Global Forecast System (GFS) and the North American Mesoscale (NAM) models, in addition to the European Centre for Medium-Range Weather Forecasts (ECMWF) and the 4-km convection-allowing NSSL-WRF models. Using archived radar data and Stage IV precipitation data from April to August of 2009 to 2011, MCSs were recorded and sorted into unique six-hour intervals. The locations of these MCSs were compared to the associated predicted precipitation field in all models using the Method for Object-Based Diagnostic Evaluation (MODE) tool, produced by the Developmental Testbed Center, and verified through manual analysis. A northward bias exists in the location of the forecasts at all lead times in the GFS, NAM, and ECMWF models: the MODE tool found that 74%, 68%, and 65% of the forecasts were too far to the north of the observed rainfall in the GFS, NAM, and ECMWF models, respectively. The higher-resolution NSSL-WRF model produced a near-neutral location forecast error, with 52% of the cases too far to the south. The GFS model consistently moved the MCSs too quickly, with 65% of the cases located to the east of the observed MCS. The mean forecast displacement errors from the GFS and NAM were on average 266 km and 249 km, respectively, while the ECMWF and NSSL-WRF produced much lower averages of 179 km and 158 km. A case study of the Dubuque, IA MCS on 28 July 2011 was analyzed to identify the root cause of this bias. This MCS shattered several rainfall records and required over 50 people to be rescued from mobile home parks around the area. This devastating MCS, which was a classic Training Line/Adjoining Stratiform archetype, had numerous northward-biased forecasts from all models, which are examined here. As common with
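The directional biases and mean displacement errors reported above come down to comparing the centroids of matched forecast and observed precipitation objects. The sketch below illustrates that idea on a simple boolean grid; it is a toy in the spirit of MODE's centroid-difference attribute, not the MODE tool itself, and the grid spacing parameters are hypothetical.

```python
import numpy as np

def centroid_displacement_km(obs_mask, fcst_mask, dy_km=4.0, dx_km=4.0):
    """Centroid displacement of a forecast precipitation object relative to
    the observed one, on a regular grid with north-up row ordering
    (row 0 = northernmost row).

    Returns (north_km, east_km, total_km): positive north_km means the
    forecast object sits north of the observed object.
    """
    oy, ox = np.argwhere(obs_mask).mean(axis=0)
    fy, fx = np.argwhere(fcst_mask).mean(axis=0)
    north = (oy - fy) * dy_km   # smaller row index = further north
    east = (fx - ox) * dx_km
    return north, east, float(np.hypot(north, east))
```

Aggregating the sign of `north_km` over many matched cases gives exactly the kind of "percent of forecasts too far north" statistic quoted in the abstract.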
NASA Astrophysics Data System (ADS)
Hrubý, Jan
2012-04-01
Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both in terms of the physical concepts and the required computational power. The available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require considerable computation time. For this reason, modelers often resort to the unrealistic ideal-gas approximation. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of mass, energy, and momentum conservation for both phases.
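The continuity property emphasized above can be illustrated with the simplest piecewise representation of a property surface in (density, internal energy): bilinear interpolation on a rectangular table, which is continuous across cell boundaries, whereas independent per-cell Taylor expansions generally jump there. This is only a minimal sketch of the idea, not the thermodynamically consistent IAPWS-95 representation of the paper.

```python
import numpy as np

class BilinearTable:
    """Tabulated property surface v(rho, e) with density and internal energy
    as independent variables, evaluated by bilinear interpolation so that
    the result is continuous across cell boundaries."""

    def __init__(self, rho, e, values):
        # rho, e: strictly increasing 1-D grids; values: shape (len(rho), len(e))
        self.rho, self.e, self.v = map(np.asarray, (rho, e, values))

    def __call__(self, rho, e):
        i = int(np.clip(np.searchsorted(self.rho, rho) - 1, 0, len(self.rho) - 2))
        j = int(np.clip(np.searchsorted(self.e, e) - 1, 0, len(self.e) - 2))
        t = (rho - self.rho[i]) / (self.rho[i + 1] - self.rho[i])
        u = (e - self.e[j]) / (self.e[j + 1] - self.e[j])
        return ((1 - t) * (1 - u) * self.v[i, j]
                + t * (1 - u) * self.v[i + 1, j]
                + (1 - t) * u * self.v[i, j + 1]
                + t * u * self.v[i + 1, j + 1])
```

Choosing (rho, e) as the lookup coordinates mirrors the abstract's point: a flow solver updating conserved quantities can evaluate pressure and temperature directly, without variable transformations or iterative inversion.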
Nakatsuji, Hiroshi
2012-09-18
Just as Newton's laws govern classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not be solved for atoms and molecules containing many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as