Science.gov

Sample records for accurate numerical prediction

  1. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
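
    A schematic illustration of the reduced order modeling idea described above, using made-up training data: waveforms on a common time grid are compressed into a small linear basis via an SVD, and the basis coefficients are fitted as smooth functions of mass ratio so that a new waveform can be evaluated almost instantly. This is only a sketch of the general technique, not the authors' actual surrogate pipeline.

    ```python
    import numpy as np

    # Hypothetical training set: waveforms h_train[i] computed by an NR code
    # at mass ratios q_train[i], all sampled on a common time grid.
    def build_surrogate(q_train, h_train, n_basis=4, deg=3):
        """Compress training waveforms and fit coefficients as functions of mass ratio q."""
        U, s, Vt = np.linalg.svd(np.asarray(h_train), full_matrices=False)
        basis = Vt[:n_basis]                      # reduced basis e_i(t)
        coeffs = np.asarray(h_train) @ basis.T    # projections c_i(q) at the training points
        fits = [np.polyfit(q_train, coeffs[:, i], deg) for i in range(n_basis)]
        return basis, fits

    def evaluate_surrogate(q, basis, fits):
        """Predict a waveform at a new mass ratio q: h(t; q) ~ sum_i c_i(q) e_i(t)."""
        c = np.array([np.polyval(f, q) for f in fits])
        return c @ basis

    # Toy usage with made-up "waveforms" (damped chirps), purely illustrative.
    t = np.linspace(0, 1, 512)
    q_train = np.linspace(1, 10, 20)
    h_train = [np.sin(2 * np.pi * (10 + q) * t**2) * np.exp(-3 * t) for q in q_train]
    basis, fits = build_surrogate(q_train, h_train)
    h_pred = evaluate_surrogate(5.5, basis, fits)
    ```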

  3. Fast and accurate numerical method for predicting gas chromatography retention time.

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-08-01

    Predictive modeling of gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Accordingly, different approaches for determining an analyte's ki in column chromatography have been developed, the main one being based on the thermodynamic properties of the component and the characteristics of the stationary phase. These models can be used to estimate parameters and to optimize temperature programming in gas chromatography for the separation of compounds. Several authors have proposed numerical methods for solving these models, but those methods demand considerable computational time. Hence, a new method for predicting analyte retention time is presented. The algorithm is an alternative to traditional methods because it recasts the calculation as a root-finding problem within defined intervals. The proposed approach allows the retention time (tr) to be calculated with user-specified accuracy and with significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
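
    A minimal sketch of the root-finding idea described above, under standard assumptions that are not taken from the paper: the analyte reaches the column outlet when the integral of dt / [t_M (1 + k(T(t)))] reaches 1, with a hypothetical van 't Hoff-type retention factor and a linear temperature program, so the retention time tr is the root of a monotone function of time.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    # Hypothetical column/analyte parameters (illustrative only).
    t_M = 1.0                     # hold-up time, min (assumed constant)
    A, B = -8.0, 3500.0           # assumed van 't Hoff form: ln k = A + B/T
    T0, rate = 313.0, 10.0        # linear program: T(t) = T0 + rate*t  [K, K/min]

    def k(T):
        return np.exp(A + B / T)

    def local_rate(tau):
        """Instantaneous fraction of the column traversed per unit time."""
        return 1.0 / (t_M * (1.0 + k(T0 + rate * tau)))

    def traversed_fraction(t):
        """Fraction of the column traversed after time t (monotone in t)."""
        return quad(local_rate, 0.0, t)[0]

    # The retention time is the root of traversed_fraction(t) - 1 in a bracketing interval.
    t_r = brentq(lambda t: traversed_fraction(t) - 1.0, 1e-6, 200.0)
    print(f"predicted retention time: {t_r:.2f} min")
    ```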

  4. Numerical predictions in acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1992-01-01

    Computational Aeroacoustics (CAA) involves the calculation of the sound produced by a flow as well as the underlying flowfield itself from first principles. This paper describes the numerical challenges of CAA and recent research efforts to overcome these challenges. In addition, it includes the benefits of CAA in removing restrictions of linearity, single frequency, constant parameters, low Mach numbers, etc. found in standard acoustic analyses as well as means for evaluating the validity of these numerical approaches. Finally, numerous applications of CAA to both classical as well as modern problems of concern to the aerospace industry are presented.

  5. Numerical predictions in acoustics

    NASA Astrophysics Data System (ADS)

    Hardin, Jay C.

    Computational Aeroacoustics (CAA) involves the calculation of the sound produced by a flow as well as the underlying flowfield itself from first principles. This paper describes the numerical challenges of CAA and recent research efforts to overcome these challenges. In addition, it includes the benefits of CAA in removing restrictions of linearity, single frequency, constant parameters, low Mach numbers, etc. found in standard acoustic analyses as well as means for evaluating the validity of these numerical approaches. Finally, numerous applications of CAA to both classical as well as modern problems of concern to the aerospace industry are presented.

  6. Hounsfield unit density accurately predicts ESWL success.

    PubMed

    Magnuson, William J; Tomera, Kevin M; Lance, Raymond S

    2005-01-01

    Extracorporeal shockwave lithotripsy (ESWL) is a commonly used non-invasive treatment for urolithiasis. Helical CT scans provide much more detailed imaging of the patient with urolithiasis, including the ability to measure the density of urinary stones. In this study we tested the hypothesis that the density of urinary calculi as measured by CT can predict successful ESWL treatment. 198 patients were treated with ESWL at Alaska Urological Associates between January 2002 and April 2004. Of these, 101 met study inclusion criteria, with accessible CT scans and stones ranging from 5-15 mm. Follow-up imaging demonstrated stone freedom in 74.2%. The mean Hounsfield density values for the stone-free and residual-stone groups were significantly different (93.61 vs. 122.80, p < 0.0001). Receiver operating characteristic (ROC) analysis showed that a Hounsfield density value of 93 or less carries a 90% or better chance of stone freedom following ESWL for upper tract calculi of 5-15 mm.

  7. Fast and accurate predictions of covalent bonds in chemical space.

    PubMed

    Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
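
    A worked statement of the Taylor estimates discussed above, in notation assumed here rather than quoted from the paper: for a vertical interpolation at fixed geometry, a coupling parameter λ linearly connects the reference and target external potentials, and the target energy is estimated from the reference plus one (or two) alchemical derivatives.

    ```latex
    E(\lambda) \;=\; E\!\left[(1-\lambda)\,v_A + \lambda\,v_B\right], \qquad 0 \le \lambda \le 1,
    \qquad
    E_B \;\approx\; E_A \;+\; \left.\frac{\partial E}{\partial \lambda}\right|_{\lambda=0}
    \;+\; \tfrac{1}{2}\left.\frac{\partial^{2} E}{\partial \lambda^{2}}\right|_{\lambda=0}
    \;(\text{second order})
    ```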

  8. Fast and accurate predictions of covalent bonds in chemical space

    NASA Astrophysics Data System (ADS)

    Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (˜1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H 2+ . Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi

  10. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
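
    A small sketch of the assimilation step mentioned above: combining the parameterized and numerically simulated runup as a weighted average, with weights chosen inversely proportional to each predictor's error variance. The specific weighting scheme and the numbers below are assumptions for illustration, not necessarily those used in the study.

    ```python
    import numpy as np

    def assimilate(pred_param, pred_numeric, var_param, var_numeric):
        """Inverse-variance weighted average of two runup predictions.

        Minimizes the variance of the combined estimate when the two
        prediction errors are unbiased and uncorrelated (an assumption).
        """
        w = var_numeric / (var_param + var_numeric)        # weight on the parameterized model
        combined = w * pred_param + (1.0 - w) * pred_numeric
        combined_var = (var_param * var_numeric) / (var_param + var_numeric)
        return combined, combined_var

    # Illustrative numbers only: 0.8 m vs 1.1 m predictions with different error variances.
    runup, var = assimilate(0.8, 1.1, var_param=0.04, var_numeric=0.09)
    print(f"assimilated runup: {runup:.2f} m, error variance: {var:.3f} m^2")
    ```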

  11. Numerical evolution of multiple black holes with accurate initial data

    SciTech Connect

    Galaviz, Pablo; Bruegmann, Bernd; Cao Zhoujian

    2010-07-15

    We present numerical evolutions of three equal-mass black holes using the moving puncture approach. We calculate puncture initial data for three black holes solving the constraint equations by means of a high-order multigrid elliptic solver. Using these initial data, we show the results for three black hole evolutions with sixth-order waveform convergence. We compare results obtained with the BAM and AMSS-NCKU codes with previous results. The approximate analytic solution to the Hamiltonian constraint used in previous simulations of three black holes leads to different dynamics and waveforms. We present some numerical experiments showing the evolution of four black holes and the resulting gravitational waveform.

  12. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
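
    A compact sketch of the two decision functions compared in the study, applied to a hypothetical discrete posterior over counts: "probability matching" draws a response from the posterior, while "maximum a posteriori" always reports the mode. The posterior values below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical posterior over a discrete estimate (e.g., number of objects 1..8),
    # made bimodal to echo the bimodal priors learned by participants.
    values = np.arange(1, 9)
    posterior = np.array([0.05, 0.25, 0.10, 0.05, 0.05, 0.10, 0.30, 0.10])
    posterior = posterior / posterior.sum()

    def respond_sample(values, posterior):
        """Decision function 1: draw a single sample from the posterior."""
        return rng.choice(values, p=posterior)

    def respond_map(values, posterior):
        """Decision function 2: report the maximum of the posterior (the mode)."""
        return values[np.argmax(posterior)]

    print([respond_sample(values, posterior) for _ in range(5)], respond_map(values, posterior))
    ```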

  14. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  15. Numerical ability predicts mortgage default.

    PubMed

    Gerardi, Kristopher; Goette, Lorenz; Meier, Stephan

    2013-07-01

    Unprecedented levels of US subprime mortgage defaults precipitated a severe global financial crisis in late 2008, plunging much of the industrialized world into a deep recession. However, the fundamental reasons for why US mortgages defaulted at such spectacular rates remain largely unknown. This paper presents empirical evidence showing that the ability to perform basic mathematical calculations is negatively associated with the propensity to default on one's mortgage. We measure several aspects of financial literacy and cognitive ability in a survey of subprime mortgage borrowers who took out loans in 2006 and 2007, and match them to objective, detailed administrative data on mortgage characteristics and payment histories. The relationship between numerical ability and mortgage default is robust to controlling for a broad set of sociodemographic variables, and is not driven by other aspects of cognitive ability. We find no support for the hypothesis that numerical ability impacts mortgage outcomes through the choice of the mortgage contract. Rather, our results suggest that individuals with limited numerical ability default on their mortgage due to behavior unrelated to the initial choice of their mortgage.

  17. Numerical ability predicts mortgage default

    PubMed Central

    Gerardi, Kristopher; Goette, Lorenz; Meier, Stephan

    2013-01-01

    Unprecedented levels of US subprime mortgage defaults precipitated a severe global financial crisis in late 2008, plunging much of the industrialized world into a deep recession. However, the fundamental reasons for why US mortgages defaulted at such spectacular rates remain largely unknown. This paper presents empirical evidence showing that the ability to perform basic mathematical calculations is negatively associated with the propensity to default on one’s mortgage. We measure several aspects of financial literacy and cognitive ability in a survey of subprime mortgage borrowers who took out loans in 2006 and 2007, and match them to objective, detailed administrative data on mortgage characteristics and payment histories. The relationship between numerical ability and mortgage default is robust to controlling for a broad set of sociodemographic variables, and is not driven by other aspects of cognitive ability. We find no support for the hypothesis that numerical ability impacts mortgage outcomes through the choice of the mortgage contract. Rather, our results suggest that individuals with limited numerical ability default on their mortgage due to behavior unrelated to the initial choice of their mortgage. PMID:23798401

  18. On the Accurate Prediction of CME Arrival At the Earth

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Hess, Phillip

    2016-07-01

    We discuss relevant issues regarding the accurate prediction of CME arrival at the Earth, from both observational and theoretical points of view. In particular, we clarify the importance of separating the study of the CME ejecta from that of the ejecta-driven shock in interplanetary CMEs (ICMEs). For a number of CME-ICME events well observed by SOHO/LASCO, STEREO-A and STEREO-B, we carry out 3-D measurements by superimposing geometries onto the ejecta and the sheath separately. These measurements are then used to constrain a drag-based model, which is improved by including a height dependence of the drag coefficient. Combining these factors allows us to predict both fronts at 1 AU and compare with actual in-situ observations. We show an ability to predict the sheath arrival with an average error of under 4 hours, with an RMS error of about 1.5 hours; for the CME ejecta, the error is less than two hours with an RMS error within an hour. Using the best available CME observations, we demonstrate the power of our method in accurately predicting CME arrival times. The limitations and implications of the prediction method are also discussed.
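
    A minimal numerical sketch of a drag-based model of the kind constrained above. All parameter values and the form of the height-dependent drag coefficient are assumed for illustration and do not reproduce the authors' modification: the front relaxes toward the ambient solar wind speed, and the arrival time at 1 AU is read off from the integration.

    ```python
    import numpy as np

    AU = 1.496e8          # km
    R_SUN = 6.957e5       # km

    def dbm_arrival(r0_km, v0, w=450.0, gamma0=0.2e-7, dt=60.0):
        """Integrate dv/dt = -gamma(r) * (v - w) * |v - w| from r0 out to 1 AU.

        gamma(r) is given a simple assumed height dependence (decaying with
        distance); speeds in km/s, distances in km, dt in seconds.
        """
        r, v, t = r0_km, v0, 0.0
        while r < AU:
            gamma = gamma0 * (20.0 * R_SUN / r)      # assumed 1/r fall-off of the drag parameter
            v += -gamma * (v - w) * abs(v - w) * dt
            r += v * dt
            t += dt
        return t / 3600.0, v                         # transit time [hours], speed at 1 AU

    t_arr, v_arr = dbm_arrival(r0_km=20.0 * R_SUN, v0=900.0)
    print(f"transit time ~ {t_arr:.1f} h, speed at 1 AU ~ {v_arr:.0f} km/s")
    ```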

  19. Towards numerical prediction of cavitation erosion

    PubMed Central

    Fivel, Marc; Franc, Jean-Pierre; Chandra Roy, Samir

    2015-01-01

    This paper is intended to provide a potential basis for a numerical prediction of cavitation erosion damage. The proposed method can be divided into two steps. The first step consists in determining the loading conditions due to cavitation bubble collapses. It is shown that individual pits observed on highly polished metallic samples exposed to cavitation for a relatively small time can be considered as the signature of bubble collapse. By combining pitting tests with an inverse finite-element modelling (FEM) of the material response to a representative impact load, loading conditions can be derived for each individual bubble collapse in terms of stress amplitude (in gigapascals) and radial extent (in micrometres). This step requires characterizing as accurately as possible the properties of the material exposed to cavitation. This characterization should include the effect of strain rate, which is known to be high in cavitation erosion (typically of the order of several thousands s−1). Nanoindentation techniques as well as compressive tests at high strain rate using, for example, a split Hopkinson pressure bar test system may be used. The second step consists in developing an FEM approach to simulate the material response to the repetitive impact loads determined in step 1. This includes a detailed analysis of the hardening process (isotropic versus kinematic) in order to properly account for fatigue as well as the development of a suitable model of material damage and failure to account for mass loss. Although the whole method is not yet fully operational, promising results are presented that show that such a numerical method might be, in the long term, an alternative to correlative techniques used so far for cavitation erosion prediction. PMID:26442139

  20. Towards numerical prediction of cavitation erosion.

    PubMed

    Fivel, Marc; Franc, Jean-Pierre; Chandra Roy, Samir

    2015-10-01

    This paper is intended to provide a potential basis for a numerical prediction of cavitation erosion damage. The proposed method can be divided into two steps. The first step consists in determining the loading conditions due to cavitation bubble collapses. It is shown that individual pits observed on highly polished metallic samples exposed to cavitation for a relatively small time can be considered as the signature of bubble collapse. By combining pitting tests with an inverse finite-element modelling (FEM) of the material response to a representative impact load, loading conditions can be derived for each individual bubble collapse in terms of stress amplitude (in gigapascals) and radial extent (in micrometres). This step requires characterizing as accurately as possible the properties of the material exposed to cavitation. This characterization should include the effect of strain rate, which is known to be high in cavitation erosion (typically of the order of several thousands s(-1)). Nanoindentation techniques as well as compressive tests at high strain rate using, for example, a split Hopkinson pressure bar test system may be used. The second step consists in developing an FEM approach to simulate the material response to the repetitive impact loads determined in step 1. This includes a detailed analysis of the hardening process (isotropic versus kinematic) in order to properly account for fatigue as well as the development of a suitable model of material damage and failure to account for mass loss. Although the whole method is not yet fully operational, promising results are presented that show that such a numerical method might be, in the long term, an alternative to correlative techniques used so far for cavitation erosion prediction. PMID:26442139

  1. Accurate numerical simulation of short fiber optical parametric amplifiers.

    PubMed

    Marhic, M E; Rieznik, A A; Kalogerakis, G; Braimiotis, C; Fragnito, H L; Kazovsky, L G

    2008-03-17

    We improve the accuracy of numerical simulations for short fiber optical parametric amplifiers (OPAs). Instead of using the usual coarse-step method, we adopt a model for birefringence and dispersion which uses fine-step variations of the parameters. We also improve the split-step Fourier method by exactly treating the nonlinear ellipse rotation terms. We find that results obtained this way for two-pump OPAs can be significantly different from those obtained by using the usual coarse-step fiber model, and/or neglecting ellipse rotation terms.
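
    A bare-bones scalar split-step Fourier propagator, included only to illustrate the general method the authors refine: dispersion is handled in the Fourier domain and Kerr nonlinearity in the time domain. It omits the birefringence, fine-step parameter variation and ellipse-rotation terms that are the actual subject of the paper, and all parameter values are assumed.

    ```python
    import numpy as np

    def split_step_nlse(A0, dt, L, dz, beta2, gamma):
        """Propagate a field envelope A(t) over fiber length L (symmetrized split-step)."""
        A = A0.astype(complex)
        omega = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)
        half_disp = np.exp(0.5j * (beta2 / 2) * omega**2 * dz)   # half-step dispersion operator
        for _ in range(int(round(L / dz))):
            A = np.fft.ifft(half_disp * np.fft.fft(A))           # half linear step
            A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)       # full nonlinear (Kerr) step
            A = np.fft.ifft(half_disp * np.fft.fft(A))           # half linear step
        return A

    # Illustrative: a weak Gaussian pulse in a short, anomalous-dispersion fiber (SI units).
    t = np.linspace(-20e-12, 20e-12, 2**12)
    A0 = np.sqrt(0.5) * np.exp(-(t / 2e-12)**2)
    A_out = split_step_nlse(A0, dt=t[1] - t[0], L=100.0, dz=0.1,
                            beta2=-2e-26, gamma=1.3e-3)
    ```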

  2. Accurate numerical solution of compressible, linear stability equations

    NASA Technical Reports Server (NTRS)

    Malik, M. R.; Chuang, S.; Hussaini, M. Y.

    1982-01-01

    The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.

  3. Accurate numerical solutions for elastic-plastic models. [LMFBR

    SciTech Connect

    Schreyer, H. L.; Kulak, R. F.; Kramer, J. M.

    1980-03-01

    The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated.
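
    To make the algorithm family concrete, here is a textbook-style radial-return update for J2 (von Mises) plasticity with linear isotropic hardening in full 3-D. It is a generic sketch under assumed material constants and does not reproduce the plane-stress specialization or the elastic-predictor/radial-corrector variant analyzed in the report.

    ```python
    import numpy as np

    def radial_return(eps_new, eps_p_old, alpha_old, E=200e3, nu=0.3, sigma_y=250.0, H=1000.0):
        """One strain-driven update of J2 plasticity (small strain, linear isotropic hardening).

        eps_new   : total strain tensor (3x3) at the end of the step
        eps_p_old : plastic strain tensor (3x3) at the start of the step
        alpha_old : accumulated plastic strain (scalar)
        Stresses in MPa, strains dimensionless; all material values are assumed.
        """
        G = E / (2 * (1 + nu))
        K = E / (3 * (1 - 2 * nu))
        I = np.eye(3)

        eps_e = eps_new - eps_p_old                      # trial elastic strain
        vol = np.trace(eps_e)
        dev = eps_e - vol / 3 * I
        s_trial = 2 * G * dev                            # trial deviatoric stress
        norm_s = np.linalg.norm(s_trial)
        f_trial = norm_s - np.sqrt(2 / 3) * (sigma_y + H * alpha_old)

        if f_trial <= 0.0:                               # elastic step: accept trial state
            return K * vol * I + s_trial, eps_p_old, alpha_old

        n = s_trial / norm_s                             # return direction (radial)
        dgamma = f_trial / (2 * G + 2 / 3 * H)           # plastic multiplier from consistency
        s = s_trial - 2 * G * dgamma * n
        eps_p = eps_p_old + dgamma * n
        alpha = alpha_old + np.sqrt(2 / 3) * dgamma
        return K * vol * I + s, eps_p, alpha

    # Illustrative call: a uniaxial strain increment that exceeds the initial yield surface.
    sigma, eps_p, alpha = radial_return(np.diag([0.004, -0.0012, -0.0012]), np.zeros((3, 3)), 0.0)
    ```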

  4. The quiet revolution of numerical weather prediction.

    PubMed

    Bauer, Peter; Thorpe, Alan; Brunet, Gilbert

    2015-09-01

    Advances in numerical weather prediction represent a quiet revolution because they have resulted from a steady accumulation of scientific knowledge and technological advances over many years that, with only a few exceptions, have not been associated with the aura of fundamental physics breakthroughs. Nonetheless, the impact of numerical weather prediction is among the greatest of any area of physical science. As a computational problem, global weather prediction is comparable to the simulation of the human brain and of the evolution of the early Universe, and it is performed every day at major operational centres across the world.

  5. Passive samplers accurately predict PAH levels in resident crayfish.

    PubMed

    Paulik, L Blair; Smith, Brian W; Bergmann, Alan J; Sower, Greg J; Forsberg, Norman D; Teeguarden, Justin G; Anderson, Kim A

    2016-02-15

    Contamination of resident aquatic organisms is a major concern for environmental risk assessors. However, collecting organisms to estimate risk is often prohibitively time and resource-intensive. Passive sampling accurately estimates resident organism contamination, and it saves time and resources. This study used low density polyethylene (LDPE) passive water samplers to predict polycyclic aromatic hydrocarbon (PAH) levels in signal crayfish, Pacifastacus leniusculus. Resident crayfish were collected at 5 sites within and outside of the Portland Harbor Superfund Megasite (PHSM) in the Willamette River in Portland, Oregon. LDPE deployment was spatially and temporally paired with crayfish collection. Crayfish visceral and tail tissue, as well as water-deployed LDPE, were extracted and analyzed for 62 PAHs using GC-MS/MS. Freely-dissolved concentrations (Cfree) of PAHs in water were calculated from concentrations in LDPE. Carcinogenic risks were estimated for all crayfish tissues, using benzo[a]pyrene equivalent concentrations (BaPeq). ∑PAH were 5-20 times higher in viscera than in tails, and ∑BaPeq were 6-70 times higher in viscera than in tails. Eating only tail tissue of crayfish would therefore significantly reduce carcinogenic risk compared to also eating viscera. Additionally, PAH levels in crayfish were compared to levels in crayfish collected 10 years earlier. PAH levels in crayfish were higher upriver of the PHSM and unchanged within the PHSM after the 10-year period. Finally, a linear regression model predicted levels of 34 PAHs in crayfish viscera with an associated R-squared value of 0.52 (and a correlation coefficient of 0.72), using only the Cfree PAHs in water. On average, the model predicted PAH concentrations in crayfish tissue within a factor of 2.4 ± 1.8 of measured concentrations. This affirms that passive water sampling accurately estimates PAH contamination in crayfish. Furthermore, the strong predictive ability of this simple model suggests

  6. Towards an accurate understanding of UHMWPE visco-dynamic behaviour for numerical modelling of implants.

    PubMed

    Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges

    2014-04-01

    Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. During in-vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE lead to compressive deformations of the insert which are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This shows again the importance of including creep behaviour into a constitutive model in order to predict the right level of surface deformation
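
    A small sketch of the point made above about Archard-type wear laws: if the contact pressure (and hence area) evolves over the loading history, for example because of creep, the wear increment has to be integrated with the time-varying quantities rather than with a single time-invariant pressure. The wear coefficient, pressures and sliding history below are placeholders, not data from the study.

    ```python
    import numpy as np

    def archard_wear_depth(pressure_t, slide_t, k_wear=1.0e-9):
        """Accumulate linear wear depth h = sum k * p(t) * ds(t) over a loading history.

        pressure_t : contact pressure at each step [MPa] (time-varying, e.g. from a creep model)
        slide_t    : incremental sliding distance at each step [mm]
        k_wear     : dimensional wear coefficient (placeholder value)
        """
        return float(np.sum(k_wear * np.asarray(pressure_t) * np.asarray(slide_t)))

    # Placeholder gait cycle: pressure relaxes as the contact area grows under creep,
    # compared with a time-invariant assumption that keeps the initial peak pressure.
    steps = 100
    p_creep = np.linspace(25.0, 12.0, steps)      # MPa, decreasing as the contact area increases
    p_fixed = np.full(steps, 25.0)                # MPa, time-invariant assumption
    ds = np.full(steps, 0.2)                      # mm of sliding per step

    print(archard_wear_depth(p_creep, ds), archard_wear_depth(p_fixed, ds))
    ```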

  8. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  9. Plant diversity accurately predicts insect diversity in two tropical landscapes.

    PubMed

    Zhang, Kai; Lin, Siliang; Ji, Yinqiu; Yang, Chenxue; Wang, Xiaoyang; Yang, Chunyan; Wang, Hesheng; Jiang, Haisheng; Harrison, Rhett D; Yu, Douglas W

    2016-09-01

    Plant diversity surely determines arthropod diversity, but only moderate correlations between arthropod and plant species richness had been observed until Basset et al. (Science, 2012, 338, 1481) finally undertook an unprecedentedly comprehensive sampling of a tropical forest and demonstrated that plant species richness could indeed accurately predict arthropod species richness. We now require a high-throughput pipeline to operationalize this result so that we can (i) test competing explanations for tropical arthropod megadiversity, (ii) improve estimates of global eukaryotic species diversity, and (iii) use plant and arthropod communities as efficient proxies for each other, thus improving the efficiency of conservation planning and of detecting forest degradation and recovery. We therefore applied metabarcoding to Malaise-trap samples across two tropical landscapes in China. We demonstrate that plant species richness can accurately predict arthropod (mostly insect) species richness and that plant and insect community compositions are highly correlated, even in landscapes that are large, heterogeneous and anthropogenically modified. Finally, we review how metabarcoding makes feasible highly replicated tests of the major competing explanations for tropical megadiversity. PMID:27474399

  10. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGES

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; et al

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  11. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    SciTech Connect

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  13. Mouse models of human AML accurately predict chemotherapy response

    PubMed Central

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  14. Numerical simulation and prediction of implosion phenomena

    NASA Astrophysics Data System (ADS)

    Chen, J.; Dietrich, R. A.

    1992-10-01

    Using gas-liquid two phase flow theory, a modified mathematical model based on the computational fluid dynamics method SIMPLE (Semi Implicit Method for Pressure Linked Equations) is introduced to investigate implosion phenomena in high pressure chambers. For a characteristic physical model, the numerical results are obtained and analyzed, without referring to experimental data. Extensive calculations to predict the highest pressure on the chamber wall are performed under varying conditions such as the implosion pressure, the dimensions of the test models, and the height of the upper air layer. The efficiency of different highest pressure reduction methods is analyzed. The results of these simulations and predictions are shown in a series of plots.

  15. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
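
    A toy sketch of the two-step loop described above, with a stand-in "forecast model" and a Gaussian proposal; it shows only the general idea of (i) perturbing a parameter across ensemble members and (ii) feeding verification-based weights back into the proposal distribution, not the actual EPPES algorithm or its NWP implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def forecast(theta, x):
        """Stand-in forecast model with one tunable parameter theta (hypothetical)."""
        return theta * np.sin(x)

    def run_eppes_like(n_cycles=50, n_members=20, true_theta=0.7):
        mu, var = 2.0, 1.0                                   # initial proposal distribution
        x = np.linspace(0, 2 * np.pi, 64)
        for _ in range(n_cycles):
            obs = forecast(true_theta, x) + rng.normal(0, 0.1, x.size)  # verifying "observations"
            thetas = rng.normal(mu, np.sqrt(var), n_members)            # (i) perturb each member
            errs = np.array([np.mean((forecast(t, x) - obs) ** 2) for t in thetas])
            w = np.exp(-0.5 * errs / 0.1 ** 2)                          # likelihood-based weights
            w /= w.sum()
            mu = np.sum(w * thetas)                                     # (ii) update the proposal
            var = max(np.sum(w * (thetas - mu) ** 2), 1e-4)
        return mu, var

    print(run_eppes_like())   # the proposal mean should drift toward the "true" parameter
    ```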

  16. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions

    PubMed Central

    Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin

    2015-01-01

    Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale. PMID:26198229

  17. PredictSNP: robust and accurate consensus classifier for prediction of disease-related mutations.

    PubMed

    Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri

    2014-01-01

    Single nucleotide variants represent a prevalent form of genetic variation. Mutations in the coding regions are frequently associated with the development of various genetic diseases. Computational tools for the prediction of the effects of mutations on protein function are very important for analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement is hindered by large overlaps between the training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we have constructed three independent datasets by removing all duplicities, inconsistencies and mutations previously used in the training of evaluated tools. The benchmark dataset containing over 43,000 mutations was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best performing tools were combined into a consensus classifier PredictSNP, resulting in significantly improved prediction performance while returning results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp.
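
    A simple illustration of the consensus idea: the binary calls of several individual predictors are combined by a confidence-weighted majority vote, so a result is still returned when individual tools disagree or abstain. The tool names and weights below are placeholders, not the actual PredictSNP weighting scheme.

    ```python
    from typing import Dict, Optional

    def consensus_call(calls: Dict[str, Optional[bool]], weights: Dict[str, float]) -> bool:
        """Weighted majority vote over per-tool predictions (True = deleterious).

        Tools that return None (no prediction) are skipped, so the consensus
        still produces an answer for every mutation.
        """
        score = 0.0
        for tool, call in calls.items():
            if call is None:
                continue
            score += weights.get(tool, 1.0) * (1.0 if call else -1.0)
        return score > 0.0

    # Placeholder example with three hypothetical tools.
    calls = {"toolA": True, "toolB": False, "toolC": True}
    weights = {"toolA": 0.8, "toolB": 0.6, "toolC": 0.7}
    print(consensus_call(calls, weights))   # True: the weighted vote favours "deleterious"
    ```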

  18. Accurately Predicting Complex Reaction Kinetics from First Principles

    NASA Astrophysics Data System (ADS)

    Green, William

    Many important systems contain a multitude of reactive chemical species, some of which react on a timescale faster than collisional thermalization, i.e. they never achieve a Boltzmann energy distribution. Usually it is impossible to fully elucidate the processes by experiments alone. Here we report recent progress toward predicting the time-evolving composition of these systems a priori: how unexpected reactions can be discovered on the computer, how reaction rates are computed from first principles, and how the many individual reactions are efficiently combined into a predictive simulation for the whole system. Some experimental tests of the a priori predictions are also presented.

  19. Does more accurate exposure prediction necessarily improve health effect estimates?

    PubMed

    Szpiro, Adam A; Paciorek, Christopher J; Sheppard, Lianne

    2011-09-01

    A unique challenge in air pollution cohort studies and similar applications in environmental epidemiology is that exposure is not measured directly at subjects' locations. Instead, pollution data from monitoring stations at some distance from the study subjects are used to predict exposures, and these predicted exposures are used to estimate the health effect parameter of interest. It is usually assumed that minimizing the error in predicting the true exposure will improve health effect estimation. We show in a simulation study that this is not always the case. We interpret our results in light of recently developed statistical theory for measurement error, and we discuss implications for the design and analysis of epidemiologic research.

  20. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

    Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. To accentuate toughness differences, polymer-matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms
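
    For reference, the textbook Griffith/Irwin relations that connect the quantities listed above (these are the standard plane-stress forms, not the authors' corrected expressions) are

      K_{Ic} = Y\,\sigma\sqrt{\pi a}, \qquad
      \mathcal{G}_{Ic} = \frac{K_{Ic}^{2}}{E}, \qquad
      r_{p} = \frac{1}{2\pi}\left(\frac{K_{Ic}}{\sigma_{ys}}\right)^{2},

    so a strain-energy-release value such as SIc obtained by numerical integration of the load/deflection curve can be converted into an equivalent KIc once E, a, Y and σys are known.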

  1. Is Three-Dimensional Soft Tissue Prediction by Software Accurate?

    PubMed

    Nam, Ki-Uk; Hong, Jongrak

    2015-11-01

    The authors assessed whether virtual surgery, performed with a soft tissue prediction program, could correctly simulate the actual surgical outcome, focusing on soft tissue movement. Preoperative and postoperative computed tomography (CT) data for 29 patients, who had undergone orthognathic surgery, were obtained and analyzed using the Simplant Pro software. The program made a predicted soft tissue image (A) based on presurgical CT data. After the operation, we obtained actual postoperative CT data and an actual soft tissue image (B) was generated. Finally, the 2 images (A and B) were superimposed and the differences between A and B were analyzed. Results were grouped into 2 classes: absolute values and vector values. In the absolute values, the left mouth corner was the most significant error point (2.36 mm). The right mouth corner (2.28 mm), labrale inferius (2.08 mm), and the pogonion (2.03 mm) also had significant errors. In vector values, prediction of the right-left side had a left-sided tendency, the superior-inferior had a superior tendency, and the anterior-posterior showed an anterior tendency. As a result, with this program, the positions of points tended to be located more to the left, more anterior, and more superior than in the actual outcome. There is a need to improve the prediction accuracy for soft tissue images. Such software is particularly valuable in predicting craniofacial soft tissue landmarks, such as the pronasale. With this software, landmark positions were most inaccurate in terms of anterior-posterior predictions.

  2. Towards Accurate Ab Initio Predictions of the Spectrum of Methane

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Kwak, Dochan (Technical Monitor)

    2001-01-01

    We have carried out extensive ab initio calculations of the electronic structure of methane, and these results are used to compute vibrational energy levels. We include basis set extrapolations, core-valence correlation, relativistic effects, and Born-Oppenheimer breakdown terms in our calculations. Our ab initio predictions of the lowest lying levels are superb.

  3. Standardized EEG interpretation accurately predicts prognosis after cardiac arrest

    PubMed Central

    Rossetti, Andrea O.; van Rootselaar, Anne-Fleur; Wesenberg Kjaer, Troels; Horn, Janneke; Ullén, Susann; Friberg, Hans; Nielsen, Niklas; Rosén, Ingmar; Åneman, Anders; Erlinge, David; Gasche, Yvan; Hassager, Christian; Hovdenes, Jan; Kjaergaard, Jesper; Kuiper, Michael; Pellis, Tommaso; Stammet, Pascal; Wanscher, Michael; Wetterslev, Jørn; Wise, Matt P.; Cronberg, Tobias

    2016-01-01

    Objective: To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. Methods: In this cohort study, 4 EEG specialists, blinded to outcome, evaluated prospectively recorded EEGs in the Target Temperature Management trial (TTM trial) that randomized patients to 33°C vs 36°C. Routine EEG was performed in patients still comatose after rewarming. EEGs were classified into highly malignant (suppression, suppression with periodic discharges, burst-suppression), malignant (periodic or rhythmic patterns, pathological or nonreactive background), and benign EEG (absence of malignant features). Poor outcome was defined as best Cerebral Performance Category score 3–5 until 180 days. Results: Eight TTM sites randomized 202 patients. EEGs were recorded in 103 patients at a median 77 hours after cardiac arrest; 37% had a highly malignant EEG and all had a poor outcome (specificity 100%, sensitivity 50%). Any malignant EEG feature had a low specificity to predict poor prognosis (48%) but if 2 malignant EEG features were present specificity increased to 96% (p < 0.001). Specificity and sensitivity were not significantly affected by targeted temperature or sedation. A benign EEG was found in 1% of the patients with a poor outcome. Conclusions: Highly malignant EEG after rewarming reliably predicted poor outcome in half of patients without false predictions. An isolated finding of a single malignant feature did not predict poor outcome whereas a benign EEG was highly predictive of a good outcome. PMID:26865516

  4. PredictSNP: Robust and Accurate Consensus Classifier for Prediction of Disease-Related Mutations

    PubMed Central

    Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D.; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri

    2014-01-01

    Single nucleotide variants represent a prevalent form of genetic variation. Mutations in the coding regions are frequently associated with the development of various genetic diseases. Computational tools for the prediction of the effects of mutations on protein function are very important for analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement is hindered by large overlaps between the training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we have constructed three independent datasets by removing all duplicates, inconsistencies and mutations previously used in the training of the evaluated tools. The benchmark dataset containing over 43,000 mutations was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best-performing tools were combined into the consensus classifier PredictSNP, resulting in significantly improved prediction performance while at the same time returning results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp. PMID:24453961

  5. Accurate contact predictions using covariation techniques and machine learning

    PubMed Central

    Kosciolek, Tomasz

    2015-01-01

    Here we present the results of residue–residue contact predictions achieved in CASP11 by the CONSIP2 server, which is based around our MetaPSICOV contact prediction method. On a set of 40 target domains with a median family size of around 40 effective sequences, our server achieved an average top-L/5 long-range contact precision of 27%. The MetaPSICOV method is based on a combination of classical contact prediction features, enhanced with three distinct covariation methods embedded in a two-stage neural network predictor. Some unique features of our approach are (1) the tuning between the classical and covariation features depending on the depth of the input alignment and (2) a hybrid approach to generate the deepest possible multiple-sequence alignments by combining jackHMMer and HHblits. We discuss the CONSIP2 pipeline and our results, and show that where the method underperformed, the major factor was relying on a fixed set of parameters for the initial sequence alignments and not attempting to perform domain splitting as a preprocessing step. Proteins 2016; 84(Suppl 1):145–151. © 2015 The Authors. Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:26205532

  6. How Accurately Can We Predict Eclipses for Algol? (Poster abstract)

    NASA Astrophysics Data System (ADS)

    Turner, D.

    2016-06-01

    (Abstract only) beta Persei, or Algol, is a very well known eclipsing binary system consisting of a late B-type dwarf that is regularly eclipsed by a GK subgiant every 2.867 days. Eclipses, which last about 8 hours, are regular enough that predictions for times of minima are published in various places, Sky & Telescope magazine and The Observer's Handbook, for example. But eclipse minimum lasts for less than a half hour, whereas subtle mistakes in the current ephemeris for the star can result in predictions that are off by a few hours or more. The Algol system is fairly complex, with the Algol A and Algol B eclipsing system also orbited by Algol C with an orbital period of nearly 2 years. Added to that are complex long-term O-C variations with a periodicity of almost two centuries that, although suggested by Hoffmeister to be spurious, fit the type of light travel time variations expected for a fourth star also belonging to the system. The AB sub-system also undergoes mass transfer events that add complexities to its O-C behavior. Is it actually possible to predict precise times of eclipse minima for Algol months in advance given such complications, or is it better to encourage ongoing observations of the star so that O-C variations can be tracked in real time?

  7. Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because such measurements are rarely available in the detail necessary to be useful for high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.

  8. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting

    PubMed Central

    Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.

    2016-01-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, enabling transcripts to be tagged while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion, the intraclonal diversity index, which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error- and bias-corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518

  9. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today, such as the commonly used actuator disk concept, are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal-axis, devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  10. Towards numerically accurate many-body perturbation theory: Short-range correlation effects

    SciTech Connect

    Gulans, Andris

    2014-10-28

    The example of the uniform electron gas is used for showing that the short-range electron correlation is difficult to handle numerically, while it noticeably contributes to the self-energy. Nonetheless, in condensed-matter applications studied with advanced methods, such as the GW and random-phase approximations, it is common to neglect contributions due to high-momentum (large q) transfers. Then, the short-range correlation is poorly described, which leads to inaccurate correlation energies and quasiparticle spectra. To circumvent this problem, an accurate extrapolation scheme is proposed. It is based on an analytical derivation for the uniform electron gas presented in this paper, and it provides an explanation why accurate GW quasiparticle spectra are easy to obtain for some compounds and very difficult for others.

  11. Final Report for "Accurate Numerical Models of the Secondary Electron Yield from Grazing-incidence Collisions".

    SciTech Connect

    Seth A Veitzer

    2008-10-21

    Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in a HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.

  12. Accurate predictions for the production of vaporized water

    SciTech Connect

    Morin, E.; Montel, F.

    1995-12-31

    The production of water vaporized in the gas phase is controlled by the local conditions around the wellbore. The pressure gradient applied to the formation creates a sharp increase of the molar water content in the hydrocarbon phase approaching the well; this leads to a drop in the pore water saturation around the wellbore. The extent of the dehydrated zone which is formed is the key controlling the bottom-hole content of vaporized water. The maximum water content in the hydrocarbon phase at a given pressure, temperature and salinity is corrected by capillarity or adsorption phenomena depending on the actual water saturation. Describing the mass transfer of the water between the hydrocarbon phases and the aqueous phase into the tubing gives a clear idea of vaporization effects on the formation of scales. Field examples are presented for gas fields with temperatures ranging between 140°C and 180°C, where water vaporization effects are significant. Conditions for salt plugging in the tubing are predicted.

  13. Change in BMI accurately predicted by social exposure to acquaintances.

    PubMed

    Oloritun, Rahman O; Ouarda, Taha B M J; Moturu, Sai; Madan, Anmol; Pentland, Alex Sandy; Khayal, Inas

    2013-01-01

    Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, using social interaction data automatically captured via mobile phones together with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R². This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, and personal health-related information to explain change in BMI. This is the first study to take into account both social interactions at different levels of closeness and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also of social interactions with the people we are exposed to, even people we may not consider close friends.
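
    The variable-selection-then-regression workflow described above can be sketched in a few lines. The example below uses synthetic data, and its five columns are hypothetical stand-ins for the study variables (exposure to acquaintances, exposure to close friends, gender, diet, stress management); it illustrates the LASSO-plus-linear-regression pipeline, not the authors' actual analysis.

      import numpy as np
      from sklearn.linear_model import LassoCV, LinearRegression

      rng = np.random.default_rng(0)

      # Synthetic stand-in for the study data (42 students, 5 candidate predictors).
      X = rng.normal(size=(42, 5))
      true_beta = np.array([1.2, 0.1, 0.3, -0.4, -0.2])
      y = X @ true_beta + rng.normal(scale=0.5, size=42)      # "change in BMI"

      # Step 1: LASSO with cross-validation selects the predictive variables.
      lasso = LassoCV(cv=5).fit(X, y)
      selected = np.flatnonzero(lasso.coef_ != 0)

      # Step 2: refit an ordinary linear regression on the selected variables and
      # report its explained variance (the study reports R^2 = 0.68).
      ols = LinearRegression().fit(X[:, selected], y)
      print("selected columns:", selected, " R^2 =", round(ols.score(X[:, selected], y), 2))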

  14. Numerical prediction of freezing fronts in cryosurgery: comparison with experimental results.

    PubMed

    Fortin, André; Belhamadia, Youssef

    2005-08-01

    Recent developments in scientific computing now make it possible to consider realistic applications of numerical modelling in medicine. In this work, a numerical method is presented for the simulation of phase change occurring in cryosurgery applications. The ultimate goal of these simulations is to accurately predict the freezing front position and the thermal history inside the ice ball, which is essential to determine if cancerous cells have been completely destroyed. A semi-phase-field formulation including blood flow considerations is employed for the simulations. Numerical results are enhanced by the introduction of an anisotropic remeshing strategy. The numerical procedure is validated by comparing the predictions of the model with experimental results. PMID:16298846

  15. Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations

    SciTech Connect

    Bao, Weizhu. E-mail: bao@math.nus.edu.sg; Yang, Li. E-mail: yangli@nus.edu.sg

    2007-08-10

    In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are based on: (i) the application of a time-splitting spectral discretization for the Schroedinger-type equation in KGS; (ii) the utilization of a Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; and (iii) solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for the linear/nonlinear terms in the time derivatives. The numerical methods are either explicit, or implicit but explicitly solvable; they are unconditionally stable, with spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as dynamics of a 2D problem in KGS.
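
    A minimal illustration of ingredients (i) and (ii), the time-splitting Fourier pseudospectral step, is sketched below for a Schroedinger-type equation in which a frozen Gaussian potential stands in for the meson field; the coupled Klein-Gordon update and the damping terms of the full KGS schemes are not reproduced.

      import numpy as np

      # Split-step (Strang) Fourier sketch for i*psi_t = -0.5*psi_xx + phi*psi.
      L, N, dt, steps = 40.0, 256, 1e-3, 1000
      x = np.linspace(-L / 2, L / 2, N, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
      phi = np.exp(-x**2)                                # frozen meson-field stand-in
      psi = np.exp(-(x + 5.0)**2) * np.exp(2j * x)       # initial wave packet

      for _ in range(steps):
          psi *= np.exp(-0.5j * dt * phi)                # half potential step
          psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))  # kinetic step
          psi *= np.exp(-0.5j * dt * phi)                # half potential step

      mass = np.sum(np.abs(psi)**2) * (L / N)
      print("mass after %d steps (conserved to machine precision): %.12f" % (steps, mass))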

  16. Sub-kilometer Numerical Weather Prediction in complex urban areas

    NASA Astrophysics Data System (ADS)

    Leroyer, S.; Bélair, S.; Husain, S.; Vionnet, V.

    2013-12-01

    A sub-kilometer atmospheric modeling system with grid spacings of 2.5 km, 1 km and 250 m and including urban processes is currently being developed at the Meteorological Service of Canada (MSC) in order to provide more accurate weather forecasts at the city scale. Atmospheric lateral boundary conditions are provided by the 15-km Canadian Regional Deterministic Prediction System (RDPS). Surface physical processes are represented with the Town Energy Balance (TEB) model for built-up covers and with the Interactions between the Surface, Biosphere, and Atmosphere (ISBA) land surface model for natural covers. In this study, several research experiments over large metropolitan areas and using observational networks at the urban scale are presented, with a special emphasis on the representation of local atmospheric circulations and their impact on extreme weather forecasting. First, numerical simulations are performed over the Vancouver metropolitan area during a summertime Intense Observing Period (IOP of 14-15 August 2008) of the Environmental Prediction in Canadian Cities (EPiCC) observational network. The influence of the horizontal resolution on the fine-scale representation of the sea-breeze development over the city is highlighted (Leroyer et al., 2013). Then severe storm cases occurring in summertime within the Greater Toronto Area (GTA) are simulated. In view of supporting the 2015 Pan American and Parapan American Games to be held in the GTA, a dense observational network has recently been deployed over this region to support model evaluations at the urban and meso scales. In particular, simulations are conducted for the case of 8 July 2013, when exceptional rainfall was recorded. Leroyer, S., S. Bélair, J. Mailhot, S.Z. Husain, 2013: Sub-kilometer Numerical Weather Prediction in an Urban Coastal Area: A case study over the Vancouver Metropolitan Area, submitted to Journal of Applied Meteorology and Climatology.

  17. A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation

    NASA Astrophysics Data System (ADS)

    Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin

    2016-07-01

    In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model, which is a nonlinear singular ordinary differential equation on a semi-infinite interval. This problem, using the quasilinearization method (QLM), is converted to a sequence of linear ordinary differential equations in order to obtain the solution. For the first time, the rational Euler (RE) functions and the FRE have been constructed based on Euler polynomials. In addition, the equation is solved on a semi-infinite domain without truncating it to a finite domain, by taking the FRE as basis functions for the collocation method. This method reduces the solution of this problem to the solution of a system of algebraic equations. We demonstrate that the newly proposed algorithm is efficient for obtaining the values of y'(0), y(x) and y'(x). Comparison with some numerical and analytical solutions shows that the present solution is highly accurate.
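
    The quasilinearization step referred to above has a standard form. Writing the Thomas-Fermi equation as y''(x) = y^{3/2}(x)/\sqrt{x} with y(0) = 1 and y(\infty) = 0, each QLM iterate solves the linear problem

      y_{k+1}''(x) = \frac{y_k^{3/2}(x)}{\sqrt{x}}
        + \frac{3}{2}\,\frac{\sqrt{y_k(x)}}{\sqrt{x}}\,\bigl(y_{k+1}(x) - y_k(x)\bigr),
      \qquad y_{k+1}(0) = 1, \quad y_{k+1}(\infty) = 0,

    and it is this sequence of linear equations that is discretized by collocation in the FRE basis; the particular basis functions and collocation points are the contribution of the paper and are not restated here.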

  18. Numerical prediction of turbulent oscillating flow and associated heat transfer

    NASA Technical Reports Server (NTRS)

    Koehler, W. J.; Patankar, S. V.; Ibele, W. E.

    1991-01-01

    A crucial point for further development of engines is the optimization of their heat exchangers, which operate under oscillatory flow conditions. It has been found that the most important thermodynamic uncertainties in the Stirling engine designs for space power are in the heat transfer between gas and metal in all engine components and in the pressure drop across the heat exchanger components. So far, performance codes cannot predict the power output of a Stirling engine accurately enough when applied to a wide variety of engines. Thus, there is a strong need for better performance codes. However, a performance code is not concerned with the details of the flow. This information must be provided externally. While analytical relationships exist for laminar oscillating flow, there has been hardly any information about transitional and turbulent oscillating flow which could be introduced into the performance codes. In 1986, a survey by Seume and Simon revealed that most Stirling engine heat exchangers operate in the transitional and turbulent regime. Consequently, research has since focused on the unresolved issue of transitional and turbulent oscillating flow and heat transfer. Since 1988, the University of Minnesota oscillating flow facility has obtained experimental data about transitional and turbulent oscillating flow. However, since the experiments in this field are extremely difficult, lengthy, and expensive, it is advantageous to numerically simulate the flow and heat transfer accurately from first principles. Work done at the University of Minnesota on the development of such a numerical simulation is summarized.

  19. Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.

    PubMed

    Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique

    2013-06-01

    The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a well-founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. Consequently, this leads to concrete recommendations, whereby the obtained results are discussed not for their medical relevance but for the evaluation of their quality. This investigation will hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530

  20. Keeping the edge: an accurate numerical method to solve the stream power law

    NASA Astrophysics Data System (ADS)

    Campforts, B.; Govers, G.

    2015-12-01

    Bedrock rivers set the base level of surrounding hill slopes and mediate the dynamic interplay between mountain building and denudation. The propensity of rivers to preserve pulses of increased tectonic uplift also allows long-term uplift histories to be reconstructed from longitudinal river profiles. An accurate reconstruction of river profile development at different timescales is therefore essential. Long-term river development is typically modeled by means of the stream power law. Under specific conditions this equation can be solved analytically, but numerical Finite Difference Methods (FDMs) are most frequently used. Nonetheless, FDMs suffer from numerical smearing, especially at knickpoint zones, which are key to understanding transient landscapes. Here, we solve the stream power law by means of a Finite Volume Method (FVM) which is Total Variation Diminishing (TVD). TVD methods are designed to capture sharp discontinuities, making them very suitable for modeling river incision. In contrast to FDMs, the TVD_FVM is well capable of preserving knickpoints, as illustrated for the fast propagating Niagara Falls. Moreover, we show that the TVD_FVM performs much better when reconstructing uplift at timescales exceeding 100 Myr, using Eastern Australia as an example. Finally, uncertainty associated with parameter calibration is dramatically reduced when the TVD_FVM is applied. Therefore, the use of a TVD_FVM to understand long-term landscape evolution is an important addition to the toolbox at the disposal of geomorphologists.
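
    For orientation, the sketch below is the classical explicit upwind finite-difference baseline for the detachment-limited stream power law, i.e. the kind of FDM the abstract says smears knickpoints; the TVD finite volume scheme proposed in the paper is not reproduced. All parameter values and the Hack-type drainage area law are illustrative.

      import numpy as np

      # dz/dt = U - K * A^m * |dz/dx|^n, explicit upwind finite differences.
      nx, dx, dt, nsteps = 200, 500.0, 50.0, 20000            # cells, m, yr, steps
      U, K, m, n = 1.0e-3, 2.0e-5, 0.5, 1.0                   # uplift rate, erodibility
      x = np.arange(nx) * dx                                  # distance from the outlet
      A = 1.0e6 + 10.0 * (x[-1] - x)**1.7                     # drainage area, largest at outlet
      z = np.linspace(0.0, 400.0, nx)                         # initial profile

      for _ in range(nsteps):
          S = (z[1:] - z[:-1]) / dx                           # downstream (upwind) slope
          z[1:] += dt * (U - K * A[1:]**m * np.abs(S)**n)
          z[0] = 0.0                                          # fixed base level at the outlet

      print("relief after %.1f Myr: %.1f m" % (nsteps * dt / 1.0e6, z[-1]))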

  1. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
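
    The constant-conditions building block mentioned above is the classical modal (Booth-type) solution for diffusion out of a spherical grain. The sketch below evaluates that series for an initially uniform concentration and a perfect-sink boundary; the polynomial corrective terms that make PolyPole-1 accurate under time-varying conditions are not reproduced, and the numerical values are illustrative.

      import numpy as np

      def fractional_release(D, a, t, nmodes=50):
          # Modal series for gas release from a spherical grain of radius a [m]
          # with diffusion coefficient D [m^2/s], assuming an initially uniform
          # concentration and zero concentration at the grain boundary.
          n = np.arange(1, nmodes + 1)
          tau = D * t / a**2
          return 1.0 - (6.0 / np.pi**2) * np.sum(np.exp(-(n * np.pi)**2 * tau) / n**2)

      # Example: roughly one year of diffusion in a 5-micron grain.
      print(fractional_release(D=1.0e-19, a=5.0e-6, t=3.15e7))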

  2. Numerical prediction of flow in slender vortices

    NASA Technical Reports Server (NTRS)

    Reyna, Luis G.; Menne, Stefan

    1988-01-01

    The slender vortex approximation was investigated using the Navier-Stokes equations written in cylindrical coordinates. It is shown that, for free vortices without external pressure gradient, the breakdown length is proportional to the Reynolds number. For free vortices with adverse pressure gradients, the breakdown length is inversely proportional to the value of its gradient. For low Reynolds numbers, the predictions of the simplified system agreed well with the ones obtained from solutions of the full Navier-Stokes equations, whereas for high Reynolds numbers, the flow became quite sensitive to pressure fluctuations; it was found that the failure of the slender vortex equations corresponded to the critical condition as identified by Benjamin (1962) for inviscid flows. The predictions obtained from the approximating system were compared with available experimental results. For low swirl, a good agreement was obtained; for high swirl, on the other hand, upstream effects on the pressure gradient produced by the breakdown bubble caused poor agreement.

  3. Development of a New Model for Accurate Prediction of Cloud Water Deposition on Vegetation

    NASA Astrophysics Data System (ADS)

    Katata, G.; Nagai, H.; Wrzesinsky, T.; Klemm, O.; Eugster, W.; Burkard, R.

    2006-12-01

    Scarcity of water resources in arid and semi-arid areas is of great concern in the light of population growth and food shortages. Several experiments focusing on cloud (fog) water deposition on the land surface suggest that cloud water plays an important role in water resources in such regions. A one-dimensional vegetation model including the process of cloud water deposition on vegetation has been developed to better predict cloud water deposition. New schemes to calculate the capture efficiency of leaves, the cloud droplet size distribution, and the gravitational flux of cloud water were incorporated in the model. Model calculations were compared with the data acquired at the Norway spruce forest at the Waldstein site, Germany. High performance of the model was confirmed by comparisons of calculated net radiation, sensible and latent heat, and cloud water fluxes over the forest with measurements. The present model provided a better prediction of measured turbulent and gravitational fluxes of cloud water over the canopy than the Lovett model, which is a commonly used cloud water deposition model. Detailed calculations of evapotranspiration and of turbulent exchange of heat and water vapor within the canopy and the modifications are necessary for accurate prediction of cloud water deposition. Numerical experiments to examine the dependence of cloud water deposition on the vegetation species (coniferous and broad-leaved trees, flat and cylindrical grasses) and structures (Leaf Area Index (LAI) and canopy height) are performed using the presented model. The results indicate that the differences of leaf shape and size have a large impact on cloud water deposition. Cloud water deposition also varies with the growth of vegetation and seasonal change of LAI. We found that the coniferous trees whose height and LAI are 24 m and 2.0 m2m-2, respectively, produce the largest amount of cloud water deposition in all combinations of vegetation species and structures in the

  4. A numerical solution of Duffing's equations including the prediction of jump phenomena

    NASA Technical Reports Server (NTRS)

    Moyer, E. T., Jr.; Ghasghai-Abdi, E.

    1987-01-01

    Numerical methodology for the solution of Duffing's differential equation is presented. Algorithms for the prediction of multiple equilibrium solutions and jump phenomena are developed. In addition, a filtering algorithm for producing steady state solutions is presented. The problem of a rigidly clamped circular plate subjected to cosinusoidal pressure loading is solved using the developed algorithms (the plate is assumed to be in the geometrically nonlinear range). The results accurately predict regions of solution multiplicity and jump phenomena.
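
    The jump phenomenon itself is straightforward to reproduce numerically for the classical single-degree-of-freedom Duffing oscillator, a simpler setting than the clamped-plate problem of the paper. The sketch below uses illustrative parameter values in the hardening (bistable) regime and sweeps the forcing frequency up and then down, carrying the final state of each run into the next, so the up- and down-sweep steady-state amplitudes differ in the bistable band.

      import numpy as np
      from scipy.integrate import solve_ivp

      # x'' + d*x' + a*x + b*x^3 = g*cos(w*t)
      d, a, b, g = 0.1, 1.0, 1.0, 0.3

      def settle(w, y0, t_end=300.0):
          rhs = lambda t, y: [y[1], -d * y[1] - a * y[0] - b * y[0]**3 + g * np.cos(w * t)]
          sol = solve_ivp(rhs, (0.0, t_end), y0, max_step=0.05, dense_output=True)
          t = np.linspace(t_end - 60.0, t_end, 2000)
          return np.abs(sol.sol(t)[0]).max(), sol.y[:, -1]

      freqs = np.linspace(1.0, 2.2, 25)
      state, up = [0.0, 0.0], []
      for w in freqs:                        # upward frequency sweep
          amp, state = settle(w, state)
          up.append(amp)
      state, down = [0.0, 0.0], []
      for w in freqs[::-1]:                  # downward frequency sweep
          amp, state = settle(w, state)
          down.append(amp)
      down = down[::-1]

      for w, au, ad in zip(freqs, up, down):
          print("w = %.2f   up-sweep amp = %.2f   down-sweep amp = %.2f" % (w, au, ad))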

  5. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second order accurate on a smooth flow and preserves ∇·B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
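
    The essence of the interpolation substep can be shown on a one-dimensional periodic toy problem: the large mean orbital shift per timestep is applied as an exact integer-cell roll plus an interpolation for the fractional remainder, so the Courant limit involves only the residual (peculiar) velocity. Linear interpolation is used below for brevity, and the staggered-mesh treatment of B described in the paper is not reproduced.

      import numpy as np

      N = 256
      x = np.linspace(0.0, 1.0, N, endpoint=False)
      q0 = np.exp(-((x - 0.5) / 0.05)**2)        # advected profile
      shift_cells = 37.3                          # mean orbital shift per step, in cells

      n_int = int(np.floor(shift_cells))          # integer part: exact circular shift
      frac = shift_cells - n_int                  # fractional part: interpolation substep
      q = np.roll(q0, n_int)
      q = (1.0 - frac) * q + frac * np.roll(q, 1)

      print("mass exactly preserved:", np.isclose(q.sum(), q0.sum()))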

  6. Spray combustion experiments and numerical predictions

    NASA Technical Reports Server (NTRS)

    Mularz, Edward J.; Bulzan, Daniel L.; Chen, Kuo-Huey

    1993-01-01

    The next generation of commercial aircraft will include turbofan engines with performance significantly better than those in the current fleet. Control of particulate and gaseous emissions will also be an integral part of the engine design criteria. These performance and emission requirements present a technical challenge for the combustor: control of the fuel and air mixing and control of the local stoichiometry will have to be maintained much more rigorously than with combustors in current production. A better understanding of the flow physics of liquid fuel spray combustion is necessary. This paper describes recent experiments on spray combustion where detailed measurements of the spray characteristics were made, including local drop-size distributions and velocities. Also, an advanced combustor CFD code has been under development and predictions from this code are compared with experimental results. Studies such as these will provide information to the advanced combustor designer on fuel spray quality and mixing effectiveness. Validation of new fast, robust, and efficient CFD codes will also enable the combustor designer to use them as additional design tools for optimization of combustor concepts for the next generation of aircraft engines.

  7. Numerical Prediction of Dust. Chapter 10

    NASA Technical Reports Server (NTRS)

    Benedetti, Angela; Baldasano, J. M.; Basart, S.; Benincasa, F.; Boucher, O.; Brooks, M.; Chen, J. P.; Colarco, P. R.; Gong, S.; Huneeus, N.; Jones, L; Lu, S.; Menut, L.; Mulcahy, J.; Nickovic, S.; Morcrette, J.-J.; Perez, C.; Reid, J. S.; Sekiyama, T. T.; Tanaka, T.; Terradellas, E.; Westphal, D. L.; Zhang, X.-Y.; Zhou, C.-H.

    2013-01-01

    Scientific observations and results are presented, along with numerous illustrations. This work has an interdisciplinary appeal and will engage scholars in geology, geography, chemistry, meteorology and physics, amongst others with an interest in the Earth system and environmental change.

  8. Numerical Prediction of Signal for Magnetic Flux Leakage Benchmark Task

    NASA Astrophysics Data System (ADS)

    Lunin, V.; Alexeevsky, D.

    2003-03-01

    Numerical results predicted by a code based on the finite element method are presented. The nonlinear, time-dependent magnetic benchmark problem proposed by the World Federation of Nondestructive Evaluation Centers involves numerical prediction of the normal (radial) component of the leaked field in the vicinity of two practically rectangular notches machined on a rotating steel pipe (with a known nonlinear magnetic characteristic). One notch is located on the external surface of the pipe and the other on the internal surface; both are oriented axially.

  9. Accurate First-Principles Spectra Predictions for Planetological and Astrophysical Applications at Various T-Conditions

    NASA Astrophysics Data System (ADS)

    Rey, M.; Nikitin, A. V.; Tyuterev, V.

    2014-06-01

    Knowledge of near-infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs and other astrophysical applications [1,2,3]. For example, to analyze exoplanets, atmospheric models have been developed, creating the need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on having adequate and reliable molecular data under extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and line-by-line extraction is clearly not feasible in laboratory measurements. It is thus suggested that this large amount of data could be interpreted only by reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first one is based on empirically-fitted effective spectroscopic models. Another way of computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. They do not yet reach spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions, including resonance couplings, in a wide spectral range. The final aim of this work is to provide reliable predictions which could be quantitatively accurate with respect to the precision of available observations and as complete as possible. All this thus requires extensive first-principles quantum mechanical calculations essentially based on three necessary ingredients, which are (i) accurate intramolecular potential energy surface and dipole moment surface components well-defined over a large range of vibrational displacements and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical

  10. Numerical Modelling and Prediction of Erosion Induced by Hydrodynamic Cavitation

    NASA Astrophysics Data System (ADS)

    Peters, A.; Lantermann, U.; el Moctar, O.

    2015-12-01

    The present work aims to predict cavitation erosion using a numerical flow solver together with a newly developed erosion model. The erosion model is based on the hypothesis that collapses of single cavitation bubbles near solid boundaries form high-velocity microjets, which cause sonic impacts with high pressure amplitudes that damage the surface. The erosion model uses information from a numerical Euler-Euler flow simulation to predict erosion-sensitive areas and assess the erosion aggressiveness of the flow. The obtained numerical results were compared to experimental results from tests of an axisymmetric nozzle.

  11. Accurate numerical forward model for optimal retracking of SIRAL2 SAR echoes over open ocean

    NASA Astrophysics Data System (ADS)

    Phalippou, L.; Demeestere, F.

    2011-12-01

    The SAR mode of SIRAL-2 on board Cryosat-2 has been designed to measure primarily sea-ice and continental ice (Wingham et al. 2005). In 2005, K. Raney (KR, 2005) pointed out the improvements brought by SAR altimeters for the open ocean. KR's results were mostly based on 'rule of thumb' considerations of speckle noise reduction due to the higher PRF and to speckle decorrelation after SAR processing. In 2007, Phalippou and Enjolras (PE, 2007) provided the theoretical background for optimal retracking of SAR echoes over the ocean, with a focus on the forward modelling of the power waveforms. The accuracies of geophysical parameters (range, significant wave height, and backscattering coefficient) retrieved from SAR altimeter data were derived accounting for accurate modelling of the SAR echo shape and speckle noise. The step forward to optimal retracking using a numerical forward model (NFM) was also pointed out. NFM of the power waveform avoids analytical approximation, a guarantee of minimising the geophysically dependent biases in the retrieval. NFMs have been used for many years, in operational meteorology in particular, for retrieving temperature and humidity profiles from IR and microwave radiometers, as the radiative transfer function is complex (Eyre, 1989). So far this technique was not used in the field of conventional ocean altimetry, as analytical models (e.g. Brown's model) were found to give sufficient accuracy. However, although NFM seems desirable even for conventional nadir altimetry, it becomes inevitable if one wishes to process SAR altimeter data, as the transfer function is too complex to be approximated by a simple analytical function. This was clearly demonstrated in PE 2007. The paper describes the background to SAR data retracking over the open ocean. Since PE 2007, improvements have been brought to the forward model, and it is shown that the altimeter on-ground and in-flight characterisation (e.g. antenna pattern range impulse response, azimuth impulse response

  12. Mind-set and close relationships: when bias leads to (In)accurate predictions.

    PubMed

    Gagné, F M; Lydon, J E

    2001-07-01

    The authors investigated whether mind-set influences the accuracy of relationship predictions. Because people are more biased in their information processing when thinking about implementing an important goal, relationship predictions made in an implemental mind-set were expected to be less accurate than those made in a more impartial deliberative mind-set. In Study 1, open-ended thoughts of students about to leave for university were coded for mind-set. In Study 2, mind-set about a major life goal was assessed using a self-report measure. In Study 3, mind-set was experimentally manipulated. Overall, mind-set interacted with forecasts to predict relationship survival. Forecasts were more accurate in a deliberative mind-set than in an implemental mind-set. This effect was more pronounced for long-term than for short-term relationship survival. Finally, deliberatives were not pessimistic; implementals were unduly optimistic.

  13. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases makes it possible to take decisions before the symptoms occur, such as taking drugs to avoid the symptoms or activating medical alarms. The prediction horizon is in this case an important parameter, as it must accommodate the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits for a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the prediction horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782

  14. SIFTER search: a web server for accurate phylogeny-based protein function prediction.

    PubMed

    Sahraeian, Sayed M; Luo, Kevin R; Brenner, Steven E

    2015-07-01

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  15. Accurate predictions of dielectrophoretic force and torque on particles with strong mutual field, particle, and wall interactions

    NASA Astrophysics Data System (ADS)

    Liu, Qianlong; Reifsnider, Kenneth

    2012-11-01

    The basis of dielectrophoresis (DEP) is the prediction of the force and torque on particles. The classical approach to this prediction is based on the effective moment method, which, however, is an approximate approach that assumes infinitesimal particles. Therefore, it is well known that for finite-sized particles the DEP approximation becomes inaccurate as the mutual field, particle, and wall interactions become strong, a situation presently attracting extensive research for practically significant applications. In the present talk, we provide accurate calculations of the force and torque on the particles from first principles, by directly resolving the local geometry and properties and accurately accounting for the mutual interactions for finite-sized particles with both dielectric polarization and conduction in a sinusoidally steady-state electric field. Since the approach has a significant advantage over other numerical methods in efficiently simulating many closely packed particles, it provides an important, unique, and accurate technique for investigating complex DEP phenomena, for example heterogeneous mixtures containing particle chains, nanoparticle assembly, biological cells, non-spherical effects, etc. This study was supported by the Department of Energy under funding for an EFRC (the HeteroFoaM Center), grant no. DE-SC0001061.
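
    For context, the effective (point-dipole) moment expressions that the abstract identifies as the classical approximation are, for a homogeneous sphere of radius R in a medium of complex permittivity \varepsilon_m^{*} (these are the standard textbook formulas, not the first-principles results of the talk):

      \mathbf{F}_{\mathrm{DEP}} = 2\pi\varepsilon_m R^{3}\,
        \mathrm{Re}\!\left[K(\omega)\right]\,\nabla\lvert\mathbf{E}_{\mathrm{rms}}\rvert^{2},
      \qquad
      K(\omega) = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}{\varepsilon_p^{*}+2\varepsilon_m^{*}},
      \qquad
      \varepsilon^{*} = \varepsilon - i\,\frac{\sigma}{\omega},

    and it is precisely the breakdown of this infinitesimal-dipole picture for finite-sized, closely interacting particles near walls that motivates the first-principles calculations described above.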

  16. Towards Bridging the Gaps in Holistic Transition Prediction via Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Li, Fei; Duan, Lian; Chang, Chau-Lyan; Carpenter, Mark H.; Streett, Craig L.; Malik, Mujeeb R.

    2013-01-01

    The economic and environmental benefits of laminar flow technology via reduced fuel burn of subsonic and supersonic aircraft cannot be realized without minimizing the uncertainty in drag prediction in general and transition prediction in particular. Transition research under NASA's Aeronautical Sciences Project seeks to develop a validated set of variable fidelity prediction tools with known strengths and limitations, so as to enable "sufficiently" accurate transition prediction and practical transition control for future vehicle concepts. This paper provides a summary of selected research activities targeting the current gaps in high-fidelity transition prediction, specifically those related to the receptivity and laminar breakdown phases of crossflow induced transition in a subsonic swept-wing boundary layer. The results of direct numerical simulations are used to obtain an enhanced understanding of the laminar breakdown region as well as to validate reduced order prediction methods.

  17. A Single Linear Prediction Filter that Accurately Predicts the AL Index

    NASA Astrophysics Data System (ADS)

    McPherron, R. L.; Chu, X.

    2015-12-01

    The AL index is a measure of the strength of the westward electrojet flowing along the auroral oval. It has two components: one from the global DP-2 current system and a second from the DP-1 current that is more localized near midnight. It is generally believed that the index is a very poor measure of these currents because of its dependence on the distance of stations from the source of the two currents. In fact, over season and solar cycle the coupling strength, defined as the steady-state ratio of the output AL to the input coupling function, varies by a factor of four. There are four factors that lead to this variation. First is the equinoctial effect, which modulates coupling strength with peaks (strongest coupling) at the equinoxes. Second is the saturation of the polar cap potential, which decreases coupling strength as the strength of the driver increases. Since saturation occurs more frequently at solar maximum, we obtain the result that maximum coupling strength occurs at equinox at solar minimum. A third factor is ionospheric conductivity, with stronger coupling at summer solstice than at winter solstice. The fourth factor is the definition of a solar wind coupling function appropriate to a given index. We have developed an optimum coupling function, depending on solar wind speed, density, transverse magnetic field, and IMF clock angle, that is better than previous functions. Using it, we have determined the seasonal variation of coupling strength and developed an inverse function that modulates the optimum coupling function so that all seasonal variation is removed. In a similar manner we have determined the dependence of coupling strength on solar wind driver strength. The inverse of this function is used to scale a linear prediction filter, thus eliminating the dependence on driver strength. Our result is a single linear filter that is adjusted in a nonlinear manner by driver strength and an optimum coupling function that is seasonally modulated. Together this
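
    A linear prediction filter of this kind convolves a solar wind coupling function with a fixed impulse response to produce the predicted index; the nonlinear adjustments described above rescale the input before the convolution. The following sketch, in Python, uses entirely hypothetical filter coefficients and correction factors to illustrate that structure; it is not the authors' calibrated filter.

      import numpy as np

      def predict_al(coupling, impulse_response, seasonal_factor, driver_factor):
          # coupling: solar wind coupling function time series (1-D array)
          # impulse_response: linear prediction filter coefficients (hypothetical here)
          # seasonal_factor, driver_factor: inverse corrections removing seasonal and
          # driver-strength dependence before the single linear filter is applied
          adjusted = coupling * seasonal_factor * driver_factor
          return np.convolve(adjusted, impulse_response, mode="full")[: len(coupling)]

      # Hypothetical example: hourly cadence, exponentially decaying 10-tap filter
      t = np.arange(200)
      coupling = 5.0 + 2.0 * np.sin(2 * np.pi * t / 24.0)     # made-up driver
      h = np.exp(-np.arange(10) / 3.0)                         # made-up impulse response
      al_pred = predict_al(coupling, h, np.ones_like(coupling), np.ones_like(coupling))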

  18. A review of the kinetic detail required for accurate predictions of normal shock waves

    NASA Technical Reports Server (NTRS)

    Muntz, E. P.; Erwin, Daniel A.; Pham-Van-diep, Gerald C.

    1991-01-01

    Several aspects of the kinetic models used in the collision phase of Monte Carlo direct simulations have been studied. Accurate molecular velocity distribution function predictions require a significantly increased number of computational cells in one maximum slope shock thickness, compared to predictions of macroscopic properties. The shape of the highly repulsive portion of the interatomic potential for argon is not well modeled by conventional interatomic potentials; this portion of the potential controls high Mach number shock thickness predictions, indicating that the specification of the energetic repulsive portion of interatomic or intermolecular potentials must be chosen with care for correct modeling of nonequilibrium flows at high temperatures. It has been shown for inverse power potentials that the assumption of variable hard sphere scattering provides accurate predictions of the macroscopic properties in shock waves, by comparison with simulations in which differential scattering is employed in the collision phase. On the other hand, velocity distribution functions are not well predicted by the variable hard sphere scattering model for softer potentials at higher Mach numbers.

  19. Can phenological models predict tree phenology accurately under climate change conditions?

    NASA Astrophysics Data System (ADS)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has advanced globally by 2.3 days per decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on the distribution and productivity of forest trees, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and assume that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break that varies from year to year. So far, one-phase models have been able to predict tree budbreak and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of budbreak and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay

  20. On the very accurate numerical evaluation of the Generalized Fermi-Dirac Integrals

    NASA Astrophysics Data System (ADS)

    Mohankumar, N.; Natarajan, A.

    2016-10-01

    We indicate a new and very accurate algorithm for the evaluation of the Generalized Fermi-Dirac Integral with a relative error less than 10⁻²⁰. The method involves Double Exponential, Trapezoidal and Gauss-Legendre quadratures. For the residue correction of the Gauss-Legendre scheme, a simple and precise continued fraction algorithm is used.
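
    For orientation, the generalized Fermi-Dirac integral in question is commonly written as F_k(η, θ) = ∫₀^∞ x^k √(1 + θx/2) / (e^(x−η) + 1) dx. The Python sketch below evaluates it with an off-the-shelf adaptive quadrature purely for reference; it does not reproduce the Double Exponential/Gauss-Legendre scheme or the 10⁻²⁰ relative accuracy reported above.

      import numpy as np
      from scipy.integrate import quad

      def generalized_fermi_dirac(k, eta, theta):
          # Reference evaluation of F_k(eta, theta) by adaptive quadrature (double precision only)
          def integrand(x):
              expo = np.minimum(x - eta, 700.0)       # cap the exponent; the far tail is negligible
              return x**k * np.sqrt(1.0 + 0.5 * theta * x) / (np.exp(expo) + 1.0)
          split = max(eta, 0.0)                        # split at the approximate Fermi edge
          v1, _ = quad(integrand, 0.0, split, limit=200)
          v2, _ = quad(integrand, split, np.inf, limit=200)
          return v1 + v2

      # Example: F_{1/2}(eta = 10, theta = 0.1), typical of stellar equation-of-state work
      print(generalized_fermi_dirac(0.5, 10.0, 0.1))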

  1. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
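
    As a point of reference for the class of schemes discussed, the Python sketch below implements one standard explicit third-order Runge-Kutta method (Kutta's classical third-order tableau). The report derives a family of such methods and analyzes their stiff stability, which this generic example makes no attempt to reproduce.

      import numpy as np

      def rk3_step(f, t, y, h):
          # One step of Kutta's classical third-order Runge-Kutta method (local error O(h^4))
          k1 = f(t, y)
          k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
          k3 = f(t + h, y + h * (-k1 + 2.0 * k2))
          return y + (h / 6.0) * (k1 + 4.0 * k2 + k3)

      # Example: y' = -y, y(0) = 1, integrated to t = 1 and compared with the exact solution
      f = lambda t, y: -y
      y, t, h = np.array([1.0]), 0.0, 0.1
      for _ in range(10):
          y = rk3_step(f, t, y, h)
          t += h
      print(t, y, np.exp(-t))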

  2. Spectrally accurate numerical solution of the single-particle Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Batcho, P. F.

    1998-06-01

    We have formulated a three-dimensional fully numerical (i.e., chemical basis-set free) method and applied it to the solution of the single-particle Schrödinger equation. The numerical method combines the rapid ``exponential'' convergence rates of spectral methods with the geometric flexibility of finite-element methods and can be viewed as an extension of the spectral element method. Singularities associated with multicenter systems are efficiently integrated by a Duffy transformation and the discrete operator is formulated by a variational statement. The method is applicable to molecular modeling for quantum chemical calculations on polyatomic systems. The complete system is shown to be efficiently inverted by the preconditioned conjugate gradient method and exponential convergence rates in numerical approximations are demonstrated for suitable benchmark problems including the hydrogenlike orbitals of nitrogen.

  3. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    PubMed

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even impossibility to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date, because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for the model parameterization results in much more accurate prediction of the latter, although with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results point to the urgent need for extensive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. PMID:27272707

  4. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    PubMed

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even impossibility to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date, because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for the model parameterization results in much more accurate prediction of the latter, although with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results point to the urgent need for extensive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future.

  5. A simple accurate method to predict time of ponding under variable intensity rainfall

    NASA Astrophysics Data System (ADS)

    Assouline, S.; Selker, J. S.; Parlange, J.-Y.

    2007-03-01

    The prediction of the time to ponding following commencement of rainfall is fundamental to hydrologic prediction of flood, erosion, and infiltration. Most of the studies to date have focused on prediction of ponding resulting from simple rainfall patterns. This approach was suitable for rainfall reported as average values over intervals of up to a day, but it does not take advantage of the complex patterns of actual rainfall now commonly recorded electronically. A straightforward approach is presented that includes the instantaneous rainfall record in the prediction of ponding time and excess rainfall using only the infiltration capacity curve. This method is tested against a numerical solution of the Richards equation on the basis of an actual rainfall record. The predicted time to ponding showed mean error ≤7% for a broad range of soils, with and without surface sealing. In contrast, the standard predictions had average errors of 87%, and worst-case errors exceeding a factor of 10. In addition to errors intrinsic in the modeling framework itself, errors that arise from averaging actual rainfall records over reporting intervals were evaluated. Averaging actual rainfall records observed in Israel over periods of as little as 5 min significantly reduced predicted runoff (75% for the sealed sandy loam and 46% for the silty clay loam), while hourly averaging failed to predict ponding at all in some of the cases.
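
    One common way to use an instantaneous rainfall record with an infiltration capacity curve is to accumulate infiltration until the momentary rainfall intensity first exceeds the capacity at the current cumulative infiltration. The Python sketch below does this with a Green-Ampt-type capacity curve; the capacity function and parameter values are illustrative assumptions, not the soils or formulation of the paper.

      import numpy as np

      def time_of_ponding(times, rain_rate, capacity):
          # times: time stamps (h); rain_rate: intensities (mm/h); capacity: f_c(F) in mm/h
          F = 0.0                                     # cumulative infiltration (mm)
          for i in range(len(times) - 1):
              if rain_rate[i] > capacity(F):
                  return times[i]                     # ponding: supply exceeds infiltration capacity
              F += rain_rate[i] * (times[i + 1] - times[i])   # before ponding, all rain infiltrates
          return None                                 # no ponding within the record

      # Illustrative Green-Ampt capacity f_c(F) = Ks * (1 + psi_dtheta / F); parameters are assumed
      Ks, psi_dtheta = 5.0, 60.0                      # mm/h, mm
      capacity = lambda F: Ks * (1.0 + psi_dtheta / max(F, 1e-6))

      t = np.arange(0.0, 6.0, 1.0 / 12.0)             # 5-minute record over 6 hours
      rain = 10.0 + 8.0 * np.sin(2 * np.pi * t / 3.0).clip(0.0)   # made-up rainfall (mm/h)
      print(time_of_ponding(t, rain, capacity))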

  6. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  7. Accurate similarity index based on activity and connectivity of node for link prediction

    NASA Astrophysics Data System (ADS)

    Li, Longjie; Qian, Lvjian; Wang, Xiaoping; Luo, Shishun; Chen, Xiaoyun

    2015-05-01

    Recent years have witnessed an increase in available network data; however, much of that data is incomplete. Link prediction, which can find the missing links of a network, plays an important role in the research and analysis of complex networks. Based on the assumption that two unconnected nodes which are highly similar are very likely to have an interaction, most of the existing algorithms solve the link prediction problem by computing nodes' similarities. The fundamental requirement of those algorithms is an accurate and effective similarity index. In this paper, we propose a new similarity index, namely similarity based on activity and connectivity (SAC), which performs link prediction more accurately. To compute the similarity between two nodes, this index employs the average activity of these two nodes in their common neighborhood and the connectivities between them and their common neighbors. The higher the average activity is and the stronger the connectivities are, the more similar the two nodes are. The proposed index not only commendably distinguishes the contributions of paths but also incorporates the influence of endpoints. Therefore, it can achieve better prediction results. To verify the performance of SAC, we conduct experiments on 10 real-world networks. Experimental results demonstrate that SAC outperforms the compared baselines.
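
    For readers unfamiliar with similarity-based link prediction, the Python sketch below scores a candidate node pair from its common neighborhood, weighting each common neighbor by the endpoints' activity (taken here simply as degree) and by their connectivity to that neighbor. The weighting is only illustrative of the idea described above; the exact SAC definition is given in the paper.

      import networkx as nx

      def activity_connectivity_score(G, u, v):
          # Illustrative similarity score for link prediction (not the exact SAC formula).
          # Each common neighbor w contributes the average degree ("activity") of u and v,
          # scaled by how strongly u and v connect to w (edge weights, defaulting to 1).
          score = 0.0
          for w in nx.common_neighbors(G, u, v):
              activity = 0.5 * (G.degree(u) + G.degree(v))
              connectivity = G[u][w].get("weight", 1.0) + G[v][w].get("weight", 1.0)
              score += activity * connectivity / G.degree(w)   # damp hub neighbors
          return score

      # Example: rank all unconnected pairs of a small benchmark graph by this score
      G = nx.karate_club_graph()
      pairs = [(u, v) for u in G for v in G if u < v and not G.has_edge(u, v)]
      ranked = sorted(pairs, key=lambda p: activity_connectivity_score(G, *p), reverse=True)
      print(ranked[:5])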

  8. Hybrid predictions of railway induced ground vibration using a combination of experimental measurements and numerical modelling

    NASA Astrophysics Data System (ADS)

    Kuo, K. A.; Verbraken, H.; Degrande, G.; Lombaert, G.

    2016-07-01

    Along with the rapid expansion of urban rail networks comes the need for accurate predictions of railway induced vibration levels at grade and in buildings. Current computational methods for making predictions of railway induced ground vibration rely on simplifying modelling assumptions and require detailed parameter inputs, which lead to high levels of uncertainty. It is possible to mitigate these issues using a combination of field measurements and state-of-the-art numerical methods, known as a hybrid model. In this paper, two hybrid models are developed, based on the use of separate source and propagation terms that are quantified using in situ measurements or modelling results. These models are implemented using term definitions proposed by the Federal Railroad Administration and assessed using the specific illustration of a surface railway. It is shown that the limitations of numerical and empirical methods can be addressed in a hybrid procedure without compromising prediction accuracy.

  9. Towards more accurate wind and solar power prediction by improving NWP model physics

    NASA Astrophysics Data System (ADS)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of the remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m above ground are used for the estimation of the (NWP) wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  10. Accurate prediction of the linear viscoelastic properties of highly entangled mono and bidisperse polymer melts.

    PubMed

    Stephanou, Pavlos S; Mavrantzas, Vlasis G

    2014-06-01

    We present a hierarchical computational methodology which permits the accurate prediction of the linear viscoelastic properties of entangled polymer melts directly from the chemical structure, chemical composition, and molecular architecture of the constituent chains. The method entails three steps: execution of long molecular dynamics simulations with moderately entangled polymer melts, self-consistent mapping of the accumulated trajectories onto a tube model and parameterization or fine-tuning of the model on the basis of detailed simulation data, and use of the modified tube model to predict the linear viscoelastic properties of significantly higher molecular weight (MW) melts of the same polymer. Predictions are reported for the zero-shear-rate viscosity η0 and the spectra of storage G'(ω) and loss G″(ω) moduli for several mono and bidisperse cis- and trans-1,4 polybutadiene melts as well as for their MW dependence, and are found to be in remarkable agreement with experimentally measured rheological data. PMID:24908037

  11. Accurate prediction of the linear viscoelastic properties of highly entangled mono and bidisperse polymer melts

    NASA Astrophysics Data System (ADS)

    Stephanou, Pavlos S.; Mavrantzas, Vlasis G.

    2014-06-01

    We present a hierarchical computational methodology which permits the accurate prediction of the linear viscoelastic properties of entangled polymer melts directly from the chemical structure, chemical composition, and molecular architecture of the constituent chains. The method entails three steps: execution of long molecular dynamics simulations with moderately entangled polymer melts, self-consistent mapping of the accumulated trajectories onto a tube model and parameterization or fine-tuning of the model on the basis of detailed simulation data, and use of the modified tube model to predict the linear viscoelastic properties of significantly higher molecular weight (MW) melts of the same polymer. Predictions are reported for the zero-shear-rate viscosity η0 and the spectra of storage G'(ω) and loss G″(ω) moduli for several mono and bidisperse cis- and trans-1,4 polybutadiene melts as well as for their MW dependence, and are found to be in remarkable agreement with experimentally measured rheological data.

  12. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
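
    When closed-form partials are unavailable, the STM can be obtained by integrating the variational equations dΦ/dt = A(t)Φ with Φ(t0) = I alongside the state. The Python sketch below does this for plain two-body dynamics using SciPy's DOP853 integrator (an eighth-order Dormand-Prince pair); the high-fidelity force models, hardware models and time-of-flight derivatives described above are beyond this illustration.

      import numpy as np
      from scipy.integrate import solve_ivp

      MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter

      def eom_with_stm(t, z):
          # Two-body equations of motion augmented with the variational equations for the STM
          r, v = z[:3], z[3:6]
          phi = z[6:].reshape(6, 6)
          rn = np.linalg.norm(r)
          a = -MU * r / rn**3
          G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)   # da/dr
          A = np.zeros((6, 6))
          A[:3, 3:] = np.eye(3)
          A[3:, :3] = G
          return np.concatenate([v, a, (A @ phi).ravel()])

      # Propagate one hour from a circular low-Earth orbit; the STM maps initial-state perturbations
      x0 = np.array([7000.0, 0.0, 0.0, 0.0, 7.546, 0.0])
      z0 = np.concatenate([x0, np.eye(6).ravel()])
      sol = solve_ivp(eom_with_stm, (0.0, 3600.0), z0, method="DOP853", rtol=1e-12, atol=1e-12)
      stm = sol.y[6:, -1].reshape(6, 6)
      print(stm)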

  13. Prediction of Accurate Thermochemistry of Medium and Large Sized Radicals Using Connectivity-Based Hierarchy (CBH).

    PubMed

    Sengupta, Arkajyoti; Raghavachari, Krishnan

    2014-10-14

    Accurate modeling of the chemical reactions in many diverse areas such as combustion, photochemistry, or atmospheric chemistry strongly depends on the availability of thermochemical information for the radicals involved. However, accurate thermochemical investigations of radical systems using state-of-the-art composite methods have mostly been restricted to the study of hydrocarbon radicals of modest size. In an alternative approach, systematic error-canceling thermochemical hierarchies of reaction schemes can be applied to yield accurate results for such systems. In this work, we have extended our connectivity-based hierarchy (CBH) method to the investigation of radical systems. We have calibrated our method using a test set of 30 medium-sized radicals to evaluate their heats of formation. The CBH-rad30 test set contains radicals containing diverse functional groups as well as cyclic systems. We demonstrate that the sophisticated error-canceling isoatomic scheme (CBH-2) with modest levels of theory is adequate to provide heats of formation accurate to ∼1.5 kcal/mol. Finally, we predict heats of formation of 19 other large and medium-sized radicals for which the accuracy of available heats of formation is less well known. PMID:26588131

  14. Forecasts of time averages with a numerical weather prediction model

    NASA Technical Reports Server (NTRS)

    Roads, J. O.

    1986-01-01

    Forecasts of time averages of 1-10 days in duration by an operational numerical weather prediction model are documented for the global 500 mb height field in spectral space. Error growth in very idealized models is described in order to anticipate various features of these forecasts and what the results might be if forecasts longer than 10 days were carried out by present-day numerical weather prediction models. The data set for this study is described, the equilibrium spectra and error spectra are documented, and then the total error is documented. It is shown how forecasts can immediately be improved by removing the systematic error, by using statistical filters, and by ignoring forecasts beyond about a week. Temporal variations in the error field are also documented.
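
    Removing the systematic error amounts to subtracting from each forecast the mean forecast-minus-analysis error estimated over an independent training sample at the same lead time. A minimal Python sketch of that correction, with synthetic data standing in for the spectral height fields, is given below.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic stand-ins: forecasts and verifying analyses, shape (n_cases, n_coefficients)
      analyses  = rng.normal(size=(200, 50))
      forecasts = analyses + 0.3 + 0.5 * rng.normal(size=(200, 50))   # biased, noisy forecasts

      train, test = slice(0, 150), slice(150, 200)

      # Systematic error estimated on the training period only, then removed from new forecasts
      bias = (forecasts[train] - analyses[train]).mean(axis=0)
      corrected = forecasts[test] - bias

      rmse_raw       = np.sqrt(((forecasts[test] - analyses[test]) ** 2).mean())
      rmse_corrected = np.sqrt(((corrected       - analyses[test]) ** 2).mean())
      print(rmse_raw, rmse_corrected)   # the corrected forecasts should verify better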

  15. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    SciTech Connect

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.

  16. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space.

    PubMed

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O Anatole; Müller, Klaus-Robert; Tkatchenko, Alexandre

    2015-06-18

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
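
    The machine learning approaches referred to here are typically kernel ridge regression models trained on a vectorized molecular representation. The Python sketch below shows generic kernel ridge regression with a Laplacian kernel on arbitrary feature vectors; the construction of the actual Bag of Bonds descriptor and the benchmark data sets are described in the paper.

      import numpy as np

      def laplacian_kernel(X1, X2, sigma):
          # K[i, j] = exp(-||x1_i - x2_j||_1 / sigma)
          d = np.abs(X1[:, None, :] - X2[None, :, :]).sum(axis=2)
          return np.exp(-d / sigma)

      def krr_fit(X, y, sigma, lam):
          # Kernel ridge regression: solve (K + lam*I) alpha = y
          K = laplacian_kernel(X, X, sigma)
          return np.linalg.solve(K + lam * np.eye(len(y)), y)

      def krr_predict(X_train, alpha, X_new, sigma):
          return laplacian_kernel(X_new, X_train, sigma) @ alpha

      # Synthetic stand-in for descriptor vectors and target energies
      rng = np.random.default_rng(1)
      X = rng.uniform(size=(300, 20))
      y = np.sin(X.sum(axis=1)) + 0.01 * rng.normal(size=300)

      alpha = krr_fit(X[:250], y[:250], sigma=5.0, lam=1e-8)
      pred = krr_predict(X[:250], alpha, X[250:], sigma=5.0)
      print(np.mean(np.abs(pred - y[250:])))   # mean absolute error on the held-out set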

  17. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    SciTech Connect

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O. Anatole; Müller, Klaus -Robert; Tkatchenko, Alexandre

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.

  18. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O. Anatole; Müller, Klaus -Robert; Tkatchenko, Alexandre

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.

  19. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space

    PubMed Central

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956

  20. Effects of the inlet conditions and blood models on accurate prediction of hemodynamics in the stented coronary arteries

    NASA Astrophysics Data System (ADS)

    Jiang, Yongfei; Zhang, Jun; Zhao, Wanhua

    2015-05-01

    Hemodynamics altered by stent implantation is well known to be closely related to in-stent restenosis. The computational fluid dynamics (CFD) method has been used to investigate the hemodynamics in stented arteries in detail and to help analyze the performance of stents. In this study, blood models with Newtonian or non-Newtonian properties were numerically investigated for the hemodynamics at steady or pulsatile inlet conditions, respectively, employing CFD based on the finite volume method. The results showed that the non-Newtonian blood model decreased the area of low wall shear stress (WSS) compared with the Newtonian model, and that the magnitude of WSS varied with the magnitude and waveform of the inlet velocity. The study indicates that the inlet conditions and blood models are both important for accurately predicting the hemodynamics. This will be beneficial for estimating the performance of stents and will also help clinicians select the proper stents for their patients.
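
    A widely used way to give blood a non-Newtonian (shear-thinning) character in such CFD studies is the Carreau viscosity model. The Python sketch below evaluates it with parameter values commonly quoted for blood; whether this particular model and these constants were used in the study above is not stated in the abstract.

      import numpy as np

      def carreau_viscosity(gamma_dot, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
          # Carreau model: mu = mu_inf + (mu0 - mu_inf) * [1 + (lam*gamma_dot)^2]^((n-1)/2)
          # gamma_dot in 1/s; mu0, mu_inf in Pa*s; lam in s; n dimensionless.
          # The default constants are values commonly quoted for blood (an assumption here).
          return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

      # At high shear rates the model approaches the Newtonian value often used for blood
      for g in (0.1, 1.0, 10.0, 100.0, 1000.0):
          print(g, carreau_viscosity(g))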

  1. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variety of experimental data sets, such as UH-60A data, DNW test data and HART II test data.

  2. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  3. Towards more accurate numerical modeling of impedance based high frequency harmonic vibration

    NASA Astrophysics Data System (ADS)

    Lim, Yee Yan; Kiong Soh, Chee

    2014-03-01

    The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.

  4. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE PAGES

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  5. Accurate Prediction of Severe Allergic Reactions by a Small Set of Environmental Parameters (NDVI, Temperature)

    PubMed Central

    Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation in a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment-related diseases and conditions. PMID:25794106

  6. SIFTER search: a web server for accurate phylogeny-based protein function prediction.

    PubMed

    Sahraeian, Sayed M; Luo, Kevin R; Brenner, Steven E

    2015-07-01

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded. PMID:25979264

  7. Microstructure-Dependent Gas Adsorption: Accurate Predictions of Methane Uptake in Nanoporous Carbons

    SciTech Connect

    Ihm, Yungok; Cooper, Valentino R; Gallego, Nidia C; Contescu, Cristian I; Morris, James R

    2014-01-01

    We demonstrate a successful, efficient framework for predicting gas adsorption properties in real materials based on first-principles calculations, with a specific comparison of experiment and theory for methane adsorption in activated carbons. These carbon materials have different pore size distributions, leading to a variety of uptake characteristics. Utilizing these distributions, we accurately predict experimental uptakes and heats of adsorption without empirical potentials or lengthy simulations. We demonstrate that materials with smaller pores have higher heats of adsorption, leading to a higher gas density in these pores. This pore-size dependence must be accounted for, in order to predict and understand the adsorption behavior. The theoretical approach combines: (1) ab initio calculations with a van der Waals density functional to determine adsorbent-adsorbate interactions, and (2) a thermodynamic method that predicts equilibrium adsorption densities by directly incorporating the calculated potential energy surface in a slit pore model. The predicted uptake at P=20 bar and T=298 K is in excellent agreement for all five activated carbon materials used. This approach uses only the pore-size distribution as an input, with no fitting parameters or empirical adsorbent-adsorbate interactions, and thus can be easily applied to other adsorbent-adsorbate combinations.

  8. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    SciTech Connect

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  9. Change in heat capacity accurately predicts vibrational coupling in enzyme catalyzed reactions.

    PubMed

    Arcus, Vickery L; Pudney, Christopher R

    2015-08-01

    The temperature dependence of kinetic isotope effects (KIEs) has been used to infer the vibrational coupling of the protein and/or substrate to the reaction coordinate, particularly in enzyme-catalyzed hydrogen transfer reactions. We find that a new model for the temperature dependence of experimentally determined observed rate constants (macromolecular rate theory, MMRT) is able to accurately predict the occurrence of vibrational coupling, even where the temperature dependence of the KIE fails. This model, which incorporates the change in heat capacity for enzyme catalysis, demonstrates remarkable consistency with both experiment and theory and in many respects is more robust than models used at present.
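
    Macromolecular rate theory extends transition state theory with a non-zero activation heat capacity, ΔCp‡, which produces curvature in plots of ln(k) against temperature. The Python sketch below evaluates the commonly quoted MMRT expression; the reference temperature and parameter values are placeholders, not fitted values from the paper.

      import numpy as np

      KB, H, R = 1.380649e-23, 6.62607015e-34, 8.314462618   # J/K, J*s, J/(mol*K)

      def ln_rate_mmrt(T, dH, dS, dCp, T0=298.15):
          # ln k(T) from macromolecular rate theory: transition state theory with constant dCp
          # dH (J/mol) and dS (J/mol/K) are the activation enthalpy/entropy at T0;
          # dCp (J/mol/K) is the activation heat capacity change. Values below are placeholders.
          dH_T = dH + dCp * (T - T0)
          dS_T = dS + dCp * np.log(T / T0)
          return np.log(KB * T / H) - dH_T / (R * T) + dS_T / R

      # A negative dCp produces the curvature (and rate optimum) that MMRT describes
      T = np.linspace(278.0, 328.0, 11)
      print(np.column_stack([T, ln_rate_mmrt(T, dH=50e3, dS=-40.0, dCp=-5e3)]))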

  10. Accurate verification of the conserved-vector-current and standard-model predictions

    SciTech Connect

    Sirlin, A.; Zucchini, R.

    1986-10-20

    An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by less than about 1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.

  11. The use of experimental bending tests to more accurate numerical description of TBC damage process

    NASA Astrophysics Data System (ADS)

    Sadowski, T.; Golewski, P.

    2016-04-01

    Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads. These loads are created by the high rotational speed of the rotor (30 000 rot/min), causing tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending with various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. The above-mentioned results were used to build a numerical model and calibrate the material data in the Abaqus program. A brittle cracking damage model was applied to the TBC layer, which allows elements to be removed after the failure criterion is reached. Surface-based cohesive behavior was used to model the delamination that may occur at the boundary between the bond coat and the top coat.

  12. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, global warming and questions on climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of model bias and the reduction of error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work comes from the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters and state-of-the-art numerical analysis tools.
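
    A standard use of Kalman filtering in this setting is a recursive, low-dimensional estimate of the forecast bias, updated each time a new observation verifies a forecast. The scalar Python sketch below illustrates that recursion; it is generic and does not include the Information Geometry-based extensions developed in this work.

      import numpy as np

      def kalman_bias_correction(forecasts, observations, q=0.05, r=1.0):
          # Recursively estimate the forecast bias with a scalar Kalman filter and correct forecasts.
          # q: variance of the random walk assumed for the bias (process noise)
          # r: variance of the observed forecast error (measurement noise)
          bias, p = 0.0, 1.0                      # initial bias estimate and its variance
          corrected = np.empty_like(forecasts)
          for i, (f, o) in enumerate(zip(forecasts, observations)):
              corrected[i] = f - bias             # correct using the bias known before this case
              p += q                              # predict step: the bias may have drifted
              k = p / (p + r)                     # Kalman gain
              bias += k * ((f - o) - bias)        # update with the newly observed forecast error
              p *= (1.0 - k)
          return corrected

      rng = np.random.default_rng(2)
      truth = 10.0 + np.sin(np.linspace(0, 20, 300))
      fcst = truth + 1.5 + 0.5 * rng.normal(size=300)        # forecasts with a +1.5 bias
      corr = kalman_bias_correction(fcst, truth)
      print(np.mean(np.abs(fcst - truth)), np.mean(np.abs(corr - truth)))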

  13. Theoretical and numerical predictions of hypervelocity impact-generated plasma

    SciTech Connect

    Li, Jianqiao; Song, Weidong Ning, Jianguo

    2014-08-15

    Hypervelocity impact-generated plasmas (HVIGPs) in a thermodynamic non-equilibrium state were theoretically analyzed, and a physical model was presented to explore the relationship between the plasma ionization degree and the internal energy of the system through a group of equations including a chemical reaction equilibrium equation, a chemical reaction rate equation, and an energy conservation equation. A series of AUTODYN 3D numerical simulations (a code widely used in dynamic numerical simulations, developed by Century Dynamics Inc.) of the impacts of a hypervelocity Al projectile on its targets at different incidence angles was performed. The internal energy and the material density obtained from the numerical simulations were then used to calculate the ionization degree and the electron temperature. Based on a self-developed 2D smoothed particle hydrodynamics (SPH) code and the theoretical model, the plasmas generated by 6 hypervelocity impacts were directly simulated and their total charges were calculated. The numerical results are in good agreement with the experimental results as well as the empirical formulas, demonstrating that the theoretical model is justified by the AUTODYN 3D and self-developed 2D SPH simulations and is applicable to predicting HVIGPs. The study is of significance for astrophysical and spaceflight research and safety.

  14. ILT based defect simulation of inspection images accurately predicts mask defect printability on wafer

    NASA Astrophysics Data System (ADS)

    Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2016-05-01

    At advanced technology nodes, mask complexity has increased because of the large-scale use of resolution enhancement technologies (RET), which include Optical Proximity Correction (OPC), Inverse Lithography Technology (ILT) and Source Mask Optimization (SMO). The number of defects detected during inspection of such masks has increased drastically, and differentiating critical from non-critical defects is more challenging, complex and time consuming. Because of the significant defectivity of EUVL masks and the non-availability of actinic inspection, it is important and also challenging to predict the criticality of defects for printability on the wafer. This is one of the significant barriers to the adoption of EUVL for semiconductor manufacturing. Techniques to determine the criticality of defects from images captured using non-actinic inspection are desired until actinic inspection becomes available. High-resolution inspection of photomask images detects many defects, which are used for process and mask qualification. Repairing all defects is not practical and probably not required; however, it is imperative to know, before repair, which defects are severe enough to impact the wafer. Additionally, a wafer printability check is always desired after repairing a defect. AIMS™ review is the industry standard for this; however, doing AIMS™ review for all defects is expensive and very time consuming. A fast, accurate and economical mechanism is desired that can predict defect printability on the wafer accurately and quickly from images captured using a high-resolution inspection machine. Predicting defect printability from such images is challenging due to the fact that the high-resolution images do not correlate with actual mask contours. The challenge is increased by the use of optical conditions during inspection that differ from the actual scanner conditions, and defects found in such images do not correlate directly with their actual impact on the wafer. Our automated defect simulation tool predicts

  15. Evaluating the Impact of Aerosols on Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Freitas, Saulo; Silva, Arlindo; Benedetti, Angela; Grell, Georg; Members, Wgne; Zarzur, Mauricio

    2015-04-01

    The Working Group on Numerical Experimentation (WMO, http://www.wmo.int/pages/about/sec/rescrosscut/resdept_wgne.html) has organized an exercise to evaluate the impact of aerosols on NWP. This exercise will involve regional and global models currently used for weather forecasting by operational centers worldwide and aims at addressing the following questions: a) How important are aerosols for predicting the physical system (NWP, seasonal, climate) as distinct from predicting the aerosols themselves? b) How important is atmospheric model quality for air quality forecasting? c) What are the current capabilities of NWP models to simulate aerosol impacts on weather prediction? Toward this goal we have selected 3 strong or persistent events of aerosol pollution worldwide that could be fairly represented in current NWP models and that allowed for an evaluation of the aerosol impact on weather prediction. The selected events include a strong dust storm that blew off the coast of Libya and over the Mediterranean, an extremely severe episode of air pollution in Beijing and surrounding areas, and an extreme case of biomass burning smoke in Brazil. The experimental design calls for simulations with and without explicitly accounting for aerosol feedbacks in the cloud and radiation parameterizations. In this presentation we will summarize the results of this study, focusing on the evaluation of model performance in terms of its ability to faithfully simulate aerosol optical depth, and on the assessment of the aerosol impact on the predictions of near-surface wind, temperature, humidity, rainfall and the surface energy budget.

  16. An Improved Numerical Integration Method for Springback Predictions

    NASA Astrophysics Data System (ADS)

    Ibrahim, R.; Smith, L. M.; Golovashchenko, Sergey F.

    2011-08-01

    In this investigation, the focus is on the springback of steel sheets in V-die air bending. A numerical integration algorithm presented rigorously in [1] for predicting springback in air bending was fully replicated and confirmed. Alterations and extensions to the algorithm are proposed here. The altered approach to solving the moment equation numerically resulted in springback values much closer to the trend of the experimental data. Although the investigation was extended to a more realistic work-hardening model, the differences in the springback values obtained with the two hardening models were almost negligible. The algorithm was also extended to thin sheets down to 0.8 mm. Results show that this extension is valid, as verified by FEA and other published experiments on TRIP steel sheets.

  17. Numerical prediction of low frequency combustion instability in a model ramjet combustor

    SciTech Connect

    Shang, H.M.; Chen, Y.S.; Shih, M.S.; Farmer, R.C.

    1996-12-31

    A numerical analysis has been conducted for low-frequency combustion instability in a model ramjet combustor. The facility is two-dimensional and comprises a long inlet duct, a dump combustor chamber, and an exhaust nozzle. Experiments showed that the combustor pressure oscillation under this particular operating condition had little cycle-to-cycle variation; the main resonant frequency occurs at about 65 Hz for this case. In the numerical analysis, a time-accurate Computational Fluid Dynamics (CFD) code with a pressure-correction algorithm is used, and the combustion process is modeled with a single-step chemistry model and a modified eddy breakup model. A high-order upwind scheme with a flux limiter is used for the convection terms, and the convergence of the linear algebraic equations is accelerated through a preconditioned conjugate gradient matrix solver. The numerical predictions show that the flame oscillates in the combustion chamber at the calculated condition, in agreement with the experimental schlieren photographs. The numerical analyses correctly predict the chamber pressure oscillation, although the frequency is over-predicted compared with the experimental data. The discrepancy can be explained by the simplified turbulence and combustion models used in this study and by the uncertainty of the inlet boundary conditions.

  18. Numerical geology: Predicting depositional and diagenetic facies from wireline logs using core data

    SciTech Connect

    Altunbay, M.; Barr, D.C.; Kennaird, A.F.; Manning, D.K.

    1994-12-31

    To exploit a reservoir, the geological model must accurately define the depositional environment and the effects of diagenesis on the pore network. Current methods for establishing the geological model of a field usually require subjective, qualitative interpretation of geological and petrophysical data. A method, Numerical Geology, has been developed that greatly reduces the subjectivity in geological modeling efforts. This method also allows geological attributes to be quantified and predicted. Numerical Geology involves the integration of petrophysical, petrological and geological data with wireline log responses. The geology of "hydraulic units" or "flow units" (intervals with similar hydraulic characteristics) is described using conventional sedimentology, petrography and core analysis data. These data are translated into a matrix of geological indices classified according to the hydraulic unit profile of the section. Hydraulic units are then predicted for uncored sections based on their unique log signatures, which are obtained from cored sections. By combining the predicted hydraulic unit profile with the matrix of geological indices for each flow unit, profiles of geological attributes are derived. The prediction reliability of hydraulic units is calculated based on the uniqueness of the log signatures for each flow unit, so a confidence level can be assigned to the estimated profiles of geological attributes. This eliminates much of the subjectivity from future geological interpretations and predictions.

  19. Toward an Accurate Prediction of the Arrival Time of Geomagnetic-Effective Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Shi, T.; Wang, Y.; Wan, L.; Cheng, X.; Ding, M.; Zhang, J.

    2015-12-01

    Accurately predicting the arrival of coronal mass ejections (CMEs) at the Earth based on remote images is of critical significance for the study of space weather. Here we make a statistical study of 21 Earth-directed CMEs, specifically exploring the relationship between CME initial speeds and transit times. The initial speed of a CME is obtained by fitting the CME with the Graduated Cylindrical Shell model and is thus free of projection effects. We then use the drag force model to fit the transit time as a function of the initial speed. Adopting different drag regimes, i.e., the viscous, aerodynamic, and hybrid regimes, gives similar results, with the hybrid model yielding the lowest mean estimation error, 12.9 hr. CMEs with a propagation angle (the angle between the propagation direction and the Sun-Earth line) larger than their half-angular widths arrive at the Earth with an angular deviation caused by factors other than the radial solar wind drag, and the drag force model cannot be reliably applied to such events. If we exclude these events from the sample, the prediction accuracy improves, i.e., the estimation error reduces to 6.8 hr. This work suggests that it is viable to predict the arrival time of CMEs at the Earth from the initial parameters with fairly good accuracy, and it thus provides a method of forecasting space weather 1-5 days following the occurrence of CMEs.
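
    As a rough illustration of the drag-based idea (not the fitting procedure or parameter values used in the study), the sketch below integrates a CME outward from the corona under an aerodynamic drag acceleration toward the ambient solar wind speed; the drag parameter, wind speed and starting height are assumed, illustrative values.

      # Minimal drag-based transit-time sketch (illustrative parameters only)
      AU = 1.496e8                 # Sun-Earth distance [km]
      RS = 6.957e5                 # solar radius [km]

      def transit_time_hours(v0, w=450.0, gamma=0.2e-7, r0=20 * RS, dt=60.0):
          """Integrate dv/dt = -gamma*(v - w)*|v - w| from r0 out to 1 AU."""
          r, v, t = r0, v0, 0.0
          while r < AU:
              a = -gamma * (v - w) * abs(v - w)   # aerodynamic-drag acceleration [km/s^2]
              v += a * dt
              r += v * dt
              t += dt
          return t / 3600.0

      for v0 in (400.0, 800.0, 1600.0):           # CME initial speeds [km/s]
          print(f"v0 = {v0:6.0f} km/s  ->  transit time ~ {transit_time_hours(v0):5.1f} h")

    With these assumed values the predicted transit times fall within the 1-5 day window mentioned above.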

  20. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water

    SciTech Connect

    Shvab, I.; Sadus, Richard J.

    2013-11-21

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm^3 for a wide range of temperatures (298–650 K) and pressures (0.1–700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.

  1. Direct Pressure Monitoring Accurately Predicts Pulmonary Vein Occlusion During Cryoballoon Ablation

    PubMed Central

    Kosmidou, Ioanna; Wooden, Shannnon; Jones, Brian; Deering, Thomas; Wickliffe, Andrew; Dan, Dan

    2013-01-01

    Cryoballoon ablation (CBA) is an established therapy for atrial fibrillation (AF). Pulmonary vein (PV) occlusion is essential for achieving antral contact and PV isolation and is typically assessed by contrast injection. We present a novel method of direct pressure monitoring for assessment of PV occlusion. Transcatheter pressure is monitored during balloon advancement to the PV antrum. Pressure is recorded via a single pressure transducer connected to the inner lumen of the cryoballoon. Pressure curve characteristics are used to assess occlusion in conjunction with fluoroscopic or intracardiac echocardiography (ICE) guidance. PV occlusion is confirmed when loss of the typical left atrial (LA) pressure waveform is observed and PA pressure characteristics are recorded (no A wave and rapid V wave upstroke). Complete pulmonary vein occlusion as assessed with this technique was confirmed by concurrent contrast utilization during the initial testing of the technique and has been shown to be highly accurate and readily reproducible. We evaluated the efficacy of this novel technique in 35 patients. A total of 128 veins were assessed for occlusion with the cryoballoon utilizing the pressure monitoring technique; occlusive pressure was demonstrated in 113 veins, with successful pulmonary vein isolation in 111 of these veins (98.2%). Occlusion was confirmed with subsequent contrast injection during the initial ten procedures, after which contrast utilization was rapidly reduced or eliminated given the highly accurate identification of the occlusive pressure waveform with limited initial training. Verification of PV occlusive pressure during CBA is a novel approach to assessing effective PV occlusion, and it accurately predicts electrical isolation. Utilization of this method results in a significant decrease in fluoroscopy time and volume of contrast.

  2. A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows

    NASA Astrophysics Data System (ADS)

    Bijleveld, H. A.; Veldman, A. E. P.

    2014-12-01

    A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software as it is very accurate and computationally reasonably cheap. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive, thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method to rotating flows. The demonstrated capabilities of the method indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.

  3. Distance scaling method for accurate prediction of slowly varying magnetic fields in satellite missions

    NASA Astrophysics Data System (ADS)

    Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.

    2016-07-01

    In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. Parameters concerning the location of the observation points (magnetometers) are therefore studied to this end. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study enable the magnetometers to be placed close to the EUT, thus achieving a high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as the EUT.
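
    A minimal sketch of the underlying fall-off fit, with hypothetical magnetometer readings: the exponent n and amplitude A of B(r) = A / r^n are obtained by least squares in log-log space and then used to extrapolate the signature to a larger distance.

      import numpy as np

      r = np.array([0.3, 0.4, 0.5, 0.7, 1.0])          # magnetometer distances [m] (assumed)
      B = np.array([370.0, 156.0, 80.0, 29.0, 10.0])   # measured AC field amplitudes [nT] (assumed)

      # Least-squares fit of log B = log A - n log r
      slope, intercept = np.polyfit(np.log(r), np.log(B), 1)
      n, A = -slope, np.exp(intercept)
      print(f"fitted fall-off exponent n = {n:.2f} (an ideal dipole gives n = 3)")

      r_far = 3.0                                      # extrapolation distance [m]
      print(f"predicted field magnitude at {r_far} m: {A / r_far**n:.3f} nT")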

  4. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus.

    PubMed

    Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A

    2016-03-01

    Visual and auditory information are critical for perception and enhance the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of the stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of the stimulus based on visual and auditory information. In this study, we recruited 9 expert table-tennis players and used the table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of the service (topspin, sidespin, or cut). We recorded their responses and quantified the following outcomes: (i) directional accuracy and (ii) rotational motion accuracy. Response accuracy was calculated as the number of accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information but not with additional visual information alone. In conclusion, this finding demonstrates that visual information enhances the ability of an individual to accurately predict the direction of the stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict the rotational motion of the stimulus.

  5. In vitro transcription accurately predicts lac repressor phenotype in vivo in Escherichia coli

    PubMed Central

    2014-01-01

    A multitude of studies have looked at the in vivo and in vitro behavior of the lac repressor binding to DNA and effector molecules in order to study transcriptional repression; however, these studies are not always reconcilable. Here we use in vitro transcription to directly mimic the in vivo system in order to build a self-consistent set of experiments to directly compare in vivo and in vitro genetic repression. A thermodynamic model of the lac repressor binding to operator DNA and effector is used to link DNA occupancy to either normalized in vitro mRNA product or normalized in vivo fluorescence of a regulated gene, YFP. Accurate measurements of repressor, DNA and effector concentrations were made both in vivo and in vitro, allowing for direct modeling of the entire thermodynamic equilibrium. In vivo repression profiles are accurately predicted from the given in vitro parameters when molecular crowding is considered. Interestingly, our measured repressor–operator DNA affinity differs significantly from previous in vitro measurements. The literature values are unable to replicate the in vivo binding data. We therefore conclude that the repressor-DNA affinity is much weaker than previously thought. This finding suggests that in vitro techniques specifically designed to mimic the in vivo process may be necessary to replicate the native system. PMID:25097824
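
    The flavor of such a thermodynamic link between occupancy and expression can be sketched as below; the binding constants, Hill coefficient and repressor concentration are placeholders, not the values measured in the study, and crowding corrections are omitted.

      # Toy thermodynamic occupancy model: IPTG inactivates repressor, active
      # repressor binds the operator, and expression tracks the unbound operator.
      def operator_occupancy(R_total, iptg, Kd_op=1e-10, K_iptg=1e-6, n_hill=2.0):
          active_fraction = 1.0 / (1.0 + (iptg / K_iptg) ** n_hill)
          R_active = R_total * active_fraction            # [M]
          return R_active / (R_active + Kd_op)

      def relative_expression(R_total, iptg):
          return 1.0 - operator_occupancy(R_total, iptg)  # normalized YFP / mRNA proxy

      for iptg in (0.0, 1e-6, 1e-5, 1e-3):                # effector concentrations [M]
          print(f"[IPTG] = {iptg:7.1e} M -> relative expression ~ {relative_expression(5e-8, iptg):.3f}")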

  6. Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain

    SciTech Connect

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-05-14

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m^-2, and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m^-2, and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.
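
    The core of any such metric is an irradiance-weighted average of spectral reflectance, R = ∫ r(λ) I(λ) dλ / ∫ I(λ) dλ; the sketch below evaluates it with toy spectra standing in for the AM1GH or E891 tabulations.

      import numpy as np

      wl = np.linspace(300.0, 2500.0, 500)                  # wavelength [nm]
      irradiance = np.exp(-((wl - 800.0) / 600.0) ** 2)     # toy global solar spectrum [a.u.]
      reflectance = np.where(wl < 700.0, 0.15, 0.80)        # "cool colored": dark visible, bright NIR

      # Irradiance-weighted reflectance on a uniform wavelength grid
      R_weighted = (reflectance * irradiance).sum() / irradiance.sum()
      print(f"irradiance-weighted solar reflectance: {R_weighted:.3f}")

    A NIR-rich irradiance (as in the E891 beam-normal spectrum) would weight the bright near-infrared plateau more heavily and report a higher reflectance, which is the bias the AM1GH metric is meant to avoid.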

  7. Measuring solar reflectance - Part I: Defining a metric that accurately predicts solar heat gain

    SciTech Connect

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-09-15

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland US latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m^-2, and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool roof net energy savings by as much as 23%. We define clear sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m^-2, and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.

  8. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics

    PubMed Central

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Identification of PPIs is therefore critical for deciphering the underlying molecular mechanisms and for providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs obtained by experimental approaches cover only a small fraction of the whole PPI network, and those approaches have inherent disadvantages, such as being time-consuming and expensive and having high false-positive rates. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixture of physicochemical and evolutionary-based feature extraction method for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvement of the proposed method mainly lies in introducing an effective feature extraction method that can capture discriminative features from evolutionary information and physicochemical characteristics, combined with a powerful and robust DVM classifier. To the best of our knowledge, this is the first time that a DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs and can be taken as a useful supplementary tool to traditional experimental methods for future proteomics research. PMID:27571061

  9. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Identification of PPIs is therefore critical for deciphering the underlying molecular mechanisms and for providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs obtained by experimental approaches cover only a small fraction of the whole PPI network, and those approaches have inherent disadvantages, such as being time-consuming and expensive and having high false-positive rates. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixture of physicochemical and evolutionary-based feature extraction method for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvement of the proposed method mainly lies in introducing an effective feature extraction method that can capture discriminative features from evolutionary information and physicochemical characteristics, combined with a powerful and robust DVM classifier. To the best of our knowledge, this is the first time that a DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs and can be taken as a useful supplementary tool to traditional experimental methods for future proteomics research. PMID:27571061

  11. Accurate prediction of solvent accessibility using neural networks-based regression.

    PubMed

    Adamczak, Rafał; Porollo, Aleksey; Meller, Jarosław

    2004-09-01

    Accurate prediction of the relative solvent accessibilities (RSAs) of amino acid residues in proteins may be used to facilitate protein structure prediction and functional annotation. Toward that goal we developed a novel method for improved prediction of RSAs. Contrary to other machine learning-based methods from the literature, we do not impose a classification problem with arbitrary boundaries between the classes. Instead, we seek a continuous approximation of the real-value RSA using nonlinear regression, with several feed-forward and recurrent neural networks, which are then combined into a consensus predictor. A set of 860 protein structures derived from the PFAM database was used for training, whereas validation of the results was carefully performed on several nonredundant control sets comprising a total of 603 structures derived from new Protein Data Bank structures with no homology to proteins included in the training. Two classes of alternative predictors were developed for comparison with the regression-based approach: one based on the standard classification approach and the other based on a semicontinuous approximation with the so-called thermometer encoding. Furthermore, a weighted approximation, with errors scaled by the observed levels of variability in RSA for equivalent residues in families of homologous structures, was applied in order to improve the results. The effects of including evolutionary profiles and the growth of sequence databases were assessed. In accord with the observed levels of variability in RSA for different ranges of RSA values, the regression accuracy is higher for buried than for exposed residues, with overall mean absolute errors of 15.3-15.8% and correlation coefficients between the predicted and experimental values of 0.64-0.67 on different control sets. The new method outperforms classification-based algorithms when the real-value predictions are projected onto two-class classification problems with several commonly

  12. An operational phenological model for numerical pollen prediction

    NASA Astrophysics Data System (ADS)

    Scheifinger, Helfried

    2010-05-01

    The general prevalence of seasonal allergic rhinitis in Europe is estimated to be about 15%, and it is still increasing. Pre-emptive measures require both the reliable assessment of the production and release of various pollen species and the forecasting of their atmospheric dispersion. For this purpose, numerical pollen prediction schemes are being developed by a number of European weather services in order to supplement and improve qualitative pollen prediction systems with state-of-the-art instruments. Pollen emission is spatially and temporally highly variable throughout the vegetation period and is not directly observed, which precludes a straightforward application of dispersion models to simulate pollen transport. Even the beginning and end of flowering, which indicate the time period of potential pollen emission, are not (yet) available in real time. One way to create a proxy for the beginning, the course and the end of the pollen emission is to simulate it as a function of real-time temperature observations. In this work the European phenological data set of the COST725 initiative forms the basis for modelling the beginning of flowering of 15 species, some of which emit allergenic pollen. In order to keep the problem as simple as possible for the sake of spatial interpolation, a three-parameter temperature-sum model was implemented in a real-time operational procedure, which calculates the spatial distribution of the entry dates for the current day and 24, 48 and 72 hours in advance. As a stand-alone phenological model, and combined with back trajectories, it is intended to support the qualitative pollen prediction scheme at the Austrian national weather service. Apart from that, it is planned to incorporate it into a numerical pollen dispersion model. More details, open questions and first results of the operational phenological model will be discussed and presented.
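
    A minimal sketch of a three-parameter temperature-sum model of this kind is given below: a start day t0, a base temperature T_base and a forcing threshold F_star, with daily mean temperatures as input (the parameter values and the synthetic temperature series are purely illustrative).

      import math

      def predict_flowering(daily_mean_temp, t0=32, T_base=5.0, F_star=120.0):
          """Return the day of year on which the temperature sum reaches F_star, else None."""
          forcing = 0.0
          for doy, T in enumerate(daily_mean_temp, start=1):
              if doy < t0:
                  continue                       # accumulation starts at day t0
              forcing += max(T - T_base, 0.0)    # daily forcing above the base temperature
              if forcing >= F_star:
                  return doy
          return None

      # Synthetic annual cycle of daily mean temperature [deg C]
      temps = [2.0 + 12.0 * math.sin((d - 90) / 365.0 * 2.0 * math.pi) for d in range(1, 366)]
      print("predicted beginning of flowering (day of year):", predict_flowering(temps))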

  13. A Free Wake Numerical Simulation for Darrieus Vertical Axis Wind Turbine Performance Prediction

    NASA Astrophysics Data System (ADS)

    Belu, Radian

    2010-11-01

    Over the last four decades, several aerodynamic prediction models have been formulated for Darrieus wind turbine performance and characteristics. Two families can be identified: stream-tube models and vortex models. The paper presents a simplified numerical technique for simulating vertical axis wind turbine flow, based on lifting line theory and a free vortex wake model, including dynamic stall effects, for predicting the performance of a 3-D vertical axis wind turbine. A vortex model is used in which the wake is composed of trailing stream-wise and shedding span-wise vortices, whose strengths are equal to the change in the bound vortex strength as required by the Helmholtz and Kelvin theorems. Performance parameters are computed by application of the Biot-Savart law along with the Kutta-Joukowski theorem and a semi-empirical stall model. We tested the developed model against an adaptation of the earlier multiple stream-tube performance prediction model for Darrieus turbines. Predictions using our method are shown to compare favorably with existing experimental data and with the outputs of other numerical models. The method can accurately predict the local and global performance of a vertical axis wind turbine, and can be used in the design and optimization of wind turbines for built-environment applications.

  14. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    NASA Astrophysics Data System (ADS)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].

  15. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  16. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    PubMed

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
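
    A stripped-down version of such a linear-nonlinear pipeline is sketched below on synthetic data; for brevity the electrical receptive field is taken as the spike-triggered average rather than the principal components analysis used in the paper, and the nonlinearity is a binned estimate of spike probability.

      import numpy as np

      rng = np.random.default_rng(0)
      n_trials, n_electrodes = 5000, 20
      X = rng.normal(size=(n_trials, n_electrodes))            # per-electrode stimulus amplitudes
      true_erf = np.zeros(n_electrodes); true_erf[3:6] = [0.8, 1.0, 0.6]
      spikes = rng.random(n_trials) < 1.0 / (1.0 + np.exp(-(X @ true_erf - 0.5)))

      # Linear stage: one-dimensional subspace from the spike-triggered average
      erf = X[spikes].mean(axis=0)
      erf /= np.linalg.norm(erf)
      z = X @ erf                                               # projection onto the estimated ERF

      # Nonlinear stage: binned estimate of P(spike | z)
      bins = np.linspace(z.min(), z.max(), 15)
      idx = np.digitize(z, bins)
      p_of_z = {i: spikes[idx == i].mean() for i in np.unique(idx) if (idx == i).sum() > 20}
      print("estimated ERF weights:", np.round(erf, 2))
      print("P(spike | z) for increasing z:", [round(float(p_of_z[i]), 2) for i in sorted(p_of_z)])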

  17. Accurate load prediction by BEM with airfoil data from 3D RANS simulations

    NASA Astrophysics Data System (ADS)

    Schneider, Marc S.; Nitzsche, Jens; Hennings, Holger

    2016-09-01

    In this paper, two methods for the extraction of airfoil coefficients from 3D CFD simulations of a wind turbine rotor are investigated, and these coefficients are used to improve the load prediction of a BEM code. The coefficients are extracted from a number of steady RANS simulations, using either averaging of velocities in annular sections, or an inverse BEM approach for determination of the induction factors in the rotor plane. It is shown that these 3D rotor polars are able to capture the rotational augmentation at the inner part of the blade as well as the load reduction by 3D effects close to the blade tip. They are used as input to a simple BEM code and the results of this BEM with 3D rotor polars are compared to the predictions of BEM with 2D airfoil coefficients plus common empirical corrections for stall delay and tip loss. While BEM with 2D airfoil coefficients produces a very different radial distribution of loads than the RANS simulation, the BEM with 3D rotor polars manages to reproduce the loads from RANS very accurately for a variety of load cases, as long as the blade pitch angle is not too different from the cases from which the polars were extracted.
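
    For context, the sketch below shows the classical BEM induction iteration for a single annulus with a toy linear polar; in the approach described above, the tabulated lift and drag values would instead come from the 3D rotor polars extracted from the RANS solutions (all geometry and polar numbers here are assumed).

      import math

      def bem_annulus(r=30.0, B=3, chord=2.0, twist_deg=4.0,
                      U0=8.0, omega=1.6, cl_slope=6.0, cd0=0.01):
          """Iterate axial (a) and tangential (ap) induction factors for one annulus."""
          a, ap = 0.3, 0.0
          sigma = B * chord / (2.0 * math.pi * r)               # local solidity
          for _ in range(200):
              phi = math.atan2(U0 * (1.0 - a), omega * r * (1.0 + ap))
              alpha = phi - math.radians(twist_deg)
              cl = cl_slope * alpha                             # toy polar (would be a 3D rotor polar)
              cd = cd0 + 0.01 * alpha ** 2
              cn = cl * math.cos(phi) + cd * math.sin(phi)
              ct = cl * math.sin(phi) - cd * math.cos(phi)
              a_new = 1.0 / (4.0 * math.sin(phi) ** 2 / (sigma * cn) + 1.0)
              ap_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1.0)
              if abs(a_new - a) < 1e-6 and abs(ap_new - ap) < 1e-6:
                  break
              a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)    # relaxed update
          return a, ap, math.degrees(alpha)

      print("a, a', angle of attack [deg]:", bem_annulus())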

  18. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina

    PubMed Central

    Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish

    2016-01-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143

  19. ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers

    SciTech Connect

    Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.

    2009-02-01

    A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.

  20. Numerical predictions of EML (electromagnetic launcher) system performance

    SciTech Connect

    Schnurr, N.M.; Kerrisk, J.F.; Davidson, R.F.

    1987-01-01

    The performance of an electromagnetic launcher (EML) depends on a large number of parameters, including the characteristics of the power supply, rail geometry, rail and insulator material properties, injection velocity, and projectile mass. EML system performance is frequently limited by structural or thermal effects in the launcher (railgun). A series of computer codes has been developed at the Los Alamos National Laboratory to predict EML system performance and to determine the structural and thermal constraints on barrel design. These codes include FLD, a two-dimensional electrostatic code used to calculate the high-frequency inductance gradient and surface current density distribution for the rails; TOPAZRG, a two-dimensional finite-element code that simultaneously analyzes thermal and electromagnetic diffusion in the rails; and LARGE, a code that predicts the performance of the entire EML system. The NIKE2D code, developed at the Lawrence Livermore National Laboratory, is used to perform structural analyses of the rails. These codes have been instrumental in the design of the Lethality Test System (LTS) at Los Alamos, which has an ultimate goal of accelerating a 30-g projectile to a velocity of 15 km/s. The capabilities of the individual codes and their coupling to perform a comprehensive analysis are discussed in relation to the LTS design. Numerical predictions are compared with experimental data and presented for the LTS prototype tests.
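
    The central scaling here is the rail force F = 0.5 L' I^2, with L' the inductance gradient; a minimal kinematics sketch using that relation (with an assumed current pulse, inductance gradient and barrel length, not the LTS design values) is:

      def launch(mass=0.030, L_prime=0.45e-6, barrel=5.0, peak_I=1.5e6, dt=1e-6):
          """Integrate projectile motion under F = 0.5 * L' * I(t)**2 along the barrel."""
          x = v = t = 0.0
          while x < barrel:
              I = peak_I * min(t / 1e-3, 1.0)        # 1 ms linear current ramp, then flat [A]
              a = 0.5 * L_prime * I * I / mass       # acceleration [m/s^2]
              v += a * dt
              x += v * dt
              t += dt
          return v, t

      v_exit, t_exit = launch()
      print(f"muzzle velocity ~ {v_exit / 1000.0:.1f} km/s after {t_exit * 1e3:.2f} ms")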

  1. Can radiation therapy treatment planning system accurately predict surface doses in postmastectomy radiation therapy patients?

    SciTech Connect

    Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay

    2012-07-01

    Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes such as hypofractionated breast therapy have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against doses measured by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS Xio(R) treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between the absorbed doses measured by TLD and those calculated by the TPS (p > 0.05, 1-tailed). Dose agreement to within 2.21% was found. The deviations from the calculated absorbed doses were overall larger (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool to assess surface dose. Our studies have shown that radiation treatment accuracy, expressed as a comparison between calculated doses (by TPS) and measured doses (by TLD dosimetry), can be accurately predicted for tangential treatment of the chest wall after mastectomy.

  2. Laser Hardening Prediction Tool Based On a Solid State Transformations Numerical Model

    SciTech Connect

    Martinez, S.; Ukar, E.; Lamikiz, A.

    2011-01-17

    This paper presents a tool to predict the hardened layer in selective laser hardening processes, where a laser beam heats the part locally while the bulk acts as a heat sink. The tool for accurately predicting the temperature field in the workpiece is a numerical model that combines a three-dimensional transient numerical solution for heating, in which different laser sources can be introduced, with a solid-state transformation model. The phase transformation was modeled using a kinetic model based on the Johnson-Mehl-Avrami equation. Considering this equation, an experimental adjustment of the transformation parameters was carried out to obtain the continuous heating transformation (CHT) diagrams. With the temperature field and the CHT diagrams, the model predicts the percentage of base material converted into austenite. These two parameters are used as a first step to estimate the depth of the hardened layer in the part. The model has been adjusted and validated with experimental data for DIN 1.2379, a cold work tool steel typically used in the mold and die making industry. This steel presents solid-state diffusive transformations at relatively low temperatures, and these transformations must be considered in order to obtain good accuracy of the temperature field prediction during the heating phase. For model validation, the surface temperature measured by pyrometry, the thermal field, and the hardened layer obtained from a metallographic study were compared with the model data, showing good agreement.
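
    As a minimal illustration of the Johnson-Mehl-Avrami ingredient, the fraction transformed under an isothermal hold can be written X = 1 - exp(-(k t)^n) with an Arrhenius rate k(T); the constants below are placeholders, not the experimentally adjusted CHT parameters for DIN 1.2379.

      import math

      def austenite_fraction(T_kelvin, t_seconds, k0=1.0e11, Q=2.1e5, n=1.5, R=8.314):
          """Isothermal JMA fraction X = 1 - exp(-(k t)^n) with k = k0 * exp(-Q / (R T))."""
          k = k0 * math.exp(-Q / (R * T_kelvin))
          return 1.0 - math.exp(-((k * t_seconds) ** n))

      for T in (1050.0, 1100.0, 1150.0):     # short laser dwell at several peak temperatures
          print(f"T = {T:.0f} K, t = 0.1 s -> austenite fraction ~ {austenite_fraction(T, 0.1):.2f}")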

  3. Laser Hardening Prediction Tool Based On a Solid State Transformations Numerical Model

    NASA Astrophysics Data System (ADS)

    Martínez, S.; Ukar, E.; Lamikiz, A.; Liebana, F.

    2011-01-01

    This paper presents a tool to predict the hardened layer in selective laser hardening processes, where a laser beam heats the part locally while the bulk acts as a heat sink. The tool for accurately predicting the temperature field in the workpiece is a numerical model that combines a three-dimensional transient numerical solution for heating, in which different laser sources can be introduced, with a solid-state transformation model. The phase transformation was modeled using a kinetic model based on the Johnson-Mehl-Avrami equation. Considering this equation, an experimental adjustment of the transformation parameters was carried out to obtain the continuous heating transformation (CHT) diagrams. With the temperature field and the CHT diagrams, the model predicts the percentage of base material converted into austenite. These two parameters are used as a first step to estimate the depth of the hardened layer in the part. The model has been adjusted and validated with experimental data for DIN 1.2379, a cold work tool steel typically used in the mold and die making industry. This steel presents solid-state diffusive transformations at relatively low temperatures, and these transformations must be considered in order to obtain good accuracy of the temperature field prediction during the heating phase. For model validation, the surface temperature measured by pyrometry, the thermal field, and the hardened layer obtained from a metallographic study were compared with the model data, showing good agreement.

  4. Improvement of NCEP Numerical Weather Prediction with Use of Satellite Land Measurements

    NASA Astrophysics Data System (ADS)

    Zheng, W.; Ek, M. B.; Wei, H.; Meng, J.; Dong, J.; Wu, Y.; Zhan, X.; Liu, J.; Jiang, Z.; Vargas, M.

    2014-12-01

    Over the past two decades, satellite measurements have been used increasingly in weather and climate prediction systems and have enabled considerable progress in accurate numerical weather and climate prediction. However, the utilization of satellite measurements over land remains far less than over ocean, because of high land surface inhomogeneity and the strong variability of surface emissivity in time and space. In this presentation, we discuss efforts to apply satellite land observations in the National Centers for Environmental Prediction (NCEP) operational Global Forecast System (GFS) in order to improve global numerical weather prediction (NWP). Our study focuses on the use of satellite data sets such as vegetation type and green vegetation fraction, assimilation of satellite products such as soil moisture retrievals, and direct radiance assimilation. Global soil moisture data products can be used to initialize soil moisture state variables in numerical weather, climate and hydrological forecast models. A global Soil Moisture Operational Product System (SMOPS) has been developed at NOAA-NESDIS to continuously provide global soil moisture data products to meet NOAA-NCEP's soil moisture data needs. The impact of the soil moisture data products on numerical weather forecasts is assessed using the NCEP GFS, in which the Ensemble Kalman Filter (EnKF) data assimilation algorithm has been implemented. For radiance assimilation, satellite radiance measurements in various spectral channels are assimilated through the JCSDA Community Radiative Transfer Model (CRTM) in the NCEP Gridpoint Statistical Interpolation (GSI) system, which requires the CRTM to calculate model brightness temperature (Tb) from model atmosphere profiles and surface parameters. In particular, for surface-sensitive (window) channels, Tb depends largely on surface parameters such as land surface skin temperature, soil
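
    As a minimal illustration of the EnKF ingredient mentioned above, here is a scalar, perturbed-observation update for a single soil-moisture value (all numbers synthetic; this is not the operational GFS/GSI configuration):

      import numpy as np

      rng = np.random.default_rng(1)
      ensemble = rng.normal(0.25, 0.04, size=20)     # prior ensemble of volumetric soil moisture
      obs, obs_err = 0.31, 0.03                      # retrieved soil moisture and its error std

      # Kalman gain from the ensemble variance, then a perturbed-observation update
      gain = ensemble.var(ddof=1) / (ensemble.var(ddof=1) + obs_err ** 2)
      analysis = ensemble + gain * (obs + rng.normal(0.0, obs_err, ensemble.size) - ensemble)

      print(f"prior mean {ensemble.mean():.3f} -> analysis mean {analysis.mean():.3f} (gain {gain:.2f})")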

  5. Numerical prediction of rail roughness growth on tangent railway tracks

    NASA Astrophysics Data System (ADS)

    Nielsen, J. C. O.

    2003-10-01

    Growth of railhead roughness (irregularities, waviness) is predicted through numerical simulation of dynamic train-track interaction on tangent track. The hypothesis is that wear is caused by longitudinal slip due to driven wheelsets, and that wear is proportional to the longitudinal frictional power in the contact patch. Starting from an initial roughness spectrum corresponding to a new or recently ground rail, an initial roughness profile is determined. Wheel-rail contact forces, creepages and wear for one wheelset passage are calculated as functions of location along a discretely supported track model. The calculated wear is scaled by a chosen number of wheelset passages and is then added to the initial roughness profile. Field observations of rail corrugation on a Dutch track are used to validate the simulation model. Results from the simulations predict a large roughness growth rate for wavelengths around 30-40 mm. The large growth in this wavelength interval is explained by a low track receptance near the sleepers around the pinned-pinned resonance frequency, in combination with a large number of driven passenger wheelset passages at uniform speed. The agreement between simulations and field measurements is good with respect to the dominating roughness wavelength and the annual wear rate. Remedies for reducing roughness growth are discussed.
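
    The bookkeeping behind such a roughness-growth prediction can be sketched as follows: a wear increment proportional to the frictional power per passage is scaled by a block of passages and subtracted from the profile (the contact response, its phase relation and all constants below are toy assumptions, not the validated simulation model).

      import numpy as np

      x = np.linspace(0.0, 1.2, 600)                        # position along the rail [m]
      profile = 1e-6 * np.sin(2.0 * np.pi * x / 0.035)      # initial roughness, 35 mm wavelength [m]

      # Toy frictional power of a driven wheelset, in antiphase with the profile;
      # the phase of this response decides growth versus attenuation.
      power = 50.0 * (1.0 - 0.4 * np.sin(2.0 * np.pi * x / 0.035))   # [W]

      k_wear = 1e-11        # worn depth per unit frictional work [m/J] (assumed)
      dwell = 0.01 / 20.0   # pass-over time of a 10 mm contact patch at 20 m/s [s]
      n_pass = 5.0e6        # block of driven wheelset passages

      profile -= k_wear * power * dwell * n_pass            # accumulate wear over the block
      profile -= profile.mean()                             # keep only the irregular part
      amplitude = 0.5 * (profile.max() - profile.min())
      print(f"roughness amplitude after the block: {amplitude * 1e6:.2f} micrometre (initially 1.00)")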

  6. Numerical predictions of hemodynamics following surgeries in cerebral aneurysms

    NASA Astrophysics Data System (ADS)

    Rayz, Vitaliy; Lawton, Michael; Boussel, Loic; Leach, Joseph; Acevedo, Gabriel; Halbach, Van; Saloner, David

    2014-11-01

    Large cerebral aneurysms present a danger of rupture or brain compression. In some cases, clinicians may attempt to change the pathological hemodynamics in order to inhibit disease progression. This can be achieved by changing the vascular geometry with open surgery or by deploying a stent-like flow-diverter device. Patient-specific CFD models can help evaluate treatment options by predicting flow regions that are likely to become occupied by thrombus (clot) following the procedure. In this study, alternative flow scenarios were modeled for several patients who underwent surgical treatment. Patient-specific geometries and flow boundary conditions were obtained from magnetic resonance angiography and velocimetry data. The Navier-Stokes equations were solved with the finite-volume solver Fluent. A porous-media approach was used to model flow-diverter devices. The advection-diffusion equation was solved in order to simulate contrast agent transport, and the results were used to evaluate changes in flow residence time. Thrombus layering was predicted in regions characterized by reduced velocities and shear stresses as well as increased flow residence time. The simulations indicated surgical options that could result in occlusion of vital arteries with thrombus. Numerical results were compared to experimental and clinical MRI data. The results demonstrate that image-based CFD models may help improve the outcome of surgeries in cerebral aneurysms. The authors acknowledge support from grant R01HL115267.

  7. Improvement of short-term numerical wind predictions

    NASA Astrophysics Data System (ADS)

    Bedard, Joel

    Geophysical Model Output Statistics (GMOS) are developed to optimize the use of NWP for complex sites. GMOS differs from other MOS approaches widely used by meteorological centers in the following respects: it takes into account the surrounding geophysical parameters, such as surface roughness, terrain height, etc., along with wind direction, and it can be applied directly without any training, although training will further improve the results. GMOS was applied to improve the Environment Canada GEM-LAM 2.5 km forecasts at North Cape (PEI, Canada): it improves the prediction RMSE by 25-30% for all time horizons and almost all meteorological conditions; the topographic signature of the forecast error due to insufficient grid refinement is eliminated; and the NWP combined with GMOS outperforms persistence from a 2 h horizon onward, instead of 4 h without GMOS. Finally, GMOS was applied at another site (Bouctouche, NB, Canada): similar improvements were observed, showing its general applicability. Keywords: wind energy, wind power forecast, numerical weather prediction, complex sites, model output statistics
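
    The MOS idea itself reduces to a regression from raw forecasts (plus extra predictors) to observations; a minimal sketch with synthetic data and only wind speed and direction as predictors is shown below (GMOS additionally brings in geophysical predictors such as roughness and terrain height, and can be applied without training).

      import numpy as np

      rng = np.random.default_rng(2)
      fc = rng.uniform(2.0, 15.0, 500)                           # raw forecast wind speed [m/s]
      wdir = rng.uniform(0.0, 2.0 * np.pi, 500)                  # forecast wind direction [rad]
      obs = 0.85 * fc + 1.2 * np.cos(wdir) + rng.normal(0.0, 0.8, 500)   # synthetic observations

      # Linear MOS: obs ~ b0 + b1*fc + b2*sin(dir) + b3*cos(dir)
      X = np.column_stack([np.ones_like(fc), fc, np.sin(wdir), np.cos(wdir)])
      coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
      corrected = X @ coef

      rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
      print(f"raw RMSE {rmse(obs - fc):.2f} m/s -> corrected RMSE {rmse(obs - corrected):.2f} m/s")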

  8. Predicting accurate fluorescent spectra for high molecular weight polycyclic aromatic hydrocarbons using density functional theory

    NASA Astrophysics Data System (ADS)

    Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.

    2016-10-01

    The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
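
    The automated matching step can be pictured as below: a constant empirical offset is applied to the computed peak positions and the library spectrum with the smallest residual is selected (the peak lists and the offset are hypothetical, not the study's data).

      import numpy as np

      def residual(calc_peaks, exp_peaks, offset):
          """Sum of squared differences between shifted calculated and experimental peaks."""
          return float(np.sum((np.sort(calc_peaks) + offset - np.sort(exp_peaks)) ** 2))

      library = {                                    # hypothetical experimental emission peaks [nm]
          "PAH A": np.array([402.0, 410.5, 427.0]),
          "PAH B": np.array([431.0, 440.0, 455.5]),
          "PAH C": np.array([388.0, 396.5, 417.0]),
      }
      calculated = np.array([417.0, 425.0, 441.0])   # computed peaks, systematically high (toy)
      offset = -15.0                                 # empirical correction for an overestimating method

      scores = {name: residual(calculated, peaks, offset) for name, peaks in library.items()}
      best = min(scores, key=scores.get)
      print("best match:", best, "| residuals:", {k: round(v, 1) for k, v in scores.items()})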

  9. An improved snow cover scheme for high-resolution numerical weather prediction models.

    NASA Astrophysics Data System (ADS)

    Bellaire, S.; Sauter, T.; Rotach, M. W.

    2015-12-01

    Numerical weather prediction (NWP) is the core of any operational weather service. The horizontal and vertical resolution of numerical weather prediction models has increased strongly during the last decades. However, numerical weather prediction in complex terrain is still challenging, because the underlying physics in the majority of subgrid-scale parameterizations has been developed for flat or idealized terrain. Weather prediction in alpine countries - such as Austria or Switzerland - is not only challenged by complex topography; for a good part of the year the ground is also snow covered, influencing boundary layer processes such as turbulence and radiation. Currently, most NWP models predict the formation and evolution of the seasonal mountain snow cover in a simplified way, i.e. often with a single-layer model. We validated the performance of the currently implemented snow cover scheme of the COSMO model (Consortium for Small-scale Modelling) in terms of the snow surface temperature, a key parameter for the evolution of the snow cover, as well as snow height. Snow surface temperature and snow height from 120 alpine weather stations located across the Swiss Alps were compared to the corresponding COSMO output. Surface temperature was found to be overestimated especially during the night (up to 10 °C, RMSE = 6.0 °C). Snow height tends to be underestimated during the ablation phase, i.e. the COSMO model becomes snow-free too early. A new multi-layer snow module, which minimizes the energy balance equation with respect to snow surface temperature and then iteratively solves the heat equation, has been implemented; it predicts the daily cycle of the snow surface temperature accurately (RMSE = 1.8 °C). Furthermore, by implementing densification, melt-freeze processes and water transport, snow height, especially during the ablation phase, was found to be in good agreement with the observations. Our suggested snow scheme shows promising potential not only for
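
    A minimal Python sketch of the energy-balance step described above: solve a deliberately simplified surface energy balance for the snow surface temperature by root finding. Latent heat, densification and melt-freeze processes of the actual multi-layer module are omitted, and all parameter values are illustrative.

    from scipy.optimize import brentq

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4

    def residual(ts, sw_net, lw_down, t_air, wind, t_snow_below, dz,
                 emissivity=0.99, rho_air=1.2, cp=1004.0, ch=1.5e-3, k_snow=0.3):
        lw_up = emissivity * SIGMA * ts**4                    # outgoing longwave
        sensible = rho_air * cp * ch * wind * (ts - t_air)    # turbulent sensible heat flux
        conduction = k_snow * (ts - t_snow_below) / dz        # flux into the snowpack
        return sw_net + emissivity * lw_down - lw_up - sensible - conduction

    # Clear night: no shortwave, weak incoming longwave -> strong radiative cooling.
    ts = brentq(residual, 200.0, 290.0, args=(0.0, 220.0, 268.0, 2.0, 270.0, 0.1))
    print(f"snow surface temperature: {ts - 273.15:.1f} C")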

  10. How accurately can we predict the melting points of drug-like compounds?

    PubMed

    Tetko, Igor V; Sushko, Yurii; Novotarskyi, Sergii; Patiny, Luc; Kondratov, Ivan; Petrenko, Alexander E; Charochkina, Larisa; Asiri, Abdullah M

    2014-12-22

    This article contributes a highly accurate model for predicting the melting points (MPs) of medicinal chemistry compounds. The model was developed using the largest published data set, comprising more than 47k compounds. The distributions of MPs in drug-like and drug lead sets showed that >90% of molecules melt within [50,250]°C. The final model calculated an RMSE of less than 33 °C for molecules from this temperature interval, which is the most important for medicinal chemistry users. This performance was achieved using a consensus model that performed calculations to a significantly higher accuracy than the individual models. We found that compounds with reactive and unstable groups were overrepresented among outlying compounds. These compounds could decompose during storage or measurement, thus introducing experimental errors. While filtering the data by removing outliers generally increased the accuracy of individual models, it did not significantly affect the results of the consensus models. The three distance-to-model measures analyzed did not allow us to flag molecules whose MP values fell outside the applicability domain of the model. We believe that this negative result and the public availability of data from this article will encourage future studies to develop better approaches to define the applicability domain of models. The final model, MP data, and identified reactive groups are available online at http://ochem.eu/article/55638.

  12. A survey of factors contributing to accurate theoretical predictions of atomization energies and molecular structures

    NASA Astrophysics Data System (ADS)

    Feller, David; Peterson, Kirk A.; Dixon, David A.

    2008-11-01

    High level electronic structure predictions of thermochemical properties and molecular structure are capable of accuracy rivaling the very best experimental measurements as a result of rapid advances in hardware, software, and methodology. Despite the progress, real world limitations require practical approaches designed for handling general chemical systems that rely on composite strategies in which a single, intractable calculation is replaced by a series of smaller calculations. As typically implemented, these approaches produce a final, or "best," estimate that is constructed from one major component, fine-tuned by multiple corrections that are assumed to be additive. Though individually much smaller than the original, unmanageable computational problem, these corrections are nonetheless extremely costly. This study presents a survey of the widely varying magnitude of the most important components contributing to the atomization energies and structures of 106 small molecules. It combines large Gaussian basis sets and coupled cluster theory up to quadruple excitations for all systems. In selected cases, the effects of quintuple excitations and/or full configuration interaction were also considered. The availability of reliable experimental data for most of the molecules permits an expanded statistical analysis of the accuracy of the approach. In cases where reliable experimental information is currently unavailable, the present results are expected to provide some of the most accurate benchmark values available.
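
    Composite "best estimate" atomization energies of the kind surveyed here are typically assembled from a dominant frozen-core CCSD(T) complete-basis-set term plus several smaller, assumed-additive corrections. A schematic form (the component labels are illustrative and do not reproduce the paper's exact recipe) is

    \[ \Sigma D_0 \approx \Delta E_{\mathrm{CBS}}^{\mathrm{CCSD(T)}} + \Delta E_{\mathrm{CV}} + \Delta E_{\mathrm{SR}} + \Delta E_{\mathrm{SO}} + \Delta E_{\mathrm{HO}} - \Delta E_{\mathrm{ZPE}}, \]

    with core-valence (CV), scalar relativistic (SR), atomic spin-orbit (SO), higher-order correlation (HO, e.g. quadruple excitations or full CI) and zero-point vibrational energy (ZPE) contributions.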

  13. Accurate prediction of band gaps and optical properties of HfO2

    NASA Astrophysics Data System (ADS)

    Ondračka, Pavel; Holec, David; Nečas, David; Zajíčková, Lenka

    2016-10-01

    We report on optical properties of various polymorphs of hafnia predicted within the framework of density functional theory. The full potential linearised augmented plane wave method was employed together with the Tran-Blaha modified Becke-Johnson potential (TB-mBJ) for exchange and the local density approximation for correlation. Unit cells of monoclinic, cubic and tetragonal crystalline hafnia, and a simulated annealing-based model of amorphous hafnia, were fully relaxed with respect to internal positions and lattice parameters. Electronic structures and band gaps for monoclinic, cubic, tetragonal and amorphous hafnia were calculated using three different TB-mBJ parametrisations and the results were critically compared with the available experimental and theoretical reports. Conceptual differences between a straightforward comparison of experimental measurements to a calculated band gap on the one hand, and to a whole electronic structure (density of electronic states) on the other hand, were pointed out, suggesting the latter should be used whenever possible. Finally, dielectric functions were calculated at two levels, using the random phase approximation without local field effects and with a more accurate Bethe-Salpeter equation (BSE) to account for excitonic effects. We conclude that a satisfactory agreement with experimental data for HfO2 was obtained only in the latter case.

  14. Accurate prediction of V1 location from cortical folds in a surface coordinate system

    PubMed Central

    Hinds, Oliver P.; Rajendran, Niranjini; Polimeni, Jonathan R.; Augustinack, Jean C.; Wiggins, Graham; Wald, Lawrence L.; Rosas, H. Diana; Potthast, Andreas; Schwartz, Eric L.; Fischl, Bruce

    2008-01-01

    Previous studies demonstrated substantial variability of the location of primary visual cortex (V1) in stereotaxic coordinates when linear volume-based registration is used to match volumetric image intensities (Amunts et al., 2000). However, other qualitative reports of V1 location (Smith, 1904; Stensaas et al., 1974; Rademacher et al., 1993) suggested a consistent relationship between V1 and the surrounding cortical folds. Here, the relationship between folds and the location of V1 is quantified using surface-based analysis to generate a probabilistic atlas of human V1. High-resolution (about 200 μm) magnetic resonance imaging (MRI) at 7 T of ex vivo human cerebral hemispheres allowed identification of the full area via the stria of Gennari: a myeloarchitectonic feature specific to V1. Separate, whole-brain scans were acquired using MRI at 1.5 T to allow segmentation and mesh reconstruction of the cortical gray matter. For each individual, V1 was manually identified in the high-resolution volume and projected onto the cortical surface. Surface-based intersubject registration (Fischl et al., 1999b) was performed to align the primary cortical folds of individual hemispheres to those of a reference template representing the average folding pattern. An atlas of V1 location was constructed by computing the probability of V1 inclusion for each cortical location in the template space. This probabilistic atlas of V1 exhibits low prediction error compared to previous V1 probabilistic atlases built in volumetric coordinates. The increased predictability observed under surface-based registration suggests that the location of V1 is more accurately predicted by the cortical folds than by the shape of the brain embedded in the volume of the skull. In addition, the high quality of this atlas provides direct evidence that surface-based intersubject registration methods are superior to volume-based methods at superimposing functional areas of cortex, and therefore are better

  15. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    PubMed

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility; however, further validity testing across a range of therapeutic footwear types is required.

  16. Unilateral Prostate Cancer Cannot be Accurately Predicted in Low-Risk Patients

    SciTech Connect

    Isbarn, Hendrik; Karakiewicz, Pierre I.; Vogel, Susanne

    2010-07-01

    Purpose: Hemiablative therapy (HAT) is increasing in popularity for treatment of patients with low-risk prostate cancer (PCa). The validity of this therapeutic modality, which exclusively treats PCa within a single prostate lobe, rests on accurate staging. We tested the accuracy of unilaterally unremarkable biopsy findings in cases of low-risk PCa patients who are potential candidates for HAT. Methods and Materials: The study population consisted of 243 men with clinical stage ≤T2a, a prostate-specific antigen (PSA) concentration of <10 ng/ml, a biopsy-proven Gleason sum of ≤6, and a maximum of 2 ipsilateral positive biopsy results out of 10 or more cores. All men underwent a radical prostatectomy, and pathology stage was used as the gold standard. Univariable and multivariable logistic regression models were tested for significant predictors of unilateral, organ-confined PCa. These predictors consisted of PSA, %fPSA (defined as the quotient of free [uncomplexed] PSA divided by the total PSA), clinical stage (T2a vs. T1c), gland volume, and number of positive biopsy cores (2 vs. 1). Results: Despite unilateral stage at biopsy, bilateral or even non-organ-confined PCa was reported in 64% of all patients. In multivariable analyses, no variable could clearly and independently predict the presence of unilateral PCa. This was reflected in an overall accuracy of 58% (95% confidence interval, 50.6-65.8%). Conclusions: Two-thirds of patients with unilateral low-risk PCa, confirmed by clinical stage and biopsy findings, have bilateral or non-organ-confined PCa at radical prostatectomy. This alarming finding questions the safety and validity of HAT.
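
    A minimal Python sketch (synthetic data, hypothetical coding of the predictors) of the kind of multivariable logistic regression tested above, predicting unilateral organ-confined disease from PSA, %fPSA, clinical stage, gland volume and number of positive cores.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 243
    X = np.column_stack([
        rng.uniform(1, 10, n),     # PSA (ng/ml)
        rng.uniform(5, 35, n),     # %fPSA
        rng.integers(0, 2, n),     # clinical stage: 1 = T2a, 0 = T1c
        rng.uniform(20, 80, n),    # gland volume (ml)
        rng.integers(1, 3, n),     # positive biopsy cores (1 or 2)
    ])
    y = rng.integers(0, 2, n)      # 1 = unilateral organ-confined at pathology (synthetic)

    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"cross-validated accuracy: {acc:.2f}")  # near chance for random labels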

  17. Improving DOE-2's RESYS routine: User defined functions to provide more accurate part load energy use and humidity predictions

    SciTech Connect

    Henderson, Hugh I.; Parker, Danny; Huang, Yu J.

    2000-08-04

    In hourly energy simulations, it is important to properly predict the performance of air conditioning systems over a range of full and part load operating conditions. An important component of these calculations is to properly consider the performance of the cycling air conditioner and how it interacts with the building. This paper presents improved approaches to properly account for the part load performance of residential and light commercial air conditioning systems in DOE-2. First, more accurate correlations are given to predict the degradation of system efficiency at part load conditions. In addition, a user-defined function for RESYS is developed that provides improved predictions of air conditioner sensible and latent capacity at part load conditions. The user function also provides more accurate predictions of space humidity by adding "lumped" moisture capacitance into the calculations. The improved cooling coil model and the addition of moisture capacitance predict humidity swings that are more representative of the performance observed in real buildings.
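
    For context, a minimal Python sketch of the classic linear part-load degradation form often used in hourly simulation tools (shown only to illustrate the concept; the improved correlations developed in the paper are not reproduced here).

    def part_load_factor(plr, cd=0.25):
        # Ratio of part-load to steady-state efficiency; cd is a degradation
        # coefficient (the 0.25 default is purely illustrative).
        return 1.0 - cd * (1.0 - plr)

    def average_input_power(load, capacity, rated_power, cd=0.25):
        plr = min(max(load / capacity, 0.0), 1.0)        # part-load ratio
        runtime_fraction = plr / part_load_factor(plr, cd)
        return rated_power * runtime_fraction            # cycling-adjusted electric input

    print(average_input_power(load=3.0, capacity=10.0, rated_power=3.5))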

  18. Numerical simulation of pharyngeal airflow applied to obstructive sleep apnea: effect of the nasal cavity in anatomically accurate airway models.

    PubMed

    Cisonni, Julien; Lucey, Anthony D; King, Andrew J C; Islam, Syed Mohammed Shamsul; Lewis, Richard; Goonewardene, Mithran S

    2015-11-01

    Repetitive brief episodes of soft-tissue collapse within the upper airway during sleep characterize obstructive sleep apnea (OSA), an extremely common and disabling disorder. Failure to maintain the patency of the upper airway is caused by the combination of sleep-related loss of compensatory dilator muscle activity and aerodynamic forces promoting closure. The prediction of soft-tissue movement in patient-specific airway 3D mechanical models is emerging as a useful contribution to clinical understanding and decision making. Such modeling requires reliable estimations of the pharyngeal wall pressure forces. While nasal obstruction has been recognized as a risk factor for OSA, the need to include the nasal cavity in upper-airway models for OSA studies requires consideration, as it is most often omitted because of its complex shape. A quantitative analysis of the flow conditions generated by the nasal cavity and the sinuses during inspiration upstream of the pharynx is presented. Results show that adequate velocity boundary conditions and simple artificial extensions of the flow domain can reproduce the essential effects of the nasal cavity on the pharyngeal flow field. Therefore, the overall complexity and computational cost of accurate flow predictions can be reduced.

  19. Numerical Prediction of Magnetic Cryogenic Propellant Storage in Reduced Gravity

    NASA Astrophysics Data System (ADS)

    Marchetta, J. G.; Hochstein, J. I.

    2002-01-01

    Preliminary studies have provided strong evidence that a magnetic positioning system may be a feasible alternative technology for use in the management of cryogenic propellants onboard spacecraft. The results of these preliminary studies have indicated that further investigation of the physical processes and potential reliability of such a system is required. The utility of magnetic fields as an alternative method in cryogenic propellant management is dependent on its reliability and flexibility. Simulations and experiments have previously yielded evidence in support of the magnetic positive positioning (MPP) process to predictably reorient LOX for a variety of initial conditions. Presently, though, insufficient evidence has been established to support the use of magnetic fields with respect to the long-term storage of cryogenic propellants. Current modes of propellant storage have met with a moderate level of success and are well suited for short duration missions using monopropellants. However, the storage of cryogenic propellants warrants additional consideration for long-term missions. For example, propellant loss during storage is due to vaporization by incident solar radiation, and the vaporized ullage must be vented to prevent excessive pressurization of the tank. Ideally, positioning the fluid in the center of the tank away from the tank wall will reduce vaporization by minimizing heat transfer through the tank wall to the liquid. A second issue involves the capability of sustaining a stable fluid configuration at tank center under varying g-levels or perturbations during propellant storage. Results presented herein include comparisons illustrating the influence of gravity, fluid volume, and the magnetic field on a paramagnetic fluid, LOX. The magnetic Bond number is utilized as a predictive correlating parameter for investigating these processes. A dimensionless relationship between the magnetic Bond number (Bom) and the gravitational Bond number (Bo) was sought with the goal of
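
    For context, one common way (an assumption here; the paper's exact definitions are not reproduced) to form the two correlating parameters mentioned above is to compare the gravitational and magnetic (Kelvin) body forces with the capillary force for a linearly magnetizable liquid of susceptibility \chi:

    \[ \mathrm{Bo} = \frac{\rho\, a\, L^{2}}{\sigma}, \qquad \mathrm{Bo}_{m} = \frac{\mu_{0}\, \chi\, H\, \lvert \nabla H \rvert\, L^{2}}{\sigma}, \]

    where \rho is the liquid density, a the residual acceleration, L a characteristic length such as the tank radius, \sigma the surface tension, \mu_{0} the vacuum permeability and H the applied magnetic field strength.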

  20. Sub-grid Parameterization of Cumulus Vertical Velocities for Climate and Numerical Weather Prediction Models

    NASA Astrophysics Data System (ADS)

    Cooke, William; Donner, Leo

    2015-04-01

    Microphysical and aerosol processes determine the magnitude of climate forcing by aerosol-cloud interactions, are central aspects of cloud-climate feedback, and are important elements in weather systems for which accurate forecasting is a major goal of numerical weather prediction. Realistic simulation of these processes demands not only accurate microphysical and aerosol process representations but also realistic simulation of the vertical motions in which the aerosols and microphysics act. Aerosol activation, for example, is a strong function of vertical velocity. Cumulus parameterizations for climate and numerical weather prediction models have recently begun to include vertical velocities among the statistics they predict. These vertical velocities have been subject to only limited evaluation using observed vertical velocities. Deployments of multi-Doppler radars and dual-frequency profilers in recent field campaigns have substantially increased the observational base of cumulus vertical velocities, which for decades had been restricted mostly to GATE observations. Observations from TWP-ICE (Darwin, Australia) and MC3E (central United States) provide previously unavailable information on the vertical structure of cumulus vertical velocities and observations in differing synoptic contexts from those available in the past. They also provide an opportunity to independently evaluate cumulus parameterizations with vertical velocities tuned to earlier GATE observations. This presentation will compare vertical velocities observed in TWP-ICE and MC3E with cumulus vertical velocities using the parameterization in the GFDL CM3 climate model. Single-column results indicate parameterized vertical velocities are frequently greater than observed. Errors in parameterized vertical velocities exhibit similarities to vertical velocities explicitly simulated by cloud-system resolving models, and underlying issues in the treatment of microphysics may be important for both. The

  1. Downscaling surface wind predictions from numerical weather prediction models in complex terrain with WindNinja

    NASA Astrophysics Data System (ADS)

    Wagenbrenner, Natalie S.; Forthofer, Jason M.; Lamb, Brian K.; Shannon, Kyle S.; Butler, Bret W.

    2016-04-01

    Wind predictions in complex terrain are important for a number of applications. Dynamic downscaling of numerical weather prediction (NWP) model winds with a high-resolution wind model is one way to obtain a wind forecast that accounts for local terrain effects, such as wind speed-up over ridges, flow channeling in valleys, flow separation around terrain obstacles, and flows induced by local surface heating and cooling. In this paper we investigate the ability of a mass-consistent wind model to downscale near-surface wind predictions from four NWP models in complex terrain. Model predictions are compared with surface observations from a tall, isolated mountain. Downscaling improved near-surface wind forecasts under high-wind (near-neutral atmospheric stability) conditions. Results were mixed during upslope and downslope (non-neutral atmospheric stability) flow periods, although wind direction predictions generally improved with downscaling. This work constitutes an evaluation of a diagnostic wind model at unprecedentedly high spatial resolution in terrain with topographical ruggedness approaching that of typical landscapes in the western US susceptible to wildland fire.

  2. Numerical Weather Predictions Evaluation Using Spatial Verification Methods

    NASA Astrophysics Data System (ADS)

    Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    In recent years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To address those limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).

  3. Numerical simulation and prediction of coastal ocean circulation

    SciTech Connect

    Chen, P.

    1992-01-01

    Numerical simulation and prediction of coastal ocean circulation have been conducted in three cases. 1. A process-oriented modeling study is conducted to study the interaction of a western boundary current (WBC) with coastal water, and its responses to upstream topographic irregularities. It is hypothesized that the interaction of propagating WBC frontal waves and topographic Rossby waves is responsible for upstream variability. 2. A simulation of meanders and eddies in the Norwegian Coastal Current (NCC) for February and March of 1988 is conducted with a newly developed nested dynamic interactive model. The model employs a coarse-grid, large domain to account for non-local forcing and a fine-grid nested domain to resolve meanders and eddies. The model is forced by wind stresses, heat fluxes and atmospheric pressure corresponding to February and March of 1988, and accounts for river/fjord discharges, open ocean inflow and outflow, and M2 tides. The simulation reproduced fairly well the observed circulation, tides, and salinity features in the North Sea, Norwegian Trench and NCC region in the large domain and fairly realistic meanders and eddies in the NCC in the nested region. 3. A methodology for practical coastal ocean hindcast/forecast is developed, taking advantage of the disparate time scales of the various forcings and considering wind to be the dominant factor affecting density fluctuation on the time scale of 1 to 10 days. The density field obtained from a prognostic simulation is analyzed by the empirical orthogonal function (EOF) method, and correlated with the wind; this information is then used to drive a circulation model which excludes the density calculation. The method is applied to hindcast the circulation in the New York Bight for the spring and summer seasons of 1988. The hindcast fields compare favorably with the results obtained from the prognostic circulation model.
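
    A minimal Python sketch (assumed, simplified) of the EOF step of the hindcast methodology: decompose the density anomaly field into empirical orthogonal functions and relate the leading principal components to the wind forcing by least squares.

    import numpy as np

    def eof_decompose(density, n_modes=3):
        # density: array of shape (n_times, n_points)
        anomaly = density - density.mean(axis=0)
        u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
        pcs = u[:, :n_modes] * s[:n_modes]          # principal component time series
        eofs = vt[:n_modes]                         # spatial patterns
        explained = s[:n_modes] ** 2 / (s ** 2).sum()
        return pcs, eofs, explained

    def regress_on_wind(pcs, wind_uv):
        # wind_uv: array of shape (n_times, 2); the fitted coefficients can then
        # map forecast winds back to a predicted density field for the hindcast.
        coef, *_ = np.linalg.lstsq(wind_uv, pcs, rcond=None)
        return coef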

  4. Numerical Predictions of Wind Turbine Power and Aerodynamic Loads for the NREL Phase II and IV Combined Experiment Rotor

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Johnson, Wayne; vanDam, C. P.; Chao, David D.; Cortes, Regina; Yee, Karen

    1999-01-01

    Accurate, reliable and robust numerical predictions of wind turbine rotor power remain a challenge to the wind energy industry. The literature reports various methods that compare predictions to experiments. The methods vary from Blade Element Momentum Theory (BEM) and Vortex Lattice (VL) methods to variants of Reynolds-averaged Navier-Stokes (RaNS). The BEM and VL methods consistently show discrepancies in predicting rotor power at higher wind speeds, mainly due to inadequacies with inboard stall and stall delay models. The RaNS methodologies show promise in predicting blade stall. However, inaccurate rotor vortex wake convection, boundary layer turbulence modeling and grid resolution have limited their accuracy. In addition, the inherently unsteady stalled flow conditions become computationally expensive for even the best endowed research labs. Although numerical power predictions have been compared to experiment, the availability of good wind turbine data sufficient for code validation has been limited. This paper presents experimental data that has been extracted from the IEA Annex XIV download site for the NREL Combined Experiment phase II and phase IV rotor. In addition, the comparisons will show data that has been further reduced into steady wind and zero yaw conditions suitable for comparisons to "steady wind" rotor power predictions. In summary, the paper will present and discuss the capabilities and limitations of the three numerical methods and make available a database of experimental data suitable to help other numerical methods practitioners validate their own work.

  5. Earthquake ground motion prediction for real sedimentary basins: which numerical schemes are applicable?

    NASA Astrophysics Data System (ADS)

    Moczo, P.; Kristek, J.; Galis, M.; Pazak, P.

    2009-12-01

    Numerical prediction of earthquake ground motion in sedimentary basins and valleys often has to account for P-wave to S-wave speed ratios (Vp/Vs) as large as 5 and even larger, mainly in sediments below the groundwater level. The ratio can attain values larger than 10 in unconsolidated sediments (e.g. in Ciudad de México). In the process of developing 3D optimally-accurate finite-difference schemes we encountered a serious problem with accuracy in media with large Vp/Vs ratio. This led us to investigate the fundamental reasons for the inaccuracy. In order to identify the basic inherent aspects of the numerical schemes responsible for their behavior with varying Vp/Vs ratio, we restricted ourselves to the most basic 2nd-order 2D numerical schemes on a uniform grid in a homogeneous medium. Although basic in the specified sense, the schemes comprise the decisive features for accuracy of a wide class of numerical schemes. We investigated 6 numerical schemes: finite-difference_displacement_conventional grid (FD_D_CG), finite-element_Lobatto integration (FE_L), finite-element_Gauss integration (FE_G), finite-difference_displacement-stress_partly-staggered grid (FD_DS_PSG), finite-difference_displacement-stress_staggered grid (FD_DS_SG), and finite-difference_velocity-stress_staggered grid (FD_VS_SG). We defined and calculated local errors of the schemes in amplitude and polarization. Because different schemes use different time steps, they need different numbers of time levels to calculate the solution for a desired time window. Therefore, we normalized errors for a unit time. The normalization allowed for a direct comparison of errors of different schemes. Extensive numerical calculations for wide ranges of values of the Vp/Vs ratio, spatial sampling ratio, stability ratio, and the entire range of directions of propagation with respect to the spatial grid led to interesting and surprising findings. Accuracy of FD_D_CG, FE_L and FE_G strongly depends on the Vp/Vs ratio. The schemes are not

  6. Numerical Prediction Of Elastic Springback In An Automotive Complex Structural Part

    NASA Astrophysics Data System (ADS)

    Fratini, Livan; Ingarao, Giuseppe; Micari, Fabrizio; Lo Franco, Andrea

    2007-05-01

    The routing and production of 3D complex parts for automotive applications is characterized by springback phenomena affecting the final geometry of the components both after the stamping operations and after the trimming ones. FE analyses have to assure effectiveness and consistency in order to be utilized as a design tool, coupled with proper compensating techniques, allowing the desired geometry to be obtained at the end of the production sequence. In the present paper the full routing of a DP 600 steel automotive structural part is considered and the springback phenomena occurring after forming and trimming are investigated through FE analyses utilizing two different commercial codes. Although finite element analysis is successful in simulating industrial sheet forming operations, accurate and reliable numerical prediction of springback has not been widely demonstrated. In this paper the influence of the main numerical parameters, i.e. the type of shell element utilized and the number of integration points through the thickness, has been considered with the aim of improving the effectiveness and reliability of the numerical results. The obtained results have been compared with the experimental evidence derived from CMM acquisitions.

  7. Accurate prediction model of bead geometry in crimping butt of the laser brazing using generalized regression neural network

    NASA Astrophysics Data System (ADS)

    Rong, Y. M.; Chang, Y.; Huang, Y.; Zhang, G. J.; Shao, X. Y.

    2015-12-01

    Few studies have concentrated on the prediction of the bead geometry for laser brazing with crimping butt joints. This paper addressed the accurate prediction of the bead profile by developing a generalized regression neural network (GRNN) algorithm. Firstly, a GRNN model was developed and trained to decrease the prediction error that may be influenced by the sample size. Then the prediction accuracy was demonstrated by comparing with results reported in other articles and with a back propagation artificial neural network (BPNN) algorithm. Eventually the reliability and stability of the GRNN model were discussed in terms of the average relative error (ARE), mean square error (MSE) and root mean square error (RMSE); the maximum ARE and MSE were 6.94% and 0.0303, clearly less than those predicted by BPNN (14.28% and 0.0832). It was thus shown that the prediction accuracy was improved by at least a factor of two, and the stability was also markedly increased.
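
    A minimal Python sketch of a generalized regression neural network (GRNN), i.e. a Nadaraya-Watson kernel-weighted average of the training targets with a single smoothing parameter sigma. The process parameters and bead-geometry values below are placeholders, not the paper's data.

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.5):
        X_train = np.asarray(X_train, dtype=float)
        y_train = np.asarray(y_train, dtype=float)
        preds = []
        for x in np.atleast_2d(np.asarray(X_query, dtype=float)):
            d2 = np.sum((X_train - x) ** 2, axis=1)        # squared distances
            w = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian kernel weights
            preds.append(np.dot(w, y_train) / (w.sum() + 1e-12))
        return np.array(preds)

    # Illustrative inputs (e.g. laser power, speed, gap) and bead width targets:
    X = [[2.0, 3.0, 0.10], [2.2, 2.8, 0.12], [1.8, 3.2, 0.08]]
    y = [1.9, 2.1, 1.7]
    print(grnn_predict(X, y, [[2.1, 2.9, 0.11]]))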

  8. Coupling a Mesoscale Numerical Weather Prediction Model with Large-Eddy Simulation for Realistic Wind Plant Aerodynamics Simulations (Poster)

    SciTech Connect

    Draxl, C.; Churchfield, M.; Mirocha, J.; Lee, S.; Lundquist, J.; Michalakes, J.; Moriarty, P.; Purkayastha, A.; Sprague, M.; Vanderwende, B.

    2014-06-01

    Wind plant aerodynamics are influenced by a combination of microscale and mesoscale phenomena. Incorporating mesoscale atmospheric forcing (e.g., diurnal cycles and frontal passages) into wind plant simulations can lead to a more accurate representation of microscale flows, aerodynamics, and wind turbine/plant performance. Our goal is to couple a numerical weather prediction model that can represent mesoscale flow [specifically the Weather Research and Forecasting model] with a microscale LES model (OpenFOAM) that can predict microscale turbulence and wake losses.

  9. Combining Evolutionary Information and an Iterative Sampling Strategy for Accurate Protein Structure Prediction.

    PubMed

    Braun, Tatjana; Koehler Leman, Julia; Lange, Oliver F

    2015-12-01

    Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in form of intra-protein residue-residue contacts. Following this seminal result, much effort is put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained by contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 out of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs.

  10. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    NASA Astrophysics Data System (ADS)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose-volume histograms (DVHs) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose-volume parameters were in closer agreement with the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be
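
    A minimal Python sketch (synthetic data, assumed feature names) of training a regressor to predict MLC positional error from plan-derived leaf-motion features, in the spirit of the approach described above.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 5000
    planned_position = rng.uniform(-50, 50, n)       # mm
    leaf_velocity = rng.uniform(0, 25, n)            # mm/s
    toward_isocenter = rng.integers(0, 2, n)         # 1 if leaf moves towards isocenter
    X = np.column_stack([planned_position, leaf_velocity, toward_isocenter])
    # Synthetic target: delivered minus planned leaf position (mm).
    error = 0.02 * leaf_velocity + 0.1 * toward_isocenter + rng.normal(0, 0.05, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, error, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
    print(f"held-out RMSE: {rmse:.3f} mm")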

  11. Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes

    SciTech Connect

    Margot Gerritsen

    2008-10-31

    Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline based solver for gas injection processes that is computationally very attractive: as compared to traditional Eulerian solvers in use by industry it computes solutions with a computational speed orders of magnitude higher and a comparable accuracy provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We designed several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids

  12. How accurately can subject-specific finite element models predict strains and strength of human femora? Investigation using full-field measurements.

    PubMed

    Grassi, Lorenzo; Väänänen, Sami P; Ristinmaa, Matti; Jurvelin, Jukka S; Isaksson, Hanna

    2016-03-21

    Subject-specific finite element models have been proposed as a tool to improve fracture risk assessment in individuals. A thorough laboratory validation against experimental data is required before introducing such models in clinical practice. Results from digital image correlation can provide full-field strain distribution over the specimen surface during in vitro test, instead of at a few pre-defined locations as with strain gauges. The aim of this study was to validate finite element models of human femora against experimental data from three cadaver femora, both in terms of femoral strength and of the full-field strain distribution collected with digital image correlation. The results showed a high accuracy between predicted and measured principal strains (R² = 0.93, RMSE = 10%, 1600 validated data points per specimen). Femoral strength was predicted using a rate dependent material model with specific strain limit values for yield and failure. This provided an accurate prediction (<2% error) for two out of three specimens. In the third specimen, an accidental change in the boundary conditions occurred during the experiment, which compromised the femoral strength validation. The achieved strain accuracy was comparable to that obtained in state-of-the-art studies which validated their prediction accuracy against 10-16 strain gauge measurements. Fracture force was accurately predicted, with the predicted failure location being very close to the experimental fracture rim. Despite the low sample size and the single loading condition tested, the present combined numerical-experimental method showed that finite element models can predict femoral strength by providing a thorough description of the local bone mechanical response. PMID:26944687

  13. Forecasting irrigation demand by assimilating satellite images and numerical weather predictions

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Medina, Hanoi; Villani, Paolo; Falanga Bolognesi, Salvatore; D'Urso, Guido; Battista Chirico, Giovanni

    2016-04-01

    Forecasting irrigation water demand, with small predictive uncertainty in the short-medium term, is fundamental for an efficient planning of water resource allocation among multiple users and for decreasing water and energy consumption. In this study we present an innovative system for forecasting irrigation water demand, applicable at different spatial scales: from the farm level to the irrigation district level. The forecast system is centred on a crop growth model assimilating data from satellite images and numerical weather forecasts, according to a stochastic ensemble-based approach. Different sources of uncertainty affecting model predictions are represented by an ensemble of model trajectories, each generated by a possible realization of the model components (model parameters, input weather data and model state variables). The crop growth model is based on a set of simplified analytical relations, with the aim to assess biomass, leaf area index (LAI) growth and evapotranspiration rate with a daily time step. Within the crop growth model, LAI dynamics are governed by temperature and leaf dry matter supply, according to the development stage of the crop. The model assimilates LAI data retrieved from VIS-NIR high-resolution multispectral satellite images. Numerical weather model outputs are those from the European limited area ensemble prediction system (COSMO-LEPS), which provides forecasts up to five days with a spatial resolution of seven kilometres. Weather forecasts are sequentially bias corrected based on data from ground weather stations. The forecasting system is evaluated in experimental areas of southern Italy during three irrigation seasons. The performance analysis shows very accurate irrigation water demand forecasts, which make the proposed system a valuable support for water planning and saving at farm level as well as for water management at larger spatial scales.

  14. Session on techniques and resources for storm-scale numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Droegemeier, Kelvin

    1993-01-01

    The session on techniques and resources for storm-scale numerical weather prediction is reviewed. The recommendations of this group are broken down into three areas: modeling and prediction, data requirements in support of modeling and prediction, and data management. The current status, modeling and technological recommendations, data requirements in support of modeling and prediction, and data management are addressed.

  15. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

    In this current study, the flow field in the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was simulated. This study examined four sets of simulations. The first two sets are for an individual NGV and for an individual rotor. The last two sets use a multiple reference frames approach for a complete turbine stage with two different interface models: a steady circumferential average approach called a mixing plane model, and a time accurate flow simulation approach called a sliding mesh model. The NGV passage flow field was simulated using a three-dimensional Reynolds Averaged Navier-Stokes (RANS) finite volume solver with a standard k-ε turbulence model. The mean flow distributions on the NGV surfaces and endwall surfaces were computed. The numerical solutions indicate that two passage vortices begin to be observed approximately at the mid axial chord of the NGV suction surface. The first vortex is a casing passage vortex which occurs at the corner formed by the NGV suction surface and the casing. This vortex is created by the interaction of the passage flow and the radially inward flow, while the second vortex, the hub passage vortex, is observed near the hub. These two vortices become stronger towards the NGV trailing edge. By comparing the results from the X/Cx = 1.025 plane and the X/Cx = 1.09 plane, it can be concluded that the NGV wake decays rapidly within a short axial distance downstream of the NGV. For the rotor, a set of simulations was carried out to examine the flow fields associated with different pressure side tip extension configurations, which are designed to reduce the tip leakage flow. The simulation results show that significant reductions in tip leakage mass flow rate and aerodynamic loss are possible by using suitable tip platform extensions located near the pressure side corner of the blade tip. The computations used realistic turbine rotor inlet flow conditions in a linear cascade arrangement

  16. Multi-omics integration accurately predicts cellular state in unexplored conditions for Escherichia coli

    PubMed Central

    Kim, Minseung; Rai, Navneet; Zorraquino, Violeta; Tagkopoulos, Ilias

    2016-01-01

    A significant obstacle in training predictive cell models is the lack of integrated data sources. We develop semi-supervised normalization pipelines and perform experimental characterization (growth, transcriptional, proteome) to create Ecomics, a consistent, quality-controlled multi-omics compendium for Escherichia coli with cohesive meta-data information. We then use this resource to train a multi-scale model that integrates four omics layers to predict genome-wide concentrations and growth dynamics. The genetic and environmental ontology reconstructed from the omics data is substantially different and complementary to the genetic and chemical ontologies. The integration of different layers confers an incremental increase in the prediction performance, as does the information about the known gene regulatory and protein-protein interactions. The predictive performance of the model ranges from 0.54 to 0.87 for the various omics layers, which far exceeds various baselines. This work provides an integrative framework of omics-driven predictive modelling that is broadly applicable to guide biological discovery. PMID:27713404

  17. A survey of numerical models for wind prediction

    NASA Technical Reports Server (NTRS)

    Schonfeld, D.

    1980-01-01

    A literature review is presented of the work done in the numerical modeling of wind flows. Pertinent computational techniques are described, as well as the necessary assumptions used to simplify the governing equations. A steady state model is outlined, based on the data obtained at the Deep Space Communications complex at Goldstone, California.

  18. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    NASA Astrophysics Data System (ADS)

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  19. An endometrial gene expression signature accurately predicts recurrent implantation failure after IVF

    PubMed Central

    Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.

    2016-01-01

    The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81) we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing intervention. PMID:26797113

  20. Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction.

    PubMed

    Krasnopolsky, Vladimir M; Fox-Rabinovitz, Michael S

    2006-03-01

    A new practical application of neural network (NN) techniques to environmental numerical modeling has been developed. Namely, a new type of numerical model, a complex hybrid environmental model based on a synergetic combination of deterministic and machine learning model components, has been introduced. Conceptual and practical possibilities of developing hybrid models are discussed in this paper for applications to climate modeling and weather prediction. The approach presented here uses NN as a statistical or machine learning technique to develop highly accurate and fast emulations for time consuming model physics components (model physics parameterizations). The NN emulations of the most time consuming model physics components, short and long wave radiation parameterizations or full model radiation, presented in this paper are combined with the remaining deterministic components (like model dynamics) of the original complex environmental model--a general circulation model or global climate model (GCM)--to constitute a hybrid GCM (HGCM). The parallel GCM and HGCM simulations produce very similar results but HGCM is significantly faster. The speed-up of model calculations opens the opportunity for model improvement. Examples of developed HGCMs illustrate the feasibility and efficiency of the new approach for modeling complex multidimensional interdisciplinary systems.
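
    A minimal Python sketch of the emulation idea: fit a neural network to input/output pairs of an expensive physics routine (here a stand-in function), then call the fast emulator in place of the original inside the time loop. This is purely illustrative and is not the operational GCM implementation.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_parameterization(profiles):
        # Stand-in for a costly radiative-transfer call (placeholder physics).
        return np.tanh(profiles).sum(axis=1)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20000, 30))       # e.g. temperature/humidity profiles
    y = expensive_parameterization(X)

    emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                            random_state=0).fit(X, y)
    X_new = rng.normal(size=(5, 30))
    print(np.c_[expensive_parameterization(X_new), emulator.predict(X_new)])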

  1. Accurate ab initio prediction of NMR chemical shifts of nucleic acids and nucleic acids/protein complexes

    PubMed Central

    Victora, Andrea; Möller, Heiko M.; Exner, Thomas E.

    2014-01-01

    NMR chemical shift predictions based on empirical methods are nowadays indispensable tools during resonance assignment and 3D structure calculation of proteins. However, owing to the very limited statistical data basis, such methods are still in their infancy in the field of nucleic acids, especially when non-canonical structures and nucleic acid complexes are considered. Here, we present an ab initio approach for predicting proton chemical shifts of arbitrary nucleic acid structures based on state-of-the-art fragment-based quantum chemical calculations. We tested our prediction method on a diverse set of nucleic acid structures including double-stranded DNA, hairpins, DNA/protein complexes and chemically-modified DNA. Overall, our quantum chemical calculations yield highly accurate predictions with mean absolute deviations of 0.3–0.6 ppm and correlation coefficients (r²) usually above 0.9. This will allow for identifying misassignments and validating 3D structures. Furthermore, our calculations reveal that chemical shifts of protons involved in hydrogen bonding are predicted significantly less accurately. This is in part caused by insufficient inclusion of solvation effects. However, it also points toward shortcomings of current force fields used for structure determination of nucleic acids. Our quantum chemical calculations could therefore provide input for force field optimization. PMID:25404135

  2. Change in body mass accurately and reliably predicts change in body water after endurance exercise.

    PubMed

    Baker, Lindsay B; Lang, James A; Kenney, W Larry

    2009-04-01

    This study tested the hypothesis that the change in body mass (ΔBM) accurately reflects the change in total body water (ΔTBW) after prolonged exercise. Subjects (4 men, 4 women; 22-36 years; 66 ± 10 kg) completed 2 h of interval running (70% VO2max) in the heat (30 °C), followed by a run to exhaustion (85% VO2max), and then sat for a 1 h recovery period. During exercise and recovery, subjects drank fluid or no fluid to maintain their BM, increase BM by 2%, or decrease BM by 2 or 4% in separate trials. Pre- and post-experiment TBW were determined using the deuterium oxide (D2O) dilution technique and corrected for D2O lost in urine, sweat, breath vapor, and nonaqueous hydrogen exchange. The average difference between ΔBM and ΔTBW was 0.07 ± 1.07 kg (paired t test, P = 0.29). The slope and intercept of the relation between ΔBM and ΔTBW were not significantly different from 1 and 0, respectively. The intraclass correlation coefficient between ΔBM and ΔTBW was 0.76, which is indicative of excellent reliability between methods. Measuring pre- to post-exercise ΔBM is an accurate and reliable method to assess the ΔTBW.

  3. Towards Accurate Residue-Residue Hydrophobic Contact Prediction for Alpha Helical Proteins Via Integer Linear Optimization

    PubMed Central

    Rajgaria, R.; McAllister, S. R.; Floudas, C. A.

    2008-01-01

    A new optimization-based method is presented to predict the hydrophobic residue contacts in α-helical proteins. The proposed approach uses a high resolution distance dependent force field to calculate the interaction energy between different residues of a protein. The formulation predicts the hydrophobic contacts by minimizing the sum of these contact energies. These residue contacts are highly useful in narrowing down the conformational space searched by protein structure prediction algorithms. The proposed algorithm also offers the algorithmic advantage of producing a rank-ordered list of the best contact sets. This model was tested on four independent α-helical protein test sets and was found to perform very well. The average accuracy of the predictions (for contacts separated by at least six residues) obtained using the presented method was approximately 66% for single domain proteins. The average true positive and false positive distances were also calculated for each protein test set; they are 8.87 Å and 14.67 Å, respectively. PMID:18767158
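
    The abstract specifies an integer optimization that minimizes the sum of pairwise contact energies subject to constraints. A minimal sketch of that kind of formulation is given below using the PuLP library with random placeholder energies; the actual force field, constraint set, and rank-ordering machinery of the paper are not reproduced, and the cardinality and per-residue constraints shown here are assumptions made for illustration only.

        # Minimal sketch (placeholder energies, assumed constraints): choose a fixed
        # number of residue-residue contacts minimizing total contact energy, with a
        # sequence-separation requirement and a per-residue contact cap.
        import random
        import pulp

        random.seed(0)
        n_res, n_contacts, min_sep, max_per_res = 30, 8, 6, 2

        pairs = [(i, j) for i in range(n_res) for j in range(i + min_sep, n_res)]
        energy = {p: random.uniform(-2.0, 0.5) for p in pairs}   # placeholder force field

        prob = pulp.LpProblem("contact_selection", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("x", pairs, cat="Binary")

        prob += pulp.lpSum(energy[p] * x[p] for p in pairs)       # total contact energy
        prob += pulp.lpSum(x[p] for p in pairs) == n_contacts     # fixed number of contacts
        for r in range(n_res):                                    # cap contacts per residue
            prob += pulp.lpSum(x[p] for p in pairs if r in p) <= max_per_res

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print("selected contacts:", [p for p in pairs if x[p].value() == 1])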

  4. Accurate prediction of kidney allograft outcome based on creatinine course in the first 6 months posttransplant.

    PubMed

    Fritsche, L; Hoerstrup, J; Budde, K; Reinke, P; Neumayer, H-H; Frei, U; Schlaefer, A

    2005-03-01

    Most attempts to predict early kidney allograft loss are based on the patient and donor characteristics at baseline. We investigated how the early posttransplant creatinine course compares to baseline information in the prediction of kidney graft failure within the first 4 years after transplantation. Two approaches to creating a prediction rule for early graft failure were evaluated. First, the whole data set was analysed using decision-tree building software. The software, rpart, builds classification or regression models; the resulting models can be represented as binary trees. In the second approach, a Hill-Climbing algorithm was applied to define cut-off values for the median creatinine level and creatinine slope in the period between day 60 and day 180 after transplantation. Of the 497 patients available for analysis, 52 (10.5%) experienced an early graft loss (graft loss within the first 4 years after transplantation). From the rpart algorithm, a single decision criterion emerged: a median creatinine value on days 60 to 180 higher than 3.1 mg/dL predicts early graft failure (accuracy = 95.2%, but sensitivity = 42.3%). In contrast, the Hill-Climbing algorithm delivered a cut-off of 1.8 mg/dL for the median creatinine level and a cut-off of 0.3 mg/dL per month for the creatinine slope (sensitivity = 69.5% and specificity = 79.0%). Prediction rules based on the median and slope of creatinine levels in the first half year after transplantation allow early identification of patients who are at risk of losing their graft early after transplantation. These patients may benefit from therapeutic measures tailored for this high-risk setting. PMID:15848516
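
    The reported rule reduces to two cut-offs computed from creatinine measurements taken on days 60-180: a median level of 1.8 mg/dL and a slope of 0.3 mg/dL per month. A minimal sketch of applying such cut-offs is shown below; the patient data are hypothetical, and combining the two criteria with a logical OR is an assumption, since the abstract does not state how they are joined.

        # Minimal sketch (hypothetical data; OR-combination of the cut-offs is an
        # assumption): flag risk of early graft failure from creatinine values
        # measured 60-180 days after transplantation.
        import numpy as np

        MEDIAN_CUTOFF = 1.8   # mg/dL
        SLOPE_CUTOFF = 0.3    # mg/dL per month

        def flag_early_graft_failure(days, creatinine):
            days = np.asarray(days, dtype=float)
            crea = np.asarray(creatinine, dtype=float)
            mask = (days >= 60) & (days <= 180)
            median_level = np.median(crea[mask])
            slope_per_day = np.polyfit(days[mask], crea[mask], 1)[0]   # least-squares slope
            return median_level > MEDIAN_CUTOFF or slope_per_day * 30.0 > SLOPE_CUTOFF

        # Hypothetical patient with stable creatinine around 1.4 mg/dL -> not flagged.
        print(flag_early_graft_failure([60, 90, 120, 150, 180], [1.40, 1.35, 1.45, 1.40, 1.42]))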

  5. Accurate Prediction of Transposon-Derived piRNAs by Integrating Various Sequential and Physicochemical Features

    PubMed Central

    Luo, Longqiang; Li, Dingfang; Zhang, Wen; Tu, Shikui; Zhu, Xiaopeng; Tian, Gang

    2016-01-01

    Background Piwi-interacting RNA (piRNA) is the largest class of small non-coding RNA molecules. The prediction of transposon-derived piRNAs can enrich research on small ncRNAs as well as help to further understand the generation mechanism of gametes. Methods In this paper, we attempt to differentiate transposon-derived piRNAs from non-piRNAs based on their sequential and physicochemical features by using machine learning methods. We explore six sequence-derived features, i.e., spectrum profile, mismatch profile, subsequence profile, position-specific scoring matrix, pseudo dinucleotide composition and local structure-sequence triplet elements, and systematically evaluate their performances for transposon-derived piRNA prediction. Finally, we consider two approaches, direct combination and ensemble learning, to integrate useful features and achieve high-accuracy prediction models. Results We construct three datasets, covering three species: Human, Mouse and Drosophila, and evaluate the performances of the prediction models by 10-fold cross validation. In the computational experiments, direct combination models achieve AUC of 0.917, 0.922 and 0.992 on Human, Mouse and Drosophila, respectively; ensemble learning models achieve AUC of 0.922, 0.926 and 0.994 on the three datasets. Conclusions Compared with other state-of-the-art methods, our methods can lead to better performances. In conclusion, the proposed methods are promising for transposon-derived piRNA prediction. The source codes and datasets are available in S1 File. PMID:27074043
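
    Of the six sequence-derived features listed, the spectrum profile is the simplest: a normalized k-mer count vector over the RNA alphabet. The sketch below shows one plausible implementation; the exact k, normalization, and alphabet handling used in the paper may differ.

        # Minimal sketch of a spectrum-profile feature: normalized k-mer frequencies
        # over an RNA alphabet (the exact definition used in the paper may differ).
        from itertools import product
        import numpy as np

        def spectrum_profile(seq, k=3, alphabet="ACGU"):
            kmers = ["".join(p) for p in product(alphabet, repeat=k)]
            index = {kmer: i for i, kmer in enumerate(kmers)}
            counts = np.zeros(len(kmers))
            for i in range(len(seq) - k + 1):
                kmer = seq[i:i + k]
                if kmer in index:
                    counts[index[kmer]] += 1
            total = counts.sum()
            return counts / total if total > 0 else counts   # length 4**k frequency vector

        print(spectrum_profile("AUGGCUAGCUAGGCUUAGC", k=2).round(3))

    Vectors like this one, concatenated with the other profiles (direct combination) or fed to separate base classifiers (ensemble learning), would then be used to train the piRNA/non-piRNA classifier.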

  6. Comparison of Experimental Diagnostic Signals with Numerical Predictions

    NASA Astrophysics Data System (ADS)

    Comer, K.; Turnbull, A. D.

    1997-11-01

    A new code has been written to compare experimental diagnostic signals with those predicted from stability code output and experimental equilibrium diagnostic signals such as SXR, ECE, BSE, and reflectometry. Comparison of expected and actual diagnostic signals will help distinguish or identify modes by the signals they produce, and will also help validate stability codes. Predicted diagnostic signals are obtained by taking the total time derivative of the signal amplitude S and assuming steady-state conditions so that the partial time derivative can be set to zero. Multiplying by the time increment Δt results in δS = ξ · ∇S, where δS is the predicted diagnostic signal, ξ is the plasma displacement predicted by stability codes (such as GATO or MARS), and ∇S is the gradient of the equilibrium diagnostic signal. ∇S may be obtained from an experimental equilibrium signal amplitude profile, or from a functional dependence of the signal amplitude on equilibrium temperature and density. Comparisons of predicted and actual signals from linear ideal and resistive codes show reasonable agreement with the measured signals in some cases, but there are also some significant discrepancies.
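
    A small numerical illustration of the quoted relation is given below: on a 1-D radial grid, the predicted signal perturbation is the displacement times the gradient of the equilibrium signal profile. The profiles used here are synthetic placeholders, not output from GATO, MARS, or any real diagnostic.

        # Minimal 1-D sketch (synthetic profiles): deltaS = xi . grad(S), with S an
        # equilibrium signal amplitude profile and xi a radial plasma displacement.
        import numpy as np

        r = np.linspace(0.0, 1.0, 200)                 # normalized minor radius
        S = np.exp(-((r - 0.3) / 0.2) ** 2)            # placeholder equilibrium signal profile
        xi = 1e-2 * np.sin(np.pi * r)                  # placeholder radial displacement

        gradS = np.gradient(S, r)                      # dS/dr on the grid
        deltaS = xi * gradS                            # predicted diagnostic perturbation
        print("max |deltaS| =", float(np.abs(deltaS).max()))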

  7. Sound absorption of porous substrates covered by foliage: experimental results and numerical predictions.

    PubMed

    Ding, Lei; Van Renterghem, Timothy; Botteldooren, Dick; Horoshenkov, Kirill; Khan, Amir

    2013-12-01

    The influence of loose plant leaves on the acoustic absorption of a porous substrate is studied experimentally and numerically. Such systems are typical in vegetative walls, where the substrate has strong acoustical absorbing properties. Both experiments in an impedance tube and theoretical predictions show that when a leaf is placed in front of such a porous substrate, its absorption characteristics change markedly (for normally incident sound). Typically, the low-frequency absorption coefficient (below 250 Hz) is unaffected, the middle-frequency absorption coefficient (500-2000 Hz) increases, and the absorption at higher frequencies decreases. The influence of leaves becomes most pronounced when the substrate has a low mass density. A combination of Biot's elastic-frame porous model, viscous damping in the leaf boundary layers, and plate vibration theory is implemented via a finite-difference time-domain model, which is able to accurately predict the absorption spectrum of a leaf above a porous substrate.

  8. Condensation as a microclimate process: measurement, numerical simulation and prediction in the Glowworm Cave, New Zealand

    NASA Astrophysics Data System (ADS)

    de Freitas, C. R.; Schmekal, A.

    2003-04-01

    The study examines condensation as a microclimate process. It focuses first on finding a reliable method for measuring condensation and then on testing a numerical model for predicting condensation rates. The study site is the Glowworm Cave, a heavily used tourist cave in New Zealand. Preservation of the cave and its management as a sustainable tourist resource are high priorities. Here, as in other caves, condensation in carbon dioxide-enriched air can lead to corrosion of calcite features. Specially constructed electronic sensors for measuring ongoing condensation, as well as evaporation of the condensate, are tested. Measurements of condensation made over a year are used to test a physical model of condensation in the cave, defined as a function of the vapour gradient between the cave air and the condensation surface and a convection transfer coefficient. The results show that the amount and rate of condensation can be accurately measured and predicted. Air exchange with the outside can increase or decrease condensation rates, but the results show that the convection transfer coefficient remains constant. Temporal patterns of condensation in the cave are identified, as well as the factors that influence them. Short-term and longer-term temporal variations in condensation rates are observed and the patterns explained. Seasonal changes are large, with higher condensation rates occurring in the warmer months and lower rates during the cooler months. It is shown that controlling air exchange between the cave and the outside can influence condensation. This and other aspects of cave management are discussed.

  9. Accurate, conformation-dependent predictions of solvent effects on protein ionization constants

    PubMed Central

    Barth, P.; Alber, T.; Harbury, P. B.

    2007-01-01

    Predicting how aqueous solvent modulates the conformational transitions and influences the pKa values that regulate the biological functions of biomolecules remains an unsolved challenge. To address this problem, we developed FDPB_MF, a rotamer repacking method that exhaustively samples side chain conformational space and rigorously calculates multibody protein–solvent interactions. FDPB_MF predicts the effects on pKa values of various solvent exposures, large ionic strength variations, strong energetic couplings, structural reorganizations and sequence mutations. The method achieves high accuracy, with root mean square deviations within 0.3 pH unit of the experimental values measured for turkey ovomucoid third domain, hen lysozyme, Bacillus circulans xylanase, and human and Escherichia coli thioredoxins. FDPB_MF provides a faithful, quantitative assessment of electrostatic interactions in biological macromolecules. PMID:17360348

  10. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues

    PubMed Central

    EL-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

    A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein

  11. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    PubMed

    El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

    A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein

  12. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    PubMed Central

    Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.

    2015-01-01

    Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and quite variable; hence, these symptoms are almost useless for prediction, and they cannot be used to advance the intake of drugs early enough to be effective and neutralize the pain. To address this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities of several modeling approaches and their robustness against noise and sensor failures. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103

  13. Revisiting the blind tests in crystal structure prediction: accurate energy ranking of molecular crystals.

    PubMed

    Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J

    2009-12-24

    In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure.

  14. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    SciTech Connect

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-11-01

    Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor that controls the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we went a step further with a blind test on a wild-type peptide and two highly immunogenic mutants, for which the protocol predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide pointed laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.

  15. Revisiting the blind tests in crystal structure prediction: accurate energy ranking of molecular crystals.

    PubMed

    Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J

    2009-12-24

    In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure. PMID:19950907

  16. HAAD: A quick algorithm for accurate prediction of hydrogen atoms in protein structures.

    PubMed

    Li, Yunqi; Roy, Ambrish; Zhang, Yang

    2009-08-20

    Hydrogen constitutes nearly half of all atoms in proteins, and hydrogen positions are essential for analyzing hydrogen-bonding interactions and refining atomic-level structures. However, most protein structures determined by experiments or computer prediction lack hydrogen coordinates. We present a new algorithm, HAAD, to predict the positions of hydrogen atoms based on the positions of heavy atoms. The algorithm is built on the basic rules of orbital hybridization, followed by the optimization of steric repulsion and electrostatic interactions. We tested the algorithm using three independent data sets: ultra-high-resolution X-ray structures, structures determined by neutron diffraction, and NOE proton-proton distances. Compared with the widely used programs CHARMM and REDUCE, HAAD has a significantly higher accuracy, with the average RMSD of the predicted hydrogen atoms to the X-ray and neutron diffraction structures decreased by 26% and 11%, respectively. Furthermore, hydrogen atoms placed by HAAD have more matches with the NOE restraints and fewer clashes with heavy atoms. The average CPU cost of HAAD is 18 and 8 times lower than that of CHARMM and REDUCE, respectively. The significant advantage of HAAD in both the accuracy and the speed of hydrogen addition should make HAAD a useful tool for the detailed study of protein structure and function. Both an executable and the source code of HAAD are freely available at http://zhang.bioinformatics.ku.edu/HAAD.

  17. Accurate single-sequence prediction of solvent accessible surface area using local and global features.

    PubMed

    Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej

    2014-11-01

    We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the new approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead, we use solely sequential window information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org. PMID:25204636
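
    The inputs described are a sequential window around each residue plus global chain composition, with no alignment-derived profiles. The sketch below assembles features of that general kind; it is an assumed layout for illustration, not the exact GENN/ASAquick feature set.

        # Minimal sketch (assumed feature layout): per-residue vector = one-hot window
        # around the residue + global single-residue composition of the chain.
        import numpy as np

        AA = "ACDEFGHIKLMNPQRSTVWY"
        AA_INDEX = {a: i for i, a in enumerate(AA)}

        def residue_features(seq, pos, half_window=5):
            window = np.zeros((2 * half_window + 1, len(AA)))
            for offset in range(-half_window, half_window + 1):
                j = pos + offset
                if 0 <= j < len(seq) and seq[j] in AA_INDEX:
                    window[offset + half_window, AA_INDEX[seq[j]]] = 1.0
            composition = np.zeros(len(AA))            # global chain feature
            for a in seq:
                if a in AA_INDEX:
                    composition[AA_INDEX[a]] += 1.0
            composition /= max(len(seq), 1)
            return np.concatenate([window.ravel(), composition])

        seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
        print(residue_features(seq, pos=10).shape)     # (11*20 + 20,) per-residue vector

    Feature vectors of this form could then be regressed against known ASA values with any generic neural network regressor.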

  18. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    PubMed

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often, function involves communication across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins that contain domains in more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with the correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions.

  19. Techniques and resources for storm-scale numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Droegemeier, Kelvin; Grell, Georg; Doyle, James; Soong, Su-Tzai; Skamarock, William; Bacon, David; Staniforth, Andrew; Crook, Andrew; Wilhelmson, Robert

    1993-01-01

    The topics discussed include the following: multiscale application of the 5th-generation PSU/NCAR mesoscale model, the coupling of nonhydrostatic atmospheric and hydrostatic ocean models for air-sea interaction studies; a numerical simulation of cloud formation over complex topography; adaptive grid simulations of convection; an unstructured grid, nonhydrostatic meso/cloud scale model; efficient mesoscale modeling for multiple scales using variable resolution; initialization of cloud-scale models with Doppler radar data; and making effective use of future computing architectures, networks, and visualization software.

  20. Simple numerical method for predicting steady compressible flows

    NASA Technical Reports Server (NTRS)

    Vonlavante, Ernst; Nelson, N. Duane

    1986-01-01

    A numerical method for solving the isenthalpic form of the governing equations for compressible viscous and inviscid flows was developed. The method was based on the concept of flux vector splitting in its implicit form. The method was tested on several demanding inviscid and viscous configurations. Two different forms of the implicit operator were investigated. The time marching to steady state was accelerated by the implementation of the multigrid procedure. Its various forms very effectively increased the rate of convergence of the present scheme. High quality steady state results were obtained in most of the test cases; these required only short computational times due to the relative efficiency of the basic method.

  1. Comparative motif discovery combined with comparative transcriptomics yields accurate targetome and enhancer predictions.

    PubMed

    Naval-Sánchez, Marina; Potier, Delphine; Haagen, Lotte; Sánchez, Máximo; Munck, Sebastian; Van de Sande, Bram; Casares, Fernando; Christiaens, Valerie; Aerts, Stein

    2013-01-01

    The identification of transcription factor binding sites, enhancers, and transcriptional target genes often relies on the integration of gene expression profiling and computational cis-regulatory sequence analysis. Methods for the prediction of cis-regulatory elements can take advantage of comparative genomics to increase signal-to-noise levels. However, gene expression data are usually derived from only one species. Here we investigate tissue-specific cross-species gene expression profiling by high-throughput sequencing, combined with cross-species motif discovery. First, we compared different methods for expression level quantification and cross-species integration using Tag-seq data. Using the optimal pipeline, we derived a set of genes with conserved expression during retinal determination across Drosophila melanogaster, Drosophila yakuba, and Drosophila virilis. These genes are enriched for binding sites of eye-related transcription factors including the zinc-finger Glass, a master regulator of photoreceptor differentiation. Validation of predicted Glass targets using RNA-seq in homozygous glass mutants confirms that the majority of our predictions are expressed downstream from Glass. Finally, we tested nine candidate enhancers by in vivo reporter assays and found eight of them to drive GFP in the eye disc, of which seven colocalize with the Glass protein, namely, scrt, chp, dpr10, CG6329, retn, Lim3, and dmrt99B. In conclusion, we show for the first time the combined use of cross-species expression profiling with cross-species motif discovery as a method to define a core developmental program, and we augment the candidate Glass targetome from a single known target gene, lozenge, to at least 62 conserved transcriptional targets. PMID:23070853

  2. Accurate and Rigorous Prediction of the Changes in Protein Free Energies in a Large-Scale Mutation Scan.

    PubMed

    Gapsys, Vytautas; Michielssens, Servaas; Seeliger, Daniel; de Groot, Bert L

    2016-06-20

    The prediction of mutation-induced free-energy changes in protein thermostability or protein-protein binding is of particular interest in the fields of protein design, biotechnology, and bioengineering. Herein, we achieve remarkable accuracy in a scan of 762 mutations estimating changes in protein thermostability based on the first principles of statistical mechanics. The remaining error in the free-energy estimates appears to be due to three sources in approximately equal parts, namely sampling, force-field inaccuracies, and experimental uncertainty. We propose a consensus force-field approach, which, together with an increased sampling time, leads to a free-energy prediction accuracy that matches those reached in experiments. This versatile approach enables accurate free-energy estimates for diverse proteins, including the prediction of changes in the melting temperature of the membrane protein neurotensin receptor 1. PMID:27122231

  3. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding

    NASA Astrophysics Data System (ADS)

    Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.

    2016-02-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally, a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process.
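
    The ingredients of the described model are bulk folding and unfolding rates plus per-codon translation rates. The sketch below is a deliberately oversimplified two-state illustration of that idea, not the published chemical kinetic model: the domain cannot fold before it has fully emerged, after which its folded probability relaxes toward equilibrium during the dwell time at each codon. All rate values and the emergence codon are placeholders.

        # Oversimplified two-state sketch (placeholder rates, not the published model):
        # track P(folded) of a domain while the ribosome steps through codons.
        import numpy as np

        kf, ku = 5.0, 0.5                      # bulk folding/unfolding rates (1/s)
        emergence_codon = 120                  # codon at which the domain has fully emerged
        rates = np.full(300, 10.0)             # codon translation rates (codons/s)
        rates[150:170] = 2.0                   # e.g. a slower synonymous-codon stretch

        p_folded, curve = 0.0, []
        for codon, k_trans in enumerate(rates):
            dwell = 1.0 / k_trans              # mean time spent on this codon
            if codon >= emergence_codon:
                # exact relaxation of dP/dt = kf*(1 - P) - ku*P over the dwell time
                p_eq = kf / (kf + ku)
                p_folded = p_eq + (p_folded - p_eq) * np.exp(-(kf + ku) * dwell)
            curve.append(p_folded)

        print("P(folded) at chain release:", round(float(curve[-1]), 3))

    In this toy picture, slowing a stretch of codons lengthens the dwell times and raises the folded probability reached during synthesis, which is the qualitative switch from post- to co-translational folding discussed in the abstract.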

  4. An integrated approach for non-periodic dynamic response prediction of complex structures: Numerical and experimental analysis

    NASA Astrophysics Data System (ADS)

    Rahneshin, Vahid; Chierichetti, Maria

    2016-09-01

    In this paper, a combined numerical and experimental method, called the Extended Load Confluence Algorithm, is presented to accurately predict the dynamic response of non-periodic structures when little or no information about the applied loads is available. This approach, which falls into the category of Shape Sensing methods, feeds limited experimental information acquired from sensors into a mapping algorithm that predicts the response at unmeasured locations. The proposed algorithm consists of three major cores: an experimental core for data acquisition, a numerical core based on the Finite Element Method for modeling the structure, and a mapping algorithm that improves the numerical model based on a modal approach in the frequency domain. The robustness and precision of the proposed algorithm are verified through numerical and experimental examples. The results of this paper demonstrate that, without precise knowledge of the loads acting on the structure, the dynamic behavior of the system can be predicted in an effective and precise manner after just a few iterations.

  5. PSI: A Comprehensive and Integrative Approach for Accurate Plant Subcellular Localization Prediction

    PubMed Central

    Chen, Ming

    2013-01-01

    Predicting the subcellular localization of proteins overcomes the major drawbacks of high-throughput localization experiments, which are costly and time-consuming. However, current subcellular localization predictors are limited in scope and accuracy. In particular, most predictors perform well on certain locations or with certain data sets while performing poorly on others. Here, we present PSI, a novel high-accuracy web server for plant subcellular localization prediction. PSI combines the outputs of multiple specialized predictors via a joint approach of a group decision-making strategy and machine learning methods to give an integrated best result. The overall accuracy obtained (up to 93.4%) was higher than that of the best individual predictor (CELLO) by ∼10.7%. The precision for each predictable subcellular location (more than 80%) far exceeds that of the individual predictors. It can also deal with multi-localization proteins. PSI is expected to be a powerful tool in protein location engineering as well as in plant sciences, while the strategy employed could be applied to other integrative problems. A user-friendly web server, PSI, has been developed for free access at http://bis.zju.edu.cn/psi/. PMID:24194827

  6. CRYSpred: accurate sequence-based protein crystallization propensity prediction using sequence-derived structural characteristics.

    PubMed

    Mizianty, Marcin J; Kurgan, Lukasz A

    2012-01-01

    Relatively low success rates of X-ray crystallography, which is the most popular method for solving protein structures, motivate the development of novel methods that support selection of tractable protein targets. This aspect is particularly important in the context of the current structural genomics efforts that allow for a certain degree of flexibility in the target selection. We propose CRYSpred, a novel in silico crystallization propensity predictor that uses a set of 15 novel features which utilize a broad range of inputs, including charge, hydrophobicity, and amino acid composition derived from the protein chain, and the solvent accessibility and disorder predicted from the protein sequence. Our method outperforms seven modern crystallization propensity predictors on three benchmark test datasets that are independent of the training dataset. The strong predictive performance offered by CRYSpred is attributed to the careful design of the features, utilization of the comprehensive set of inputs, and the usage of the Support Vector Machine classifier. The inputs utilized by CRYSpred are well aligned with the existing rules of thumb that are used in structural genomics studies. PMID:21919861

  7. CRYSpred: accurate sequence-based protein crystallization propensity prediction using sequence-derived structural characteristics.

    PubMed

    Mizianty, Marcin J; Kurgan, Lukasz A

    2012-01-01

    Relatively low success rates of X-ray crystallography, which is the most popular method for solving protein structures, motivate the development of novel methods that support selection of tractable protein targets. This aspect is particularly important in the context of the current structural genomics efforts that allow for a certain degree of flexibility in the target selection. We propose CRYSpred, a novel in silico crystallization propensity predictor that uses a set of 15 novel features which utilize a broad range of inputs, including charge, hydrophobicity, and amino acid composition derived from the protein chain, and the solvent accessibility and disorder predicted from the protein sequence. Our method outperforms seven modern crystallization propensity predictors on three benchmark test datasets that are independent of the training dataset. The strong predictive performance offered by CRYSpred is attributed to the careful design of the features, utilization of the comprehensive set of inputs, and the usage of the Support Vector Machine classifier. The inputs utilized by CRYSpred are well aligned with the existing rules of thumb that are used in structural genomics studies.
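
    The predictor pairs sequence-derived features with a Support Vector Machine classifier. The sketch below shows that general pattern with scikit-learn; the composition and hydrophobic-fraction features and the synthetic labels are stand-ins, not the 15 CRYSpred features or its training data.

        # Minimal sketch (stand-in features, synthetic labels): an SVM classifier on
        # simple sequence-derived inputs, far simpler than the CRYSpred feature set.
        import numpy as np
        from sklearn.svm import SVC

        AA = "ACDEFGHIKLMNPQRSTVWY"
        HYDROPHOBIC = set("AVILMFWC")

        def features(seq):
            comp = [seq.count(a) / len(seq) for a in AA]               # composition
            hydro = sum(seq.count(a) for a in HYDROPHOBIC) / len(seq)  # hydrophobic fraction
            return np.array(comp + [hydro])

        rng = np.random.default_rng(0)
        seqs = ["".join(rng.choice(list(AA), size=120)) for _ in range(200)]
        X = np.array([features(s) for s in seqs])
        y = rng.integers(0, 2, size=len(seqs))         # synthetic crystallization labels

        clf = SVC(kernel="rbf", C=1.0, probability=True).fit(X[:150], y[:150])
        print("held-out propensity scores:", clf.predict_proba(X[150:155])[:, 1].round(2))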

  8. Numerical prediction of magnetising inrush current in transformers

    NASA Astrophysics Data System (ADS)

    Ling, P. C. Y.; Basak, A.

    1989-08-01

    A computational technique for predicting the magnetising inrush current under various switching conditions is described. An improved model of the B/H curve of electrical steel is presented. The effects of varying the switching angle on the voltage wave, the energising circuit impedance and the remanent flux density are discussed. The effects of other parameters, such as the winding space factor and the energising winding length, which have not previously been taken into consideration, are also presented in this paper.

  9. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    SciTech Connect

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  10. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    NASA Astrophysics Data System (ADS)

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-01

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  11. Numerical prediction of turbulent flows in dump diffusers

    NASA Astrophysics Data System (ADS)

    Ando, Yasunori; Kawai, Masafumi; Sato, Yukinori; Toh, Hidemi

    1986-12-01

    A finite-volume calculation method for the solution of the two-dimensional, incompressible, time-averaged Navier-Stokes equations in a general curvilinear coordinate system is presented. The main calculation algorithm of the method is an extension of the SIMPLE algorithm to the governing equations expressed in curvilinear coordinates, maintaining the Cartesian velocity components as dependent variables. The standard k-epsilon turbulence model is used for closure of the Reynolds equations. This method is applied to the calculation of turbulent flows in two-dimensional and axisymmetric dump diffusers with uniform and distorted inlet velocity profiles. The general flow pattern and velocity distribution are successfully predicted, and the results show good agreement with experimental data, especially when the QUICKER scheme is used for the approximation of convection terms. Pressure loss can also be predicted using a fine grid, but the computational method must be improved before pressure loss can be predicted on a moderate grid. The treatment of the pressure correction equation in general curvilinear coordinates employed in this work enhances the stability and robustness of the calculation.

  12. The Compensatory Reserve For Early and Accurate Prediction Of Hemodynamic Compromise: A Review of the Underlying Physiology.

    PubMed

    Convertino, Victor A; Wirt, Michael D; Glenn, John F; Lein, Brian C

    2016-06-01

    Shock is deadly and unpredictable if it is not recognized and treated in early stages of hemorrhage. Unfortunately, measurements of standard vital signs that are displayed on current medical monitors fail to provide accurate or early indicators of shock because of physiological mechanisms that effectively compensate for blood loss. As a result of new insights provided by the latest research on the physiology of shock using human experimental models of controlled hemorrhage, it is now recognized that measurement of the body's reserve to compensate for reduced circulating blood volume is the single most important indicator for early and accurate assessment of shock. We have called this function the "compensatory reserve," which can be accurately assessed by real-time measurements of changes in the features of the arterial waveform. In this paper, the physiology underlying the development and evaluation of a new noninvasive technology that allows for real-time measurement of the compensatory reserve will be reviewed, with its clinical implications for earlier and more accurate prediction of shock. PMID:26950588

  13. A novel method to predict visual field progression more accurately, using intraocular pressure measurements in glaucoma patients

    PubMed Central

    Asaoka, Ryo; Fujino, Yuri; Murata, Hiroshi; Miki, Atsuya; Tanito, Masaki; Mizoue, Shiro; Mori, Kazuhiko; Suzuki, Katsuyoshi; Yamashita, Takehiro; Kashiwagi, Kenji; Shoji, Nobuyuki

    2016-01-01

    Visual field (VF) data were retrospectively obtained from 491 eyes in 317 patients with open angle glaucoma who had undergone ten VF tests (Humphrey Field Analyzer, 24-2, SITA standard). First, mean of total deviation values (mTD) in the tenth VF was predicted using standard linear regression of the first five VFs (VF1-5) through to using all nine preceding VFs (VF1-9). Then an ‘intraocular pressure (IOP)-integrated VF trend analysis’ was carried out by simply using time multiplied by IOP as the independent term in the linear regression model. Prediction errors (absolute prediction error or root mean squared error: RMSE) for predicting mTD and also point wise TD values of the tenth VF were obtained from both approaches. The mTD absolute prediction errors associated with the IOP-integrated VF trend analysis were significantly smaller than those from the standard trend analysis when VF1-6 through to VF1-8 were used (p < 0.05). The point wise RMSEs from the IOP-integrated trend analysis were significantly smaller than those from the standard trend analysis when VF1-5 through to VF1-9 were used (p < 0.05). This was especially the case when IOP was measured more frequently. Thus a significantly more accurate prediction of VF progression is possible using a simple trend analysis that incorporates IOP measurements. PMID:27562553

  14. A novel method to predict visual field progression more accurately, using intraocular pressure measurements in glaucoma patients.

    PubMed

    2016-01-01

    Visual field (VF) data were retrospectively obtained from 491 eyes in 317 patients with open angle glaucoma who had undergone ten VF tests (Humphrey Field Analyzer, 24-2, SITA standard). First, mean of total deviation values (mTD) in the tenth VF was predicted using standard linear regression of the first five VFs (VF1-5) through to using all nine preceding VFs (VF1-9). Then an 'intraocular pressure (IOP)-integrated VF trend analysis' was carried out by simply using time multiplied by IOP as the independent term in the linear regression model. Prediction errors (absolute prediction error or root mean squared error: RMSE) for predicting mTD and also point wise TD values of the tenth VF were obtained from both approaches. The mTD absolute prediction errors associated with the IOP-integrated VF trend analysis were significantly smaller than those from the standard trend analysis when VF1-6 through to VF1-8 were used (p < 0.05). The point wise RMSEs from the IOP-integrated trend analysis were significantly smaller than those from the standard trend analysis when VF1-5 through to VF1-9 were used (p < 0.05). This was especially the case when IOP was measured more frequently. Thus a significantly more accurate prediction of VF progression is possible using a simple trend analysis that incorporates IOP measurements. PMID:27562553
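
    The modification described is simple to state: the standard trend analysis regresses mean total deviation on time, while the IOP-integrated variant uses time multiplied by IOP as the independent term. The sketch below applies both regressions to hypothetical follow-up data; the exact handling of IOP in the published analysis may differ.

        # Minimal sketch (hypothetical data): standard vs. IOP-integrated trend analysis.
        import numpy as np

        t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])            # years from baseline
        iop = np.array([18.0, 17.0, 19.0, 16.0, 15.0, 15.0])    # IOP (mmHg) at each visit
        mtd = np.array([-2.0, -2.4, -2.9, -3.1, -3.3, -3.5])    # mean total deviation (dB)

        a_std, b_std = np.polyfit(t, mtd, 1)                    # mTD ~ a*t + b
        a_iop, b_iop = np.polyfit(t * iop, mtd, 1)              # mTD ~ a*(t*IOP) + b

        t_future, iop_future = 4.0, 15.0
        print("standard prediction:      ", round(a_std * t_future + b_std, 2), "dB")
        print("IOP-integrated prediction:", round(a_iop * t_future * iop_future + b_iop, 2), "dB")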

  15. Combining multiple regression and principal component analysis for accurate predictions for column ozone in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.

    2013-06-01

    This study encompasses columnar ozone modelling in Peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the entire period 2003-2008, was employed to develop models to predict the value of columnar ozone (O3) in the study area. A combined method, based on multiple regression together with principal component analysis (PCA) modelling, was used to predict columnar ozone and to improve the prediction accuracy. Separate analyses were carried out for the northeast monsoon (NEM) and southwest monsoon (SWM) seasons. O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, whereas it was positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM season periods. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameters as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model of the atmospheric parameters. It was found that an increase in the columnar O3 value is associated with an increase in the values of AST, SSKT, AT, and CO and with a drop in the levels of CH4, H2Ovapour, RH, and MSP. Fitting the best models for the columnar O3 value using eight of the independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables that appeared in both regression equations were SSKT, CH4 and RH, and the principal precursor of the columnar O3 value in both the NEM and SWM seasons was SSKT.
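
    The modelling strategy pairs principal component analysis of the eight AIRS parameters with multiple linear regression. The sketch below shows that pairing in its simplest form on synthetic data; note that the paper uses varimax-rotated component loadings to select predictor subsets, whereas here PCA is used directly as a compression step ahead of the regression.

        # Minimal sketch (synthetic stand-ins for the eight AIRS parameters):
        # PCA followed by multiple linear regression of columnar ozone.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 8))      # AST, CO, CH4, H2Ovapour, SSKT, AT, RH, MSP (synthetic)
        o3 = 280 + 15 * X[:, 4] + 8 * X[:, 1] - 6 * X[:, 6] + rng.normal(scale=5, size=400)

        model = make_pipeline(PCA(n_components=4), LinearRegression())
        model.fit(X[:300], o3[:300])
        print("held-out R^2:", round(model.score(X[300:], o3[300:]), 3))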

  16. Can medical students accurately predict their learning? A study comparing perceived and actual performance in neuroanatomy.

    PubMed

    Hall, Samuel R; Stephens, Jonny R; Seaby, Eleanor G; Andrade, Matheus Gesteira; Lowry, Andrew F; Parton, Will J C; Smith, Claire F; Border, Scott

    2016-10-01

    It is important that clinicians are able to adequately assess their level of knowledge and competence in order to be safe practitioners of medicine. The medical literature contains numerous examples of poor self-assessment accuracy amongst medical students over a range of subjects; however, this ability has yet to be examined in neuroanatomy. Second year medical students attending neuroanatomy revision sessions at the University of Southampton and the competitors of the National Undergraduate Neuroanatomy Competition (NUNC) were asked to rate their level of knowledge in neuroanatomy. The responses from the former group were compared to performance on a ten-item multiple-choice question examination, and the latter group were compared to their performance within the competition. In both cohorts, self-assessments of perceived level of knowledge correlated weakly with performance in the respective objective knowledge assessments (r = 0.30 and r = 0.44). Within the NUNC, this correlation improved when students were instead asked to rate their performance on a specific examination within the competition (spotter, rS = 0.68; MCQ, rS = 0.58). Despite its inherent difficulty, medical student self-assessment accuracy in neuroanatomy is comparable to that in other subjects within the medical curriculum. Anat Sci Educ 9: 488-495. © 2016 American Association of Anatomists.

  17. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    SciTech Connect

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence, the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER-positive patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic

  18. Lift capability prediction for helicopter rotor blade-numerical evaluation

    NASA Astrophysics Data System (ADS)

    Rotaru, Constantin; Cîrciu, Ionicǎ; Luculescu, Doru

    2016-06-01

    The main objective of this paper is to describe the key physical features involved in modelling the unsteady aerodynamic effects found on a helicopter rotor blade operating under nominally attached flow conditions away from stall. The unsteady effects were considered as phase differences between the forcing function and the aerodynamic response, being functions of the reduced frequency, the Mach number and the mode of forcing. For a helicopter rotor, the reduced frequency at any blade element cannot be calculated exactly, but a first-order approximation for the reduced frequency gives useful information about the degree of unsteadiness. The sources of unsteady effects were decomposed into perturbations to the local angle of attack and velocity field. The numerical calculations and graphics were performed in the FLUENT and MAPLE software environments. This mathematical model is applicable to the aerodynamic design of wind turbine rotor blades, hybrid energy system optimization and aeroelastic analysis.
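
    For context (this definition is supplied here and is not quoted from the paper), the conventional first-order expression for the reduced frequency of a blade element of chord c oscillating at angular frequency ω in a local onset flow of speed V is

        k = ω c / (2 V)

    with larger k indicating a greater degree of unsteadiness; because V at a rotor blade element varies along the span and with azimuth, only an approximate value of k can be quoted, as the abstract notes.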

  19. NUMERICALLY PREDICTED INDIRECT SIGNATURES OF TERRESTRIAL PLANET FORMATION

    SciTech Connect

    Leinhardt, Zoë M.; Dobinson, Jack; Carter, Philip J.; Lines, Stefan

    2015-06-10

    The intermediate phases of planet formation are not directly observable due to lack of emission from planetesimals. Planet formation is, however, a dynamically active process resulting in collisions between the evolving planetesimals and the production of dust. Thus, indirect observation of planet formation may indeed be possible in the near future. In this paper we present synthetic observations based on numerical N-body simulations of the intermediate phase of planet formation including a state-of-the-art collision model, EDACM, which allows multiple collision outcomes, such as accretion, erosion, and bouncing events. We show that the formation of planetary embryos may be indirectly observable by a fully functioning ALMA telescope if the surface area involved in planetesimal evolution is sufficiently large and/or the amount of dust produced in the collisions is sufficiently high in mass.

  20. Predictive Lateral Logic for Numerical Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.

    2016-01-01

    Recent entry guidance algorithm development has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the Apollo-heritage concept of lateral error (or azimuth error) deadbands, in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.
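
    As a point of reference for the deadband concept the paper moves away from, the sketch below shows a minimal Apollo-style lateral check: reverse the bank command whenever the azimuth error drifts outside a velocity-dependent deadband. The deadband shape, thresholds and function names are illustrative assumptions, not the algorithm presented in this paper.

    """Minimal sketch of an Apollo-style lateral deadband check (illustrative only)."""

    def deadband(velocity_mps: float) -> float:
        """Assumed azimuth-error deadband, growing linearly with velocity (rad)."""
        return 0.01 + 0.02 * (velocity_mps / 7500.0)

    def lateral_logic(azimuth_error_rad: float, bank_sign: int, velocity_mps: float) -> int:
        """Reverse the bank direction when the signed azimuth error exceeds the deadband."""
        if bank_sign * azimuth_error_rad > deadband(velocity_mps):
            return -bank_sign   # command a bank reversal
        return bank_sign        # keep the current bank direction

    # Example: the error has grown past the deadband, so the logic flips the bank sign.
    print(lateral_logic(azimuth_error_rad=0.05, bank_sign=+1, velocity_mps=6000.0))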

  1. nuMap: a web platform for accurate prediction of nucleosome positioning.

    PubMed

    Alharbi, Bader A; Alshammari, Thamir H; Felton, Nathan L; Zhurkin, Victor B; Cui, Feng

    2014-10-01

    Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, the YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options, such as schemes and parameters for the threading calculation, and provides multiple layout formats. nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, a description of the methodology and examples are available at the site. PMID:25220945

  2. A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications

    SciTech Connect

    Bronevetsky, G; de Supinski, B; Schulz, M

    2009-02-13

    Understanding the soft error vulnerability of supercomputer applications is critical as these systems use ever larger numbers of devices that have decreasing feature sizes and, thus, an increasing frequency of soft errors. As many large-scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.
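
    A minimal sketch of this kind of error-injection experiment (not the paper's framework) is to flip a single bit of one input element, rerun a LAPACK-backed solve, and measure how the upset propagates to the output:

    """Toy soft-error injection into a linear solve (illustrative only)."""
    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
    b = rng.standard_normal(n)
    x_ref = np.linalg.solve(A, b)                     # fault-free reference solution

    def flip_bit(value, bit):
        """Flip one bit of a float64 value (bit 0 = least-significant mantissa bit)."""
        buf = np.array([value], dtype=np.float64)
        buf.view(np.uint64)[0] ^= np.uint64(1 << bit)
        return buf[0]

    # Inject a single upset into A and see how it propagates to the output.
    A_faulty = A.copy()
    A_faulty[3, 7] = flip_bit(A_faulty[3, 7], bit=52)   # flip the lowest exponent bit
    x_faulty = np.linalg.solve(A_faulty, b)
    rel_error = np.linalg.norm(x_faulty - x_ref) / np.linalg.norm(x_ref)
    print(f"relative output error after one bit flip: {rel_error:.3e}")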

  3. Four-protein signature accurately predicts lymph node metastasis and survival in oral squamous cell carcinoma.

    PubMed

    Zanaruddin, Sharifah Nurain Syed; Saleh, Amyza; Yang, Yi-Hsin; Hamid, Sharifah; Mustafa, Wan Mahadzir Wan; Khairul Bariah, A A N; Zain, Rosnah Binti; Lau, Shin Hin; Cheong, Sok Ching

    2013-03-01

    The presence of lymph node (LN) metastasis significantly affects the survival of patients with oral squamous cell carcinoma (OSCC). Successful detection and removal of positive LNs are crucial in the treatment of this disease. Current evaluation methods still have limitations in detecting the presence of tumor cells in the LNs, where up to a third of clinically diagnosed metastasis-negative (N0) patients actually have metastasis-positive LNs in the neck. We developed a molecular signature in the primary tumor that could predict LN metastasis in OSCC. A total of 211 cores from 55 individuals were included in the study. Eleven proteins were evaluated using immunohistochemical analysis in a tissue microarray. Of the 11 biomarkers evaluated using receiver operating characteristic curve analysis, epidermal growth factor receptor (EGFR), v-erb-b2 erythroblastic leukemia viral oncogene homolog 2 (HER-2/neu), laminin, gamma 2 (LAMC2), and ras homolog family member C (RHOC) were found to be significantly associated with the presence of LN metastasis. Unsupervised hierarchical clustering demonstrated that the expression patterns of these 4 proteins could be used to differentiate specimens with positive LN metastasis from those that are negative for LN metastasis. Collectively, EGFR, HER-2/neu, LAMC2, and RHOC have a specificity of 87.5% and a sensitivity of 70%, with a prognostic accuracy of 83.4% for LN metastasis. We also demonstrated that the LN signature could independently predict disease-specific survival (P = .036). The 4-protein LN signature, validated in an independent set of samples, strongly suggests that it could reliably distinguish patients with LN metastasis from those who were metastasis-free and therefore could be a prognostic tool for the management of patients with OSCC.

  4. Four-protein signature accurately predicts lymph node metastasis and survival in oral squamous cell carcinoma.

    PubMed

    Zanaruddin, Sharifah Nurain Syed; Saleh, Amyza; Yang, Yi-Hsin; Hamid, Sharifah; Mustafa, Wan Mahadzir Wan; Khairul Bariah, A A N; Zain, Rosnah Binti; Lau, Shin Hin; Cheong, Sok Ching

    2013-03-01

    The presence of lymph node (LN) metastasis significantly affects the survival of patients with oral squamous cell carcinoma (OSCC). Successful detection and removal of positive LNs are crucial in the treatment of this disease. Current evaluation methods still have limitations in detecting the presence of tumor cells in the LNs, where up to a third of clinically diagnosed metastasis-negative (N0) patients actually have metastasis-positive LNs in the neck. We developed a molecular signature in the primary tumor that could predict LN metastasis in OSCC. A total of 211 cores from 55 individuals were included in the study. Eleven proteins were evaluated using immunohistochemical analysis in a tissue microarray. Of the 11 biomarkers evaluated using receiver operating characteristic curve analysis, epidermal growth factor receptor (EGFR), v-erb-b2 erythroblastic leukemia viral oncogene homolog 2 (HER-2/neu), laminin, gamma 2 (LAMC2), and ras homolog family member C (RHOC) were found to be significantly associated with the presence of LN metastasis. Unsupervised hierarchical clustering demonstrated that the expression patterns of these 4 proteins could be used to differentiate specimens with positive LN metastasis from those that are negative for LN metastasis. Collectively, EGFR, HER-2/neu, LAMC2, and RHOC have a specificity of 87.5% and a sensitivity of 70%, with a prognostic accuracy of 83.4% for LN metastasis. We also demonstrated that the LN signature could independently predict disease-specific survival (P = .036). The 4-protein LN signature, validated in an independent set of samples, strongly suggests that it could reliably distinguish patients with LN metastasis from those who were metastasis-free and therefore could be a prognostic tool for the management of patients with OSCC. PMID:23026198

  5. Nonempirically Tuned Range-Separated DFT Accurately Predicts Both Fundamental and Excitation Gaps in DNA and RNA Nucleobases

    PubMed Central

    2012-01-01

    Using a nonempirically tuned range-separated DFT approach, we study both the quasiparticle properties (HOMO–LUMO fundamental gaps) and excitation energies of DNA and RNA nucleobases (adenine, thymine, cytosine, guanine, and uracil). Our calculations demonstrate that a physically motivated, first-principles tuned DFT approach accurately reproduces results from both experimental benchmarks and more computationally intensive techniques such as many-body GW theory. Furthermore, in the same set of nucleobases, we show that the nonempirical range-separated procedure also leads to significantly improved results for excitation energies compared to conventional DFT methods. The present results emphasize the importance of a nonempirically tuned range-separation approach for accurately predicting both fundamental and excitation gaps in DNA and RNA nucleobases. PMID:22904693
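
    For context, the nonempirical tuning condition commonly used in this literature chooses the range-separation parameter so that the HOMO eigenvalue matches the ionization potential obtained from total-energy differences. The notation below is assumed from the general optimal-tuning literature, not quoted from this paper:

        J(\omega) = \sum_{i=0}^{1} \left[ \varepsilon_{\mathrm{HOMO}}^{\omega}(N+i) + \mathrm{IP}^{\omega}(N+i) \right]^{2},
        \qquad \mathrm{IP}^{\omega}(N) = E^{\omega}(N-1) - E^{\omega}(N),
        \qquad \omega^{*} = \arg\min_{\omega} J(\omega).

    The fundamental gap is then estimated as \varepsilon_{\mathrm{LUMO}}^{\omega^{*}} - \varepsilon_{\mathrm{HOMO}}^{\omega^{*}}, which the tuning makes a good approximation to IP - EA.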

  6. Numerical simulation of a twin screw expander for performance prediction

    NASA Astrophysics Data System (ADS)

    Papes, Iva; Degroote, Joris; Vierendeels, Jan

    2015-08-01

    With the increasing use of twin screw expanders in waste heat recovery applications, the performance prediction of these machines plays an important role. This paper presents a mathematical model for calculating the performance of a twin screw expander. From the mass and energy conservation laws, differential equations are derived and then solved, together with an appropriate equation of state, in the instantaneous control volumes. Different flow processes that occur inside the screw expander, such as filling (accompanied by a substantial pressure loss) and leakage flows through the clearances, are accounted for in the model. The mathematical model employs all geometrical parameters such as chamber volume, suction and leakage areas. With R245fa as the working fluid, the Aungier Redlich-Kwong equation of state has been used in order to include real gas effects. To calculate the mass flow rates through the leakage paths formed inside the screw expander, flow coefficients are considered constant; they are derived from 3D computational fluid dynamics calculations at given working conditions and applied to all other working conditions. The outcome of the mathematical model is the P-V indicator diagram, which is compared to CFD results for the same twin screw expander. Since CFD calculations require significant computational time, the developed mathematical model can be used for faster performance prediction.

  7. Numerical Simulation of Bolide Entry with Ground Footprint Prediction

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian; Mathias, Donovan L.; Berger, Marsha J.

    2016-01-01

    As they decelerate through the atmosphere, meteors deposit mass, momentum and energy into the surrounding air at tremendous rates. The entry of such bolides produces strong blast waves that can propagate hundreds of kilometers and cause substantial terrestrial damage even when no ground impact occurs. We present a new simulation technique for airburst blast prediction using a fully conservative, Cartesian-mesh, finite-volume solver and investigate the ability of this method to model far-field propagation over hundreds of kilometers. The work develops mathematical models for the deposition of mass, momentum and energy into the atmosphere and presents verification and validation through canonical problems and the comparison of surface overpressures and blast arrival times with results in the literature for known bolides. The discussion also examines the effects of various approximations to the physics of bolide entry that can substantially decrease the computational expense of these simulations. We present parametric studies to quantify the influence of entry angle, burst height and other parameters on the ground footprint of the airburst, and these values are related to predictions from analytic and handbook methods.
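
    The deposition source terms can be illustrated with the standard single-body meteor equations (drag deceleration plus ablative mass loss); the sketch below uses assumed coefficients and initial conditions, not the paper's deposition model or its validation cases.

    """Single-body meteor entry sketch (textbook drag/ablation equations, assumed values)."""
    import numpy as np

    Cd, Ch, Q = 1.0, 0.1, 8.0e6         # drag coeff., heat-transfer coeff., heat of ablation (J/kg), assumed
    rho_m = 3300.0                      # bolide bulk density (kg/m^3), assumed stony
    rho0, Hs = 1.225, 7160.0            # sea-level air density (kg/m^3) and scale height (m)
    g, theta = 9.81, np.deg2rad(45.0)   # gravity, entry angle below horizontal

    v, m, z = 19000.0, 1.0e7, 100.0e3   # initial speed (m/s), mass (kg), altitude (m), assumed
    dt = 0.01
    energy0 = 0.5 * m * v * v

    while z > 0.0 and m > 1.0 and v > 100.0:
        rho = rho0 * np.exp(-z / Hs)                        # exponential atmosphere
        r = (3.0 * m / (4.0 * np.pi * rho_m)) ** (1.0 / 3.0)
        A = np.pi * r * r                                   # equivalent-sphere cross section
        drag = Cd * rho * A * v * v / 2.0
        v += (-drag / m + g * np.sin(theta)) * dt           # momentum given up to the air as drag
        m += (-Ch * rho * A * v ** 3 / (2.0 * Q)) * dt      # mass lost to ablation
        z -= v * np.sin(theta) * dt

    energy_left = 0.5 * m * v * v
    print(f"energy deposited in the atmosphere: {(energy0 - energy_left) / 4.184e12:.2f} kt TNT")
    print(f"terminal state: v = {v:.0f} m/s, m = {m:.3e} kg, z = {z / 1000:.1f} km")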

  8. Lateral impact validation of a geometrically accurate full body finite element model for blunt injury prediction.

    PubMed

    Vavalle, Nicholas A; Moreno, Daniel P; Rhyne, Ashley C; Stitzel, Joel D; Gayzik, F Scott

    2013-03-01

    This study presents four validation cases of a mid-sized male (M50) full human body finite element model: two lateral sled tests at 6.7 m/s, one sled test at 8.9 m/s, and a lateral drop test. Model results were compared to transient force curves, peak force, chest compression, and number of fractures from the studies. For one of the 6.7 m/s impacts (flat wall impact), the peak thoracic, abdominal and pelvic loads were 8.7, 3.1 and 14.9 kN for the model and 5.2 ± 1.1 kN, 3.1 ± 1.1 kN, and 6.3 ± 2.3 kN for the tests. For the same test setup in the 8.9 m/s case, they were 12.6, 6.0, and 21.9 kN for the model and 9.1 ± 1.5 kN, 4.9 ± 1.1 kN, and 17.4 ± 6.8 kN for the experiments. The combined torso load and the pelvis load simulated in a second rigid wall impact at 6.7 m/s were 11.4 and 15.6 kN, respectively, compared to 8.5 ± 0.2 kN and 8.3 ± 1.8 kN experimentally. The peak thorax load in the drop test was 6.7 kN for the model, within the range observed in the cadavers, 5.8-7.4 kN. When analyzing rib fractures, the model predicted Abbreviated Injury Scale scores within the reported range in three of four cases. Objective comparison methods were used to quantitatively compare the model results to the literature studies. The results show a good match in the thorax and abdomen regions, while the pelvis results overpredicted the reaction loads from the literature studies. These results are an important milestone in the development and validation of this globally developed average male FEA model in lateral impact.

  9. Accurate prediction of the refractive index of polymers using first principles and data modeling

    NASA Astrophysics Data System (ADS)

    Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes

    Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density, we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We showed that the proposed combination of physical and data modeling is both successful and highly economical for characterizing a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
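
    The core of such an RI model is the Lorentz-Lorenz relation linking polarizability and number density to the refractive index; the sketch below uses placeholder values for a hypothetical repeat unit, not the benchmarked quantities from this work.

    """Refractive index from the Lorentz-Lorenz relation (illustrative sketch)."""
    import numpy as np

    def refractive_index(alpha_cm3, number_density_cm3):
        """Lorentz-Lorenz: (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha  (CGS units)."""
        L = 4.0 * np.pi * number_density_cm3 * alpha_cm3 / 3.0
        if L >= 1.0:
            raise ValueError("unphysical input: L must be < 1")
        return np.sqrt((1.0 + 2.0 * L) / (1.0 - L))

    # Hypothetical repeat unit: polarizability ~1.5e-23 cm^3, number density ~4.5e21 cm^-3
    print(f"predicted RI: {refractive_index(1.5e-23, 4.5e21):.3f}")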

  10. Accurate predictions of C-SO2R bond dissociation enthalpies using density functional theory methods.

    PubMed

    Yu, Hai-Zhu; Fu, Fang; Zhang, Liang; Fu, Yao; Dang, Zhi-Min; Shi, Jing

    2014-10-14

    The dissociation of the C-SO2R bond is frequently involved in organic and bio-organic reactions, and the C-SO2R bond dissociation enthalpies (BDEs) are potentially important for understanding the related mechanisms. The primary goal of the present study is to provide a reliable calculation method to predict the different C-SO2R bond dissociation enthalpies (BDEs). Comparing the accuracies of 13 different density functional theory (DFT) methods (such as B3LYP, TPSS, and M05) and different basis sets (such as 6-31G(d) and 6-311++G(2df,2p)), we found that M06-2X/6-31G(d) gives the best performance in reproducing the various C-S BDEs (and especially the C-SO2R BDEs). As an example of understanding mechanisms with the aid of C-SO2R BDEs, some preliminary mechanistic studies were carried out on the chemoselective coupling (in the presence of a Cu catalyst) or desulfinative coupling reactions (in the presence of a Pd catalyst) between sulfinic acid salts and boryl/sulfinic acid salts.

  11. Towards Accurate Prediction of Turbulent, Three-Dimensional, Recirculating Flows with the NCC

    NASA Technical Reports Server (NTRS)

    Iannetti, A.; Tacina, R.; Jeng, S.-M.; Cai, J.

    2001-01-01

    The National Combustion Code (NCC) was used to calculate the steady-state, nonreacting flow field of a prototype Lean Direct Injection (LDI) swirler. This configuration used nine groups of eight holes drilled at a thirty-five degree angle to induce swirl. These nine groups created swirl in the same direction, or a corotating pattern. The static pressure drop across the holes was fixed at approximately four percent. Computations were performed on one quarter of the geometry, because the geometry is rotationally periodic every ninety degrees. The final computational grid used approximately 2.26 million tetrahedral cells, and a cubic nonlinear k-epsilon model was used to model turbulence. The NCC results were then compared to time-averaged Laser Doppler Velocimetry (LDV) data. The LDV measurements were performed on the full geometry, although only four ninths of the geometry was measured. One-, two-, and three-dimensional representations of both flow fields are presented. The NCC computations compare both qualitatively and quantitatively well to the LDV data, but differences exist downstream. The comparison is encouraging and shows that NCC can be used for future injector design studies. Recommendations are given for improving the flow prediction accuracy of turbulent, three-dimensional, recirculating flow fields with the NCC.

  12. An improved method for accurate prediction of mass flows through combustor liner holes

    SciTech Connect

    Adkins, R.C.; Gueroui, D.

    1986-01-01

    The objective of this paper is to present a simple approach to the solution of flow through combustor liner holes that can be used by practicing combustor engineers as well as providing the specialist modeler with a convenient boundary condition. For modeling, if all relevant details of the incoming jets can be readily predicted, then the computational boundary can be limited to the inner wall of the liner and to the jets themselves. The scope of this paper is limited to the derivation of a simple analysis, the development of a reliable test technique, and the correlation of data for plain holes having a diameter that is large compared to the liner wall thickness. The effect of internal liner flow on the performance of the holes is neglected; this is considered justifiable because the analysis terminates a short distance downstream of the hole, where the significantly lower velocities inside the combustor have had little opportunity to take effect. It is intended to extend the procedure to more complex hole forms and flow configurations in later papers.
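
    For orientation, the starting point for this kind of analysis is the usual incompressible orifice relation for a liner hole; the discharge coefficient and flow conditions in the sketch below are assumed, not the correlation developed in the paper.

    """Mass flow through a plain liner hole from the incompressible orifice relation."""
    import math

    def hole_mass_flow(Cd, d_hole_m, rho_kg_m3, dp_Pa):
        """m_dot = Cd * A * sqrt(2 * rho * dp)."""
        area = math.pi * d_hole_m ** 2 / 4.0
        return Cd * area * math.sqrt(2.0 * rho_kg_m3 * dp_Pa)

    # Example (assumed): 10 mm hole, annulus air at 10 bar / 800 K (rho ~ 4.4 kg/m^3),
    # 4 percent pressure drop across the liner, discharge coefficient Cd = 0.6.
    dp = 0.04 * 10.0e5
    print(f"m_dot per hole = {hole_mass_flow(0.6, 0.010, 4.4, dp):.4f} kg/s")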

  13. Neural network approach to quantum-chemistry data: Accurate prediction of density functional theory energies

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.; Lomakina, Ekaterina I.

    2009-08-01

    An artificial neural network (ANN) approach has been applied to estimate the density functional theory (DFT) energy with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for the ANN training, cross validation, and testing by applying the BLYP, B3LYP, and BMK density functionals. Hartree-Fock results were reported for comparison. Furthermore, constitutional molecular descriptors (CD) and quantum-chemical molecular descriptors (QD) were used for building the calibration model. The neural network structure optimization, leading to four to five hidden neurons, was also carried out. The use of several low-level energy values was found to greatly reduce the prediction error. The expected error (mean absolute deviation) for the ANN approximation to DFT energies was 0.6 ± 0.2 kcal mol^-1. In addition, a comparison of the different density functionals with the basis sets and a comparison with multiple linear regression results were also provided. The CDs were found to overcome the limitations of the QD. Furthermore, an effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and benchmark results were provided.
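
    The idea can be sketched with a small scikit-learn network trained on synthetic data: the inputs are a cheap low-level energy plus simple descriptors, and the network learns the correction to the high-level energy. Everything below (data, descriptors, network size) is illustrative, not the study's 208-molecule set or its tuned architecture.

    """Sketch of an ANN correction from low-level energies and descriptors (synthetic data)."""
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n = 500
    low_level = rng.normal(-200.0, 50.0, n)            # cheap low-level energies (arbitrary units)
    n_atoms = rng.integers(2, 30, n).astype(float)     # constitutional descriptor
    dipole = rng.uniform(0.0, 5.0, n)                  # quantum-chemical descriptor
    # Synthetic "high-level" target: the low-level value plus a smooth correction
    high_level = low_level + 0.05 * n_atoms - 0.2 * dipole + rng.normal(0.0, 0.3, n)

    X = np.column_stack([low_level, n_atoms, dipole])
    X_tr, X_te, y_tr, y_te = train_test_split(X, high_level, random_state=0)

    # Learn the correction (target minus low-level input), then add the low-level energy back.
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0))
    model.fit(X_tr, y_tr - X_tr[:, 0])
    pred = X_te[:, 0] + model.predict(X_te)
    print(f"mean absolute deviation on held-out molecules: {np.mean(np.abs(pred - y_te)):.3f}")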

  14. Line Shape Parameters for CO_2 Transitions: Accurate Predictions from Complex Robert-Bonamy Calculations

    NASA Astrophysics Data System (ADS)

    Lamouroux, Julien; Gamache, Robert R.

    2013-06-01

    A model for the prediction of the vibrational dependence of CO_2 half-widths and line shifts for several broadeners, based on a modification of the model proposed by Gamache and Hartmann, is presented. This model allows the half-widths and line shifts for a ro-vibrational transition to be expressed in terms of the number of vibrational quanta exchanged in the transition, raised to a power p, and a reference ro-vibrational transition. Complex Robert-Bonamy calculations were made for 24 bands for lower rotational quantum numbers J'' from 0 to 160 for N_2-, O_2-, air-, and self-collisions with CO_2. In the model, a quantum coordinate is defined as (c_1 Δν_1 + c_2 Δν_2 + c_3 Δν_3)^p, and a linear least-squares fit of the data to the model expression is made. The model allows the determination of the slope and intercept as a function of rotational transition, broadening gas, and temperature. From these fit data, the half-width, line shift, and the temperature dependence of the half-width can be estimated for any ro-vibrational transition, allowing spectroscopic CO_2 databases to have complete information for the line shape parameters. R. R. Gamache, J.-M. Hartmann, J. Quant. Spectrosc. Radiat. Transfer 83 (2004), 119. R. R. Gamache, J. Lamouroux, J. Quant. Spectrosc. Radiat. Transfer 117 (2013), 93.
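
    The fitting step amounts to ordinary linear least squares of the half-widths against the quantum coordinate; the sketch below uses placeholder coefficients, exponent and data, not the paper's fitted values.

    """Least-squares fit of half-widths against a quantum coordinate (placeholder data)."""
    import numpy as np

    c = np.array([0.5, 0.3, 1.0])        # assumed weights c1, c2, c3
    p = 1.0                              # assumed exponent
    dnu = np.array([[0, 0, 1],           # vibrational quanta exchanged per band
                    [0, 2, 1],
                    [1, 0, 1],
                    [0, 0, 2],
                    [1, 2, 1]], dtype=float)
    gamma = np.array([0.074, 0.072, 0.071, 0.069, 0.068])   # made-up half-widths (cm^-1 atm^-1)

    q = (dnu @ c) ** p                   # quantum coordinate for each band
    A = np.column_stack([q, np.ones_like(q)])
    (slope, intercept), *_ = np.linalg.lstsq(A, gamma, rcond=None)
    print(f"slope = {slope:.5f}, intercept = {intercept:.5f}")
    print("predicted half-width for a new band (dnu = [0,1,1]):",
          slope * (np.array([0, 1, 1]) @ c) ** p + intercept)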

  15. Numerical parameter constraints for accurate PIC-DSMC simulation of breakdown from arc initiation to stable arcs

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith

    2015-09-01

    Simulation of breakdown is important for understanding and designing a variety of applications, such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation to late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 of the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
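
    The quoted constraints translate directly into a back-of-the-envelope sizing rule; the gas parameters in the sketch below are assumed illustrative values, not the conditions simulated in the paper.

    """Timestep and mesh-size limits implied by dt ~ tau/100 and dx ~ lambda/25 (assumed gas data)."""
    n_neutral = 3.3e22        # neutral number density, m^-3 (assumed)
    sigma = 1.0e-19           # electron-neutral cross section, m^2 (assumed)
    v_electron = 1.0e6        # characteristic electron speed, m/s (assumed)

    mfp = 1.0 / (n_neutral * sigma)    # electron mean free path
    tau = mfp / v_electron             # mean time between collisions

    dx_max = mfp / 25.0                # mesh-size constraint quoted in the abstract
    dt_max = tau / 100.0               # timestep constraint quoted in the abstract
    print(f"mean free path = {mfp:.3e} m  -> mesh size <= {dx_max:.3e} m")
    print(f"collision time = {tau:.3e} s  -> timestep  <= {dt_max:.3e} s")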

  16. Predicting polarization performance of high-numerical aperture inspection lenses

    NASA Astrophysics Data System (ADS)

    Fahr, Stephan; Werschnik, Jan; Bening, Matthias; Uhlendorf, Kristina

    2015-09-01

    Along the course of increasing throughput and improving the signal-to-noise ratio in optical wafer and mask inspection, demands on wave front aberrations and polarization characteristics are ever increasing. The system engineers and optical designers involved in the development of such optical systems are responsible for specifying the quality of the optical material and the mechanical tolerances. Among optical designers it is well established how to estimate the wave front error of assembled and adjusted optical devices via sensitivity or Monte-Carlo analysis. However, when compared with the scalar problem of wave front estimation, the field of polarization control poses a more complex problem due to its vectorial nature. Here we show our latest results on how to model polarization-affecting aspects. In the realm of high numerical aperture (NA) inspection optics we focus on the impact of coatings, stress-induced birefringence due to non-perfect lens mounting, and finally the birefringence of the optical material. With all these tools at hand, we have a more complete understanding of the optical performance of our assembled optical systems. Moreover, we are able to coherently develop optical systems meeting demanding wave front criteria as well as high-end polarization specifications.

  17. Analytical and numerical predictions of dendritic grain envelopes

    SciTech Connect

    Gandin, C.A.; Rappaz, M.; Schaefer, R.J.

    1996-08-01

    An analytical model is developed for the prediction of the shape of dendritic grain envelopes during solidification of a metallic alloy in a Bridgman configuration (i.e., constant thermal gradient and cooling rate). The assumptions built into the model allow a direct comparison of the results with those obtained from a previously developed cellular automaton-finite element (CAFE) model. After this comparison, the CAFE model is applied to the study of the extension of a single grain into an open region of liquid after passing a re-entrant corner. The simulation results are compared with experimental observations made on a directionally solidified succinonitrile-acetone alloy. Good agreement is found for the shape of the grain envelopes when varying the orientation of the primary dendrites with respect to the thermal gradient direction, the velocity of the isotherms, or the thermal gradient.

  18. Numerical predictions of natural convection in a uniformly heated pool

    SciTech Connect

    Tzanos, C.P.; Cho, D.H.

    1993-05-01

    In the event of a core meltdown accident, one of the accident progression paths is fuel relocation to the lower reactor plenum. In the heavy water new production reactor (NPR-HWR) design the reactor cavity is flooded with water. In such a design, decay heat removal to the water in the reactor cavity and thence to the containment may be adequate to keep the reactor vessel temperature below failure limits. If this is the case, the accident progression can be arrested by retaining a coolable corium configuration in the lower reactor plenum. The strategy of reactor cavity flooding to prevent reactor vessel failure from molten corium relocation to the reactor vessel lower head has also been considered for commercial pressurized water reactors. Previously, the computer code COMMIX-LAR/P was used to determine if the heat removal rate from the molten corium in the lower plenum to the water in the cavity was adequate to keep the reactor vessel temperature in the NPR-HWR design below failure limits. It was found that natural convection in the molten pool resulted in heat removal rates that kept the peak reactor vessel temperature about 400°C below the steel melting point. The objective of the work presented in this paper was to determine whether COMMIX adequately predicts natural convection in a pool heated by a uniform heat source. For this purpose, the experiments of free convection in a semicircular cavity by Jahn and Reineke were analyzed with COMMIX, and code predictions were compared with experimental measurements. COMMIX is a general-purpose thermal-hydraulics code based on finite differencing with a first-order upwind scheme.

  19. Numerical predictions of natural convection in a uniformly heated pool

    SciTech Connect

    Tzanos, C.P.; Cho, D.H.

    1993-01-01

    In the event of a core meltdown accident, one of the accident progression paths is fuel relocation to the lower reactor plenum. In the heavy water new production reactor (NPR-HWR) design the reactor cavity is flooded with water. In such a design, decay heat removal to the water in the reactor cavity and thence to the containment may be adequate to keep the reactor vessel temperature below failure limits. If this is the case, the accident progression can be arrested by retaining a coolable corium configuration in the lower reactor plenum. The strategy of reactor cavity flooding to prevent reactor vessel failure from molten corium relocation to the reactor vessel lower head has also been considered for commercial pressurized water reactors. Previously, the computer code COMMIX-LAR/P was used to determine if the heat removal rate from the molten corium in the lower plenum to the water in the cavity was adequate to keep the reactor vessel temperature in the NPR-HWR design below failure limits. It was found that natural convection in the molten pool resulted in heat removal rates that kept the peak reactor vessel temperature about 400°C below the steel melting point. The objective of the work presented in this paper was to determine whether COMMIX adequately predicts natural convection in a pool heated by a uniform heat source. For this purpose, the experiments of free convection in a semicircular cavity by Jahn and Reineke were analyzed with COMMIX, and code predictions were compared with experimental measurements. COMMIX is a general-purpose thermal-hydraulics code based on finite differencing with a first-order upwind scheme.

  20. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    PubMed Central

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries. PMID:26520735

  1. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.

  2. How Accurate Are the Anthropometry Equations in Iranian Military Men in Predicting Body Composition?

    PubMed Central

    Shakibaee, Abolfazl; Faghihzadeh, Soghrat; Alishiri, Gholam Hossein; Ebrahimpour, Zeynab; Faradjzadeh, Shahram; Sobhani, Vahid; Asgari, Alireza

    2015-01-01

    Background: Body composition varies according to different lifestyles (i.e. caloric intake and expenditure). Therefore, it is wise to record military personnel's body composition periodically and encourage those who abide by the regulations. Different methods have been introduced for body composition assessment: invasive and non-invasive. Amongst them, the Jackson and Pollock equation is the most popular. Objectives: The recommended anthropometric prediction equations for assessing men's body composition were compared with the dual-energy X-ray absorptiometry (DEXA) gold standard to develop a modified equation to assess body composition and obesity quantitatively among Iranian military men. Patients and Methods: A total of 101 military men aged 23 - 52 years old with a mean age of 35.5 years were recruited and evaluated in the present study (average height, 173.9 cm and weight, 81.5 kg). The body-fat percentages of the subjects were assessed both by anthropometry and by DEXA scan. The data obtained from these two methods were then compared using multiple regression analysis. Results: The mean and standard deviation of the body fat percentage from the DEXA assessment was 21.2 ± 4.3, and the body fat percentages obtained from the three Jackson and Pollock 3-, 4- and 7-site equations were 21.1 ± 5.8, 22.2 ± 6.0 and 20.9 ± 5.7, respectively. There was a strong correlation between these three equations and DEXA (R² = 0.98). Conclusions: The mean percentage of body fat obtained from the three equations of Jackson and Pollock was very close to that obtained from DEXA; however, we suggest using a modified Jackson-Pollock 3-site equation for volunteer military men because the 3-site method is simpler and faster than the other methods. PMID:26715964
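
    For reference, the Jackson-Pollock 3-site estimate referred to above combines the chest, abdominal and thigh skinfolds into a body density and then converts it to percent fat with the Siri equation. The coefficients below are taken from the general literature rather than from this article (which proposes a modified version), so they should be checked against a primary source.

    """Jackson-Pollock 3-site body density plus Siri percent-fat conversion (literature coefficients)."""

    def jackson_pollock_3site_men(chest_mm, abdomen_mm, thigh_mm, age_yr):
        """Body density (g/cm^3) from chest, abdominal and thigh skinfolds (mm) for men."""
        s = chest_mm + abdomen_mm + thigh_mm
        return 1.10938 - 0.0008267 * s + 0.0000016 * s * s - 0.0002574 * age_yr

    def siri_percent_fat(body_density):
        """Siri equation: convert body density to percent body fat."""
        return 495.0 / body_density - 450.0

    # Hypothetical subject: 35 years old, skinfolds of 12, 22 and 15 mm.
    density = jackson_pollock_3site_men(chest_mm=12.0, abdomen_mm=22.0, thigh_mm=15.0, age_yr=35)
    print(f"estimated body fat: {siri_percent_fat(density):.1f} %")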

  3. Advancing Satellite-Based Flood Prediction in Complex Terrain Using High-Resolution Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Anagnostou, E. N.; Nikolopoulos, E. I.; Bartsotas, N. S.

    2015-12-01

    Floods constitute one of the most significant and frequent natural hazards in mountainous regions. Satellite-based precipitation products offer in many cases the only available source of QPE. However, satellite-based QPE over complex terrain suffers from significant bias that limits its applicability for hydrologic modeling. In this work we investigate the potential of a new correction procedure, which involves the use of high-resolution numerical weather prediction (NWP) model simulations to adjust satellite QPE. Adjustment is based on matching the probability density functions of the satellite and NWP (used as reference) precipitation distributions. The impact of the correction procedure on the simulated hydrologic response is examined for 15 storm events that generated floods over the mountainous Upper Adige region of Northern Italy. Atmospheric simulations were performed at 1-km resolution with a state-of-the-art atmospheric model (RAMS/ICLAMS). The proposed error correction procedure was then applied to the widely used TRMM 3B42 satellite precipitation product, and the evaluation of the correction was based on independent in situ precipitation measurements from a dense rain gauge network (1 gauge per 70 km²) available in the study area. Satellite QPE, before and after correction, are used to simulate the flood response using ARFFS (Adige River Flood Forecasting System), a semi-distributed hydrologic model used for operational flood forecasting in the region. Results showed that the bias in satellite QPE before correction was significant and had a tremendous impact on the simulation of the flood peak; however, the correction procedure was able to reduce the bias in QPE and therefore considerably improve the simulated flood hydrograph.
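
    A minimal sketch of pdf matching (quantile mapping) of the kind described, assuming an empirical-CDF implementation rather than the authors' exact procedure:

    """Quantile mapping of biased satellite QPE onto a reference distribution (synthetic data)."""
    import numpy as np

    def quantile_map(satellite, reference, values):
        """Replace each value with the reference quantile at its satellite CDF rank."""
        sat_sorted = np.sort(satellite)
        ref_sorted = np.sort(reference)
        prob = np.searchsorted(sat_sorted, values, side="right") / len(sat_sorted)
        prob = np.clip(prob, 0.0, 1.0)
        return np.quantile(ref_sorted, prob)

    rng = np.random.default_rng(0)
    reference = rng.gamma(shape=2.0, scale=6.0, size=5000)          # stand-in for 1-km NWP rainfall
    satellite = 0.6 * rng.gamma(shape=2.0, scale=6.0, size=5000)    # biased-low satellite QPE

    raw = np.array([2.0, 10.0, 25.0])
    print("raw satellite values:   ", raw)
    print("after quantile mapping: ", np.round(quantile_map(satellite, reference, raw), 2))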

  4. An Accurate Method for Prediction of Protein-Ligand Binding Site on Protein Surface Using SVM and Statistical Depth Function

    PubMed Central

    Wang, Kui; Gao, Jianzhao; Shen, Shiyi; Tuszynski, Jack A.; Ruan, Jishou

    2013-01-01

    Since proteins carry out their functions through interactions with other molecules, accurately identifying the protein-ligand binding site plays an important role in protein functional annotation and rational drug discovery. In the past two decades, many algorithms have been presented to predict protein-ligand binding sites. In this paper, we introduce a statistical depth function to define negative samples and propose an SVM-based method that integrates sequence and structural information to predict binding sites. The results show that the present method performs better than existing ones. The accuracy, sensitivity, and specificity on the training set are 77.55%, 56.15%, and 87.96%, respectively; on the independent test set, the accuracy, sensitivity, and specificity are 80.36%, 53.53%, and 92.38%, respectively. PMID:24195070

  5. Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2002-01-01

    NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref. 1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC, which allows the incorporation of complex local inelastic constitutive models, MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can be, and have been, built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.

  6. Use of medium-range numerical weather prediction model output to produce forecasts of streamflow

    USGS Publications Warehouse

    Clark, M.P.; Hay, L.E.

    2004-01-01

    This paper examines an archive containing over 40 years of 8-day atmospheric forecasts over the contiguous United States from the NCEP reanalysis project to assess the possibilities for using medium-range numerical weather prediction model output for predictions of streamflow. This analysis shows the biases in the NCEP forecasts to be quite extreme. In many regions, systematic precipitation biases exceed 100% of the mean, with temperature biases exceeding 3°C. In some locations, biases are even higher. The accuracy of NCEP precipitation and 2-m maximum temperature forecasts is computed by interpolating the NCEP model output for each forecast day to the location of each station in the NWS cooperative network and computing the correlation with station observations. Results show that the accuracy of the NCEP forecasts is rather low in many areas of the country. Most apparent is the generally low skill in precipitation forecasts (particularly in July) and low skill in temperature forecasts in the western United States, the eastern seaboard, and the southern tier of states. These results outline a clear need for additional processing of the NCEP Medium-Range Forecast Model (MRF) output before it is used for hydrologic predictions. Techniques of model output statistics (MOS) are used in this paper to downscale the NCEP forecasts to station locations. Forecasted atmospheric variables (e.g., total column precipitable water, 2-m air temperature) are used as predictors in a forward screening multiple linear regression model to improve forecasts of precipitation and temperature for stations in the National Weather Service cooperative network. This procedure effectively removes all systematic biases in the raw NCEP precipitation and temperature forecasts. MOS guidance also results in substantial improvements in the accuracy of maximum and minimum temperature forecasts throughout the country. For precipitation, forecast improvements were less impressive. MOS guidance increases
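
    The MOS step described above can be illustrated with a small forward-screening regression on synthetic data; the stopping rule and predictor set below are assumptions, not the operational configuration.

    """Forward-screening multiple linear regression in the spirit of MOS (synthetic data)."""
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 400, 8
    X = rng.standard_normal((n, p))                              # candidate forecast predictors
    y = 1.5 * X[:, 2] - 0.8 * X[:, 5] + rng.standard_normal(n)   # synthetic observed temperature

    def fit_r2(cols):
        A = np.column_stack([X[:, cols], np.ones(n)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1.0 - resid.var() / y.var()

    selected, r2 = [], 0.0
    while True:
        candidates = [c for c in range(p) if c not in selected]
        if not candidates:
            break
        scores = {c: fit_r2(selected + [c]) for c in candidates}
        best = max(scores, key=scores.get)
        if scores[best] - r2 < 0.01:   # stop when the gain in R^2 is negligible
            break
        selected.append(best)
        r2 = scores[best]
    print(f"selected predictors: {selected}, R^2 = {r2:.3f}")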

  7. Climate simulation and numerical weather prediction using GPUs

    NASA Astrophysics Data System (ADS)

    Lapillonne, Xavier; Fuhrer, Oliver; Ruedisuehli, Stefan; Arteaga, Andrea; Osuna, Carlos; Walser, Andre; Leuenberger, Daniel

    2015-04-01

    After the successful development of a prototype GPU version of the atmospheric model COSMO, the COSMO Consortium has decided to bring these developments back into the official version in order to have an operational GPU-capable model for climate and weather prediction. The implementation is designed to avoid costly data transfer between the GPU and the CPU and to achieve the best performance. To this end, most parts of the model are ported to the GPU. Furthermore, the implementation has been specifically targeted at hardware architectures with fat nodes (nodes with multiple GPUs), which is very favourable in terms of minimizing the energy-to-solution metric. The dynamical core has been completely rewritten using a GPU-enabled domain-specific language. The rest of the model, namely the physical parameterizations and the data assimilation, is ported to the GPU using OpenACC compiler directives. In this contribution, we present the overall porting strategy as well as new features available on the GPU in the latest version of the model, in particular concerning the data assimilation. Performance and verification results obtained on several hybrid Cray systems are presented and compared against the current operational model version used at MeteoSwiss.

  8. Identifying state-dependent model error in numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Moskaitis, J.; Hansen, J.; Toth, Z.; Zhu, Y.

    2003-04-01

    Model forecasts of complex systems such as the atmosphere lose predictive skill because of two different sources of error: initial conditions error and model error. While much study has been done to determine the nature and consequences of initial conditions error in operational forecast models, relatively little has been done to identify the source of model error and to quantify the effects of model error on forecasts. Here, we attempt to "disentangle" model error from initial conditions error by applying a diagnostic tool in a simple model framework to identify poor forecasts for which model error is likely responsible. The diagnostic is based on the premise that for a perfect ensemble forecast, verification should fall outside the range of ensemble forecast states only a small percentage of the time, according to the size of the ensemble. Identifying these outlier verifications and comparing the statistics of their occurrence to those of a perfect ensemble can tell us about the role of model error in a quantitative, state-dependent manner. The same diagnostic is applied to operational NWP models to quantify the role of model error in poor forecasts (see companion paper by Toth et al.). From these results, we can infer the atmospheric processes the model cannot adequately simulate.
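
    The diagnostic rests on a simple benchmark: for a perfect, exchangeable N-member ensemble the verification falls outside the ensemble envelope with probability 2/(N+1). The sketch below compares that expectation with the outlier rate of a deliberately biased ensemble on synthetic data (illustrative only, not the operational diagnostic).

    """Ensemble-outlier diagnostic: observed vs expected rate of verifications outside the envelope."""
    import numpy as np

    rng = np.random.default_rng(0)
    n_cases, n_members = 2000, 10

    # Perfect ensemble: members and truth drawn from the same distribution.
    truth = rng.normal(0.0, 1.0, n_cases)
    members = rng.normal(0.0, 1.0, (n_cases, n_members))
    # Biased ensemble: members drawn from a shifted distribution (stand-in for model error).
    members_biased = rng.normal(0.5, 1.0, (n_cases, n_members))

    def outlier_rate(truth, members):
        outside = (truth < members.min(axis=1)) | (truth > members.max(axis=1))
        return outside.mean()

    expected = 2.0 / (n_members + 1)
    print(f"expected outlier rate (perfect ensemble): {expected:.3f}")
    print(f"observed, perfect ensemble:               {outlier_rate(truth, members):.3f}")
    print(f"observed, biased ensemble:                {outlier_rate(truth, members_biased):.3f}")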

  9. Wind field near complex terrain using numerical weather prediction model

    NASA Astrophysics Data System (ADS)

    Chim, Kin-Sang

    results by Miles (1969) and Smith (1980, 1985), and the numerical results of Stein (1992), Miranda and James (1992) and Olaffson and Bougeault (1997). It is found that the simulated result in the present study is comparable with others. The fifth part is the construction of the regime diagram for the Lantau island of Hong Kong. All eight major wind directions are discussed.

  10. TURBULENT LINEWIDTHS IN PROTOPLANETARY DISKS: PREDICTIONS FROM NUMERICAL SIMULATIONS

    SciTech Connect

    Simon, Jacob B.; Beckwith, Kris; Armitage, Philip J.

    2011-12-10

    Submillimeter observations of protoplanetary disks now approach the acuity needed to measure the turbulent broadening of molecular lines. These measurements constrain disk angular momentum transport, and furnish evidence of the turbulent environment within which planetesimal formation takes place. We use local magnetohydrodynamic (MHD) simulations of the magnetorotational instability (MRI) to predict the distribution of turbulent velocities in low-mass protoplanetary disks, as a function of radius and height above the mid-plane. We model both ideal MHD disks and disks in which Ohmic dissipation results in a dead zone of suppressed turbulence near the mid-plane. Under ideal conditions, the disk mid-plane is characterized by a velocity distribution that peaks near v ≈ 0.1 c_s (where c_s is the local sound speed), while supersonic velocities are reached at z > 3H (where H is the vertical pressure scale height). Residual velocities of v ≈ 10^-2 c_s persist near the mid-plane in dead zones, while the surface layers remain active. Anisotropic variation of the linewidth with disk inclination is modest. We compare our MHD results to hydrodynamic simulations in which large-scale forcing is used to initiate similar turbulent velocities. We show that the qualitative trend of increasing v with height, seen in the MHD case, persists for forced turbulence and is likely a generic property of disk turbulence. Percentage-level determinations of v at different heights within the disk, or spatially resolved observations that probe the inner disk containing the dead zone region, are therefore needed to test whether the MRI is responsible for protoplanetary disk turbulence.

  11. A cross-race effect in metamemory: Predictions of face recognition are more accurate for members of our own race.

    PubMed

    Hourihan, Kathleen L; Benjamin, Aaron S; Liu, Xiping

    2012-09-01

    The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness's claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness.

  12. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we found that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed.
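
    A minimal sketch of fitting a saccharification time course to the Weibull form y(t) = y_max * (1 - exp(-(t/λ)^n)), with synthetic data standing in for real hydrolysis measurements (the functional form is as described in the abstract; the numbers are not from the study):

    """Weibull fit of a saccharification time course (synthetic data)."""
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_yield(t, y_max, lam, n):
        return y_max * (1.0 - np.exp(-(t / lam) ** n))

    t = np.array([2, 4, 8, 12, 24, 48, 72], dtype=float)      # hours
    y = np.array([8.0, 15.0, 27.0, 36.0, 55.0, 70.0, 76.0])   # glucose yield, %

    popt, _ = curve_fit(weibull_yield, t, y, p0=[80.0, 20.0, 1.0], bounds=(0, np.inf))
    y_max, lam, n = popt
    print(f"y_max = {y_max:.1f} %, lambda (characteristic time) = {lam:.1f} h, n = {n:.2f}")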

  13. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we found that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. PMID:26121186

  14. Why don't we learn to accurately forecast feelings? How misremembering our predictions blinds us to past forecasting errors.

    PubMed

    Meyvis, Tom; Ratner, Rebecca K; Levav, Jonathan

    2010-11-01

    Why do affective forecasting errors persist in the face of repeated disconfirming evidence? Five studies demonstrate that people misremember their forecasts as consistent with their experience and thus fail to perceive the extent of their forecasting error. As a result, people do not learn from past forecasting errors and fail to adjust subsequent forecasts. In the context of a Super Bowl loss (Study 1), a presidential election (Studies 2 and 3), an important purchase (Study 4), and the consumption of candies (Study 5), individuals mispredicted their affective reactions to these experiences and subsequently misremembered their predictions as more accurate than they actually had been. The findings indicate that this recall error results from people's tendency to anchor on their current affective state when trying to recall their affective forecasts. Further, those who showed larger recall errors were less likely to learn to adjust their subsequent forecasts, and reminding people of their actual forecasts enhanced learning. These results suggest that a failure to accurately recall one's past predictions contributes to the perpetuation of forecasting errors.

  15. Further Developments in Range-Extended GPS Kinematic Positioning Using a Numerical Weather Prediction Model

    NASA Astrophysics Data System (ADS)

    Nievinski, F. G.; Santos, M.

    2006-05-01

    We have been investigating the prediction of radio propagation delays due to the neutral atmosphere via ray-tracing in numerical weather prediction models (NWP), aiming at improving kinematic positioning on medium-distance baselines. In this article we describe the developments in our ray-tracer since our latest publication (Nievinski et al., 2005). In our previous work we indicated the need to further investigate the transformation from line-of-sight distance to geopotential height, because we suspected it could be introducing biases at the centimetre level in the predicted delays. We tested seven different formulas. To validate that transformation to the vertical coordinate, we compared NWP-interpolated pressure values against pressure values measured at North American stations. We came up with two formulas that give better results than the one we had used before, one of which is both more accurate and faster than the previous one. Using this new formula we were able to reduce the bias in pressure to the millimetre level (converting from pressure to hydrostatic delay, for easier interpretation). To complete the validation of the transformation to the NWP coordinate space, we investigated the horizontal coordinates as well. We did so by comparing the shorelines inferred from the NWP ground geopotential height field against the ones taken from a high-resolution vector database. We found unexpected discrepancies at the kilometer level (in a 15 to 20 km resolution model), due to different interpretations of the earth models used by the NWP-producing agency. Those discrepancies are critical in coastal and high-slope areas, where the horizontal gradients of the weather parameters (e.g., pressure) are especially high. From these two validations, we conclude that we should prefer to be consistent with the formulas used in the generation of the NWP, instead of using arguably more rigorous ones (from a geodetic point of view). In the past we have analyzed only short (1

  16. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding

    PubMed Central

    Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.

    2016-01-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally—a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592
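
    A minimal two-state sketch of the idea (assumed rates and codon counts, not the authors' full model): between codons, the folded-state probability relaxes toward its equilibrium value with rate k_f + k_u during each codon's dwell time, and folding is only allowed once the domain has emerged from the exit tunnel.

    """Two-state co-translational folding sketch (assumed rates; illustrative only)."""
    import numpy as np

    k_f, k_u = 2.0, 0.05            # folding / unfolding rates once the domain has emerged (1/s), assumed
    dwell = np.full(300, 0.1)       # per-codon dwell times (s); slow synonymous codons could be made longer
    emergence_codon = 120           # codon at which the domain has fully emerged from the exit tunnel (assumed)

    p_folded = 0.0
    curve = []
    for i, tau in enumerate(dwell):
        if i >= emergence_codon:    # folding is only possible after the domain emerges
            p_eq = k_f / (k_f + k_u)
            p_folded = p_eq + (p_folded - p_eq) * np.exp(-(k_f + k_u) * tau)
        curve.append(p_folded)

    print(f"P(folded) at codon 150:   {curve[150]:.2f}")
    print(f"P(folded) at final codon: {curve[-1]:.2f}")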

  17. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding.

    PubMed

    Nissley, Daniel A; Sharma, Ajeet K; Ahmed, Nabeel; Friedrich, Ulrike A; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P

    2016-01-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally--a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592

  18. A simple yet accurate correction for winner's curse can predict signals discovered in much larger genome scans

    PubMed Central

    Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin

    2016-01-01

    Motivation: For genetic studies, statistically significant variants explain far less trait variance than ‘sub-threshold’ association signals. To dimension follow-up studies, researchers need to accurately estimate ‘true’ effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner’s curse biases, which are reduced only by laborious winner’s curse adjustments (WCAs). Given that Z-score estimates can be theoretically translated to other scales, we propose a simple method to compute WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks these towards zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to the zero Z-score value. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values on the Z-score scale. When compared to competitors, realistic simulations suggest that FIQT is more (i) accurate and (ii) computationally efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genetic Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals which become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
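
    The two FIQT steps described above (an FDR adjustment of P-values followed by back-transformation to the Z scale) can be sketched in a few lines; this Python version mirrors the description in the abstract rather than the authors' R implementation.

```python
import numpy as np
from scipy.stats import norm

def fiqt(z):
    """Winner's-curse-adjusted Z-scores via FDR Inverse Quantile Transformation.

    (i) Convert Z-scores to two-sided P-values and apply a Benjamini-Hochberg
    adjustment; (ii) back-transform the adjusted P-values to the Z scale,
    keeping the original sign.
    """
    z = np.asarray(z, dtype=float)
    p = 2.0 * norm.sf(np.abs(z))                      # two-sided P-values
    m = len(p)
    order = np.argsort(p)
    bh = p[order] * m / (np.arange(m) + 1.0)          # Benjamini-Hochberg step
    bh = np.minimum.accumulate(bh[::-1])[::-1]        # enforce monotonicity
    p_adj = np.empty(m)
    p_adj[order] = np.clip(bh, 0.0, 1.0)
    return np.sign(z) * norm.isf(p_adj / 2.0)         # shrunken Z-scores

print(fiqt([5.6, 3.1, 1.2, -2.4, 0.3]))   # large |Z| shrink little, modest |Z| a lot
```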

  19. Small-scale field experiments accurately scale up to predict density dependence in reef fish populations at large scales.

    PubMed

    Steele, Mark A; Forrester, Graham E

    2005-09-20

    Field experiments provide rigorous tests of ecological hypotheses but are usually limited to small spatial scales. It is thus unclear whether these findings extrapolate to larger scales relevant to conservation and management. We show that the results of experiments detecting density-dependent mortality of reef fish on small habitat patches scale up to have similar effects on much larger entire reefs that are the size of small marine reserves and approach the scale at which some reef fisheries operate. We suggest that accurate scaling is due to the type of species interaction causing local density dependence and the fact that localized events can be aggregated to describe larger-scale interactions with minimal distortion. Careful extrapolation from small-scale experiments identifying species interactions and their effects should improve our ability to predict the outcomes of alternative management strategies for coral reef fishes and their habitats.

  20. TIMP2•IGFBP7 biomarker panel accurately predicts acute kidney injury in high-risk surgical patients

    PubMed Central

    Gunnerson, Kyle J.; Shaw, Andrew D.; Chawla, Lakhmir S.; Bihorac, Azra; Al-Khafaji, Ali; Kashani, Kianoush; Lissauer, Matthew; Shi, Jing; Walker, Michael G.; Kellum, John A.

    2016-01-01

    BACKGROUND Acute kidney injury (AKI) is an important complication in surgical patients. Existing biomarkers and clinical prediction models underestimate the risk for developing AKI. We recently reported data from two trials of 728 and 408 critically ill adult patients in whom urinary TIMP2•IGFBP7 (NephroCheck, Astute Medical) was used to identify patients at risk of developing AKI. Here we report a preplanned analysis of surgical patients from both trials to assess whether urinary tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor–binding protein 7 (IGFBP7) accurately identify surgical patients at risk of developing AKI. STUDY DESIGN We enrolled adult surgical patients at risk for AKI who were admitted to one of 39 intensive care units across Europe and North America. The primary end point was moderate-severe AKI (equivalent to KDIGO [Kidney Disease Improving Global Outcomes] stages 2–3) within 12 hours of enrollment. Biomarker performance was assessed using the area under the receiver operating characteristic curve, integrated discrimination improvement, and category-free net reclassification improvement. RESULTS A total of 375 patients were included in the final analysis of whom 35 (9%) developed moderate-severe AKI within 12 hours. The area under the receiver operating characteristic curve for [TIMP-2]•[IGFBP7] alone was 0.84 (95% confidence interval, 0.76–0.90; p < 0.0001). Biomarker performance was robust in sensitivity analysis across predefined subgroups (urgency and type of surgery). CONCLUSION For postoperative surgical intensive care unit patients, a single urinary TIMP2•IGFBP7 test accurately identified patients at risk for developing AKI within the ensuing 12 hours and its inclusion in clinical risk prediction models significantly enhances their performance. LEVEL OF EVIDENCE Prognostic study, level I. PMID:26816218

  1. A novel fibrosis index comprising a non-cholesterol sterol accurately predicts HCV-related liver cirrhosis.

    PubMed

    Ydreborg, Magdalena; Lisovskaja, Vera; Lagging, Martin; Brehm Christensen, Peer; Langeland, Nina; Buhl, Mads Rauning; Pedersen, Court; Mørch, Kristine; Wejstål, Rune; Norkrans, Gunnar; Lindh, Magnus; Färkkilä, Martti; Westin, Johan

    2014-01-01

    Diagnosis of liver cirrhosis is essential in the management of chronic hepatitis C virus (HCV) infection. Liver biopsy is invasive and thus entails a risk of complications as well as a potential risk of sampling error. Therefore, non-invasive diagnostic tools are preferential. The aim of the present study was to create a model for accurate prediction of liver cirrhosis based on patient characteristics and biomarkers of liver fibrosis, including a panel of non-cholesterol sterols reflecting cholesterol synthesis and absorption and secretion. We evaluated variables with potential predictive significance for liver fibrosis in 278 patients originally included in a multicenter phase III treatment trial for chronic HCV infection. A stepwise multivariate logistic model selection was performed with liver cirrhosis, defined as Ishak fibrosis stage 5-6, as the outcome variable. A new index, referred to as Nordic Liver Index (NoLI) in the paper, was based on the model: Log-odds (predicting cirrhosis) = -12.17 + (age × 0.11) + (BMI (kg/m²) × 0.23) + (D7-lathosterol (μg/100 mg cholesterol) × (-0.013)) + (Platelet count (×10⁹/L) × (-0.018)) + (Prothrombin-INR × 3.69). The area under the ROC curve (AUROC) for prediction of cirrhosis was 0.91 (95% CI 0.86-0.96). The index was validated in a separate cohort of 83 patients and the AUROC for this cohort was similar (0.90; 95% CI: 0.82-0.98). In conclusion, the new index may complement other methods in diagnosing cirrhosis in patients with chronic HCV infection.
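
    The NoLI formula above maps directly to a few lines of code; the sketch below evaluates the published log-odds expression and converts it to a probability with the logistic function, using an entirely hypothetical patient for illustration.

```python
import math

def nordic_liver_index(age, bmi, lathosterol, platelets, inr):
    """NoLI log-odds of cirrhosis and the corresponding probability.

    Units follow the abstract: age in years, BMI in kg/m^2, D7-lathosterol in
    ug/100 mg cholesterol, platelet count in 10^9/L, prothrombin-INR unitless.
    """
    log_odds = (-12.17 + 0.11 * age + 0.23 * bmi
                - 0.013 * lathosterol - 0.018 * platelets + 3.69 * inr)
    probability = 1.0 / (1.0 + math.exp(-log_odds))   # logistic transform
    return log_odds, probability

# Hypothetical patient: 55 y, BMI 27, lathosterol 60, platelets 120, INR 1.2
print(nordic_liver_index(55, 27, 60, 120, 1.2))
```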

  2. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions.

    PubMed

    Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan

    2016-05-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. To
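
    The category-aware thresholding and consensus idea described above can be illustrated with a toy scheme like the one below; the tool names come from the abstract, but the thresholds, the assumption that all scores are oriented so that larger means more deleterious, and the simple majority vote are placeholders rather than the published consensus procedure.

```python
def consensus_call(scores, thresholds, category):
    """Toy category-aware consensus over several variant-effect tools.

    `scores` maps tool -> raw score and `thresholds` maps (tool, category) ->
    a category-optimal decision threshold. Scores are assumed pre-oriented so
    that larger means more deleterious; thresholds are invented placeholders.
    """
    votes = [1 if scores[t] >= thresholds[(t, category)] else -1 for t in scores]
    margin = sum(votes) / len(votes)          # -1 .. +1, a crude confidence proxy
    return ("deleterious" if margin > 0 else "neutral"), abs(margin)

scores = {"CADD": 23.1, "DANN": 0.91, "FATHMM": -1.2, "FunSeq2": 2.5, "GWAVA": 0.55}
thresholds = {("CADD", "missense"): 20.0, ("DANN", "missense"): 0.9,
              ("FATHMM", "missense"): -1.5, ("FunSeq2", "missense"): 1.5,
              ("GWAVA", "missense"): 0.5}
print(consensus_call(scores, thresholds, "missense"))
```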

  3. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions

    PubMed Central

    Brezovský, Jan

    2016-01-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools’ predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations

  4. Evaluating the use of high-resolution numerical weather forecast for debris flow prediction.

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Efthymios I.; Bartsotas, Nikolaos S.; Borga, Marco; Kallos, George

    2015-04-01

    The sudden occurrence combined with the high destructive power of debris flows poses a significant threat to human life and infrastructures. Therefore, developing early warning procedures for the mitigation of debris flow risk is of great economic and societal importance. Given that rainfall is the predominant factor controlling debris flow triggering, it is indisputable that development of effective debris flow warning procedures requires accurate knowledge of the properties (e.g. duration, intensity) of the triggering rainfall. Moreover, efficient and timely response of emergency operations depends highly on the lead-time provided by the warning systems. Currently, the majority of early warning systems for debris flows are based on nowcasting procedures. While the latter may be successful in predicting the hazard, they provide warnings with a relatively short lead-time (~6h). Increasing the lead-time is necessary in order to improve the pre-incident operations and communication of the emergency, thus coupling warning systems with weather forecasting is essential for advancing early warning procedures. In this work we evaluate the potential of using high-resolution (1km) rainfall fields forecasted with a state-of-the-art numerical weather prediction model (RAMS/ICLAMS), in order to predict the occurrence of debris flows. Analysis is focused over the Upper Adige region, Northeast Italy, an area where debris flows are frequent. Seven storm events that generated a large number (>80) of debris flows during the period 2007-2012 are analyzed. Radar-based rainfall estimates, available from the operational C-band radar located at Mt Macaion, are used as the reference to evaluate the forecasted rainfall fields. Evaluation is mainly focused on assessing the error in forecasted rainfall properties (magnitude, duration) and the correlation in space and time with the reference field. Results show that the forecasted rainfall fields captured very well the magnitude and
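
    A sketch of the kind of forecast-versus-radar comparison described above is shown below; the metrics (event-total magnitude bias, a wet-time-step duration difference and the spatial correlation of storm totals) and the 0.1 mm wet threshold are illustrative choices, not the exact evaluation protocol of the study.

```python
import numpy as np

def evaluate_storm_forecast(forecast, radar, wet_threshold=0.1):
    """Compare forecast and radar rainfall for one storm on a common grid.

    Both inputs have shape (time, y, x) in mm per time step. Returns the
    relative magnitude bias of the storm total, the difference in the number
    of 'wet' time steps (domain-mean rain above `wet_threshold`), and the
    spatial correlation of the storm-total fields.
    """
    f_total, r_total = forecast.sum(axis=0), radar.sum(axis=0)
    magnitude_bias = (f_total.sum() - r_total.sum()) / r_total.sum()
    wet_steps = lambda x: int((x.reshape(x.shape[0], -1).mean(axis=1) > wet_threshold).sum())
    duration_error = wet_steps(forecast) - wet_steps(radar)
    spatial_corr = np.corrcoef(f_total.ravel(), r_total.ravel())[0, 1]
    return magnitude_bias, duration_error, spatial_corr

rng = np.random.default_rng(0)
radar = rng.gamma(2.0, 1.0, size=(24, 40, 40))            # synthetic reference field
forecast = radar * 1.1 + rng.normal(0.0, 0.5, radar.shape).clip(0.0)
print(evaluate_storm_forecast(forecast, radar))
```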

  5. First principles predictions of intrinsic defects in aluminum arsenide, AlAs : numerical supplement.

    SciTech Connect

    Schultz, Peter Andrew

    2012-04-01

    This Report presents numerical tables summarizing properties of intrinsic defects in aluminum arsenide, AlAs, as computed by density functional theory. This Report serves as a numerical supplement to the results published in: P.A. Schultz, 'First principles predictions of intrinsic defects in Aluminum Arsenide, AlAs', Materials Research Society Symposia Proceedings 1370 (2011; SAND2011-2436C), and intended for use as reference tables for a defect physics package in device models.

  6. Real-time zenith tropospheric delays in support of numerical weather prediction applications

    NASA Astrophysics Data System (ADS)

    Dousa, Jan; Vaclavovic, Pavel

    2014-05-01

    The Geodetic Observatory Pecný (GOP) has routinely estimated near real-time zenith total delays (ZTD) from GPS permanent stations for assimilation in numerical weather prediction (NWP) models for more than 12 years. Besides European regional and global GPS and GLONASS solutions, we have recently developed real-time estimates aimed at supporting NWP nowcasting or severe weather event monitoring. While all previous solutions are based on batch data processing in a network mode, the real-time solution exploits real-time global orbits and clocks from the International GNSS Service (IGS) and a Precise Point Positioning (PPP) processing strategy. A new application, G-Nut/Tefnut, has been developed and real-time ZTDs have been continuously processed in a nine-month demonstration campaign (February-October 2013) for 36 selected European and global stations. The resulting ZTDs can be characterized by mean standard deviations of 6-10 mm, but large biases of up to 20 mm remain due to missing precise models in the software. These results fulfilled the threshold requirements for operational NWP nowcasting (i.e. 30 mm in ZTD). Since the remaining ZTD biases can be effectively eliminated using a bias-reduction procedure prior to assimilation, the results approach the target requirements in terms of relative accuracy (i.e. 6 mm in ZTD). The real-time strategy and software are under development and we foresee further improvements in reducing biases and in optimizing the accuracy within the required timeliness. The real-time products from the International GNSS Service were found to be accurate and stable enough to support PPP-based tropospheric estimates for NWP nowcasting.
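
    The bias-reduction step mentioned above is often implemented as the removal of a slowly varying station-specific offset before assimilation; the sketch below subtracts the mean GNSS-minus-background difference over a trailing window, with the window length and the use of an NWP background as reference being assumptions of this sketch.

```python
import numpy as np

def bias_reduced_ztd(ztd_gnss, ztd_background, window=96):
    """Subtract a slowly varying station bias from real-time ZTD estimates.

    The bias at each epoch is the mean GNSS-minus-background difference over a
    trailing window (here 96 epochs); the window length and the choice of an
    NWP background as reference are illustrative.
    """
    gnss = np.asarray(ztd_gnss, float)
    diff = gnss - np.asarray(ztd_background, float)
    corrected = np.empty_like(gnss)
    for i in range(len(gnss)):
        lo = max(0, i - window)
        bias = diff[lo:i].mean() if i > lo else 0.0
        corrected[i] = gnss[i] - bias
    return corrected

rng = np.random.default_rng(1)
background = np.full(200, 2.400)                                  # metres
gnss = background + 0.015 + 0.008 * rng.standard_normal(200)     # 15 mm bias + noise
corrected = bias_reduced_ztd(gnss, background)
print(np.mean(corrected - background))   # residual bias well below the original 15 mm
```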

  7. Sensitivity analysis of numerical weather prediction radiative schemes to forecast direct solar radiation over Australia

    NASA Astrophysics Data System (ADS)

    Mukkavilli, S. K.; Kay, M. J.; Taylor, R.; Prasad, A. A.; Troccoli, A.

    2014-12-01

    The Australian Solar Energy Forecasting System (ASEFS) project requires forecasting timeframes which range from nowcasting to long-term forecasts (minutes to two years). As concentrating solar power (CSP) plant operators are one of the key stakeholders in the national energy market, research and development enhancements for direct normal irradiance (DNI) forecasts are a major subtask. This project involves comparing different radiative scheme codes to improve day-ahead DNI forecasts on the national supercomputing infrastructure running mesoscale simulations on NOAA's Weather Research & Forecast (WRF) model. ASEFS also requires aerosol data fusion for improving accurate representation of spatio-temporally variable atmospheric aerosols to reduce DNI bias error in clear sky conditions over southern Queensland & New South Wales, where solar power is vulnerable to uncertainties from frequent aerosol radiative events such as bush fires and desert dust. Initial results from thirteen years of the Bureau of Meteorology's (BOM) deseasonalised DNI and MODIS NASA-Terra aerosol optical depth (AOD) anomalies demonstrated strong negative correlations in north and southeast Australia along with strong variability in AOD (~0.03-0.05). Radiative transfer schemes, DNI and AOD anomaly correlations will be discussed for the population and transmission grid centric regions where current and planned CSP plants dispatch electricity to capture peak prices in the market. Aerosol and solar irradiance datasets include satellite and ground based assimilations from the national BOM, regional aerosol researchers and agencies. The presentation will provide an overview of this ASEFS project task on WRF and results to date. The overall goal of this ASEFS subtask is to develop a hybrid numerical weather prediction (NWP) and statistical/machine learning multi-model ensemble strategy that meets future operational requirements of CSP plant operators.

  8. Accurate electrical prediction of memory array through SEM-based edge-contour extraction using SPICE simulation

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan; Rotstein, Israel; Peltinov, Ram; Latinski, Sergei; Adan, Ofer; Levi, Shimon; Menadeva, Ovadya

    2009-03-01

    The continued scaling of transistors towards smaller devices, similar (or larger) drive current per micron and faster devices increases the challenge of predicting and controlling the transistor off-state current. Typically, electrical simulators such as SPICE use the design intent (as-drawn GDS data). In more sophisticated cases, the simulators are fed with the pattern after lithography and etch process simulations. As the importance of electrical simulation accuracy increases and leakage becomes more dominant, there is a need to feed these simulators with more accurate information extracted from physical on-silicon transistors. Our methodology to predict changes in device performance due to systematic lithography and etch effects was used in this paper. In general, the methodology consists of using OPCCmaxTM for systematic Edge-Contour Extraction (ECE) from transistors as manufactured, including any image distortions such as line-end shortening, corner rounding and line-edge roughness. These measurements are used for SPICE modeling. A possible application of this new metrology is to provide physical and electrical statistical data ahead of time, improving time to market. In this work, we applied our methodology to analyze small and large arrays of 2.14 μm² 6T-SRAM, manufactured using the Tower Standard Logic for General Purposes Platform. Four out of the six transistors used a "U-Shape AA", known to have higher variability. The predicted electrical performance of the transistors, drive current and leakage current, in terms of nominal values and variability, is presented. We also used the methodology to analyze an entire SRAM block array. A study of isolation leakage and variability is also presented.

  9. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  10. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
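
    The "simple learned weighted sums" idea above amounts to fitting a linear readout from a population of firing rates to a behavioural choice; the ridge-regularised least-squares sketch below is one plausible way to do this on synthetic data and is not the exact training procedure used in the study.

```python
import numpy as np

def fit_linear_readout(rates, labels, l2=1.0):
    """Ridge-regularised weighted sum of firing rates predicting a +/-1 choice."""
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])          # add bias term
    return np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ labels)

def readout_accuracy(w, rates, labels):
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    return float(np.mean(np.sign(X @ w) == labels))

rng = np.random.default_rng(0)
neurons, trials = 168, 400
signal = rng.normal(size=neurons)                    # each neuron's task selectivity
labels = rng.choice([-1.0, 1.0], size=trials)
rates = labels[:, None] * signal[None, :] * 0.3 + rng.normal(size=(trials, neurons))
w = fit_linear_readout(rates[:300], labels[:300])
print(readout_accuracy(w, rates[300:], labels[300:]))   # held-out accuracy
```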

  11. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  13. Computational finite element bone mechanics accurately predicts mechanical competence in the human radius of an elderly population.

    PubMed

    Mueller, Thomas L; Christen, David; Sandercott, Steve; Boyd, Steven K; van Rietbergen, Bert; Eckstein, Felix; Lochmüller, Eva-Maria; Müller, Ralph; van Lenthe, G Harry

    2011-06-01

    High-resolution peripheral quantitative computed tomography (HR-pQCT) is clinically available today and provides a non-invasive measure of 3D bone geometry and micro-architecture with unprecedented detail. In combination with microarchitectural finite element (μFE) models it can be used to determine bone strength using a strain-based failure criterion. Yet, images from only a relatively small part of the radius are acquired and it is not known whether the region recommended for clinical measurements does predict forearm fracture load best. Furthermore, it is questionable whether the currently used failure criterion is optimal because of improvements in image resolution, changes in the clinically measured volume of interest, and because the failure criterion depends on the amount of bone present. Hence, we hypothesized that bone strength estimates would improve by measuring a region closer to the subchondral plate, and by defining a failure criterion that would be independent of the measured volume of interest. To answer our hypotheses, 20% of the distal forearm length from 100 cadaveric but intact human forearms was measured using HR-pQCT. μFE bone strength was analyzed for different subvolumes, as well as for the entire 20% of the distal radius length. Specifically, failure criteria were developed that provided accurate estimates of bone strength as assessed experimentally. It was shown that distal volumes were better in predicting bone strength than more proximal ones. Clinically speaking, this would argue to move the volume of interest for the HR-pQCT measurements even more distally than currently recommended by the manufacturer. Furthermore, new parameter settings using the strain-based failure criterion are presented providing better accuracy for bone strength estimates.

  14. Numerical prediction of three-dimensional juncture region flow using the parabolic Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Manhardt, P. D.; Orzechowski, J. A.

    1979-01-01

    A numerical solution algorithm is established for prediction of subsonic turbulent three-dimensional flows in aerodynamic configuration juncture regions. A turbulence closure model is established using the complete Reynolds stress. Pressure coupling is accomplished using the concepts of complementary and particular solutions to a Poisson equation. Specifications for data input juncture geometry modification are presented.

  15. A Support Vector Machine model for the prediction of proteotypic peptides for accurate mass and time proteomics

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.

    2008-07-01

    Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
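
    A model of the kind described above can be prototyped with a standard SVM library; the sketch below trains an RBF-kernel SVM on a hypothetical peptide-descriptor matrix (12 stand-in features rather than the 35 physicochemical properties used in the paper) and reports cross-validated accuracy.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical descriptor matrix: one row per peptide, columns standing in for
# amino acid content, charge, hydrophilicity and polarity properties; label
# 1 = peptide observed as proteotypic. All values are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = ((X[:, 0] + 0.5 * X[:, 3] - 0.7 * X[:, 7]
      + rng.normal(scale=0.5, size=500)) > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(model, X, y, cv=5).mean())     # cross-validated accuracy
```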

  16. Analytical and numerical models to predict the behavior of unbonded flexible risers under torsion

    NASA Astrophysics Data System (ADS)

    Ren, Shao-fei; Xue, Hong-xiang; Tang, Wen-yong

    2016-04-01

    This paper presents analytical and numerical models to predict the behavior of unbonded flexible risers under torsion. The analytical model takes local bending and torsion of the tensile armor wires into consideration, and equilibrium equations for the forces and displacements of the layers are deduced. The numerical model includes the lay angle, the cross-sectional profiles of the carcass and pressure armor layer, and contact between layers. Abaqus/Explicit quasi-static simulation and mass scaling are adopted to avoid the convergence problems and excessive computation time caused by geometric and contact nonlinearities. Results show that local bending and torsion of the helical strips may have a great influence on torsional stiffness, but the stress related to bending and torsion is negligible; the presence of anti-friction tapes may have a great influence on both torsional stiffness and stress; and the hysteresis of the torsion-twist relationship under cyclic loading is captured by the numerical model but cannot be predicted by the analytical model, because the latter neglects friction between layers.

  17. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work aimed at enhancing turbine performance, including extending the operational life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-Averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds Stress turbulence model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
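
    The bubble-growth model referred to above is based on Rayleigh-Plesset dynamics; the sketch below integrates the classical (unmodified) equation for a single bubble under a mild sinusoidal far-field pressure, with water properties and driving parameters chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical Rayleigh-Plesset dynamics for one bubble in water under a mild
# 20 kHz far-field pressure oscillation; all values are illustrative.
rho, mu, sigma = 998.0, 1.0e-3, 0.0725         # density, viscosity, surface tension
p_v, p_inf0, R0 = 2.3e3, 101.3e3, 50.0e-6      # vapour pressure, ambient pressure, radius
p_g0 = p_inf0 - p_v + 2.0 * sigma / R0         # gas pressure for initial equilibrium
k = 1.4                                        # polytropic exponent

def p_inf(t):
    return p_inf0 * (1.0 - 0.3 * np.sin(2.0 * np.pi * 20.0e3 * t))

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_b = p_v + p_g0 * (R0 / R) ** (3.0 * k)   # pressure inside the bubble
    Rddot = ((p_b - p_inf(t)) / rho - 1.5 * Rdot ** 2
             - 4.0 * mu * Rdot / (rho * R) - 2.0 * sigma / (rho * R)) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 2.0e-4), [R0, 0.0], max_step=2.0e-8)
print(sol.y[0].max() / R0)    # maximum growth ratio of the bubble radius
```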

  18. High IFIT1 expression predicts improved clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma.

    PubMed

    Zhang, Jin-Feng; Chen, Yao; Lin, Guo-Shi; Zhang, Jian-Dong; Tang, Wen-Long; Huang, Jian-Huang; Chen, Jin-Shou; Wang, Xing-Fu; Lin, Zhi-Xiong

    2016-06-01

    Interferon-induced protein with tetratricopeptide repeat 1 (IFIT1) plays a key role in growth suppression and apoptosis promotion in cancer cells. Interferon was reported to induce the expression of IFIT1 and inhibit the expression of O-6-methylguanine-DNA methyltransferase (MGMT). This study aimed to investigate the expression of IFIT1, the correlation between IFIT1 and MGMT, and their impact on the clinical outcome in newly diagnosed glioblastoma. The expression of IFIT1 and MGMT and their correlation were investigated in the tumor tissues from 70 patients with newly diagnosed glioblastoma. The effects on progression-free survival and overall survival were evaluated. Of 70 cases, 57 (81.4%) tissue samples showed high expression of IFIT1 by immunostaining. The χ² test indicated that the expression of IFIT1 and MGMT was negatively correlated (r = -0.288, P = .016). Univariate and multivariate analyses confirmed high IFIT1 expression as a favorable prognostic indicator for progression-free survival (P = .005 and .017) and overall survival (P = .001 and .001), respectively. Patients with 2 favorable factors (high IFIT1 and low MGMT) had an improved prognosis as compared with others. The results demonstrated significantly increased expression of IFIT1 in newly diagnosed glioblastoma tissue. The negative correlation between IFIT1 and MGMT expression may be triggered by interferon. High IFIT1 can be a predictive biomarker of favorable clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma. PMID:26980050

  19. Numerical simulation of flow in a high head Francis turbine with prediction of efficiency, rotor stator interaction and vortex structures in the draft tube

    NASA Astrophysics Data System (ADS)

    Jošt, D.; Škerlavaj, A.; Morgut, M.; Mežnar, P.; Nobile, E.

    2015-01-01

    The paper presents numerical simulations of the flow in a model of a high head Francis turbine and a comparison of the results to measurements. Numerical simulations were done with two CFD (Computational Fluid Dynamics) codes, Ansys CFX and OpenFOAM. Steady-state simulations were performed with the k-epsilon and SST models, while for transient simulations the SAS SST ZLES model was used. With proper grid refinement in the distributor and runner, and by taking into account losses in the labyrinth seals, a very accurate prediction of the torque on the shaft, head and efficiency was obtained. The calculated axial and circumferential velocity components on two planes in the draft tube matched the experimental results well.

  20. A New Objective Technique for Verifying Mesoscale Numerical Weather Prediction Models

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Manobianco, John; Lane, John E.; Immer, Christopher D.

    2003-01-01

    This report presents a new objective technique to verify predictions of the sea-breeze phenomenon over east-central Florida by the Regional Atmospheric Modeling System (RAMS) mesoscale numerical weather prediction (NWP) model. The Contour Error Map (CEM) technique identifies sea-breeze transition times in objectively-analyzed grids of observed and forecast wind, verifies the forecast sea-breeze transition times against the observed times, and computes the mean post-sea breeze wind direction and speed to compare the observed and forecast winds behind the sea-breeze front. The CEM technique is superior to traditional objective verification techniques and previously-used subjective verification methodologies because it is automated, requiring little manual intervention; it accounts for both spatial and temporal scales and variations; it accurately identifies and verifies the sea-breeze transition times; and it provides verification contour maps and simple statistical parameters for easy interpretation. The CEM uses a parallel lowpass boxcar filter and a high-order bandpass filter to identify the sea-breeze transition times in the observed and model grid points. Once the transition times are identified, CEM fits a Gaussian histogram function to the actual histogram of transition time differences between the model and observations. The fitted parameters of the Gaussian function subsequently explain the timing bias and variance of the timing differences across the valid comparison domain. Once the transition times are all identified at each grid point, the CEM computes the mean wind direction and speed during the remainder of the day for all times and grid points after the sea-breeze transition time. The CEM technique performed quite well when compared to independent meteorological assessments of the sea-breeze transition times and results from a previously published subjective evaluation. The algorithm correctly identified a forecast or observed sea-breeze occurrence
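
    The transition-time detection and Gaussian summary used by the CEM can be sketched as follows; the boxcar width, the zero-crossing rule on a smoothed onshore wind component and the synthetic timing differences are assumptions standing in for the report's actual filter pair and data.

```python
import numpy as np
from scipy.optimize import curve_fit

def transition_time(u_onshore, times, width=7):
    """First time a boxcar-smoothed onshore wind component turns positive."""
    smooth = np.convolve(u_onshore, np.ones(width) / width, mode="same")
    idx = np.where((smooth[:-1] < 0.0) & (smooth[1:] >= 0.0))[0]
    return times[idx[0] + 1] if idx.size else np.nan

def gaussian(x, amp, mu, sig):
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

rng = np.random.default_rng(1)
t = np.arange(0.0, 24.0, 0.25)                               # hours
u = np.tanh(t - 13.0) + 0.1 * rng.standard_normal(t.size)    # synthetic wind shift
print(transition_time(u, t))                                 # near 13 local time

# Summarise model-minus-observed transition times with a fitted Gaussian,
# giving a timing bias (mu) and spread (sig) as the CEM does.
diffs = rng.normal(0.5, 1.0, 500)                            # synthetic differences, hours
counts, edges = np.histogram(diffs, bins=25)
centres = 0.5 * (edges[:-1] + edges[1:])
(amp, mu, sig), _ = curve_fit(gaussian, centres, counts, p0=[counts.max(), 0.0, 1.0])
print(round(mu, 2), round(abs(sig), 2))
```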

  1. Toward Relatively General and Accurate Quantum Chemical Predictions of Solid-State 17O NMR Chemical Shifts in Various Biologically Relevant Oxygen-containing Compounds

    PubMed Central

    Rorick, Amber; Michael, Matthew A.; Yang, Liu; Zhang, Yong

    2015-01-01

    Oxygen is an important element in most biologically significant molecules and experimental solid-state 17O NMR studies have provided numerous useful structural probes to study these systems. However, computational predictions of solid-state 17O NMR chemical shift tensor properties are still challenging in many cases and in particular each of the prior computational works is basically limited to one type of oxygen-containing system. This work provides the first systematic study of the effects of geometry refinement, method and basis sets for metal and non-metal elements in both geometry optimization and NMR property calculations of some biologically relevant oxygen-containing compounds with a good variety of XO bonding groups, X = H, C, N, P, and metal. The experimental range studied is 1455 ppm, a major part of the reported 17O NMR chemical shifts in organic and organometallic compounds. A number of computational factors towards relatively general and accurate predictions of 17O NMR chemical shifts were studied to provide helpful and detailed suggestions for future work. For the studied various kinds of oxygen-containing compounds, the best computational approach results in a theory-versus-experiment correlation coefficient R2 of 0.9880 and a mean absolute deviation of 13 ppm (1.9% of the experimental range) for isotropic NMR shifts and an R2 of 0.9926 for all shift tensor properties. These results shall facilitate future computational studies of 17O NMR chemical shifts in many biologically relevant systems, and the high accuracy may also help the refinement and determination of active-site structures of some oxygen-containing substrate-bound proteins. PMID:26274812
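
    The theory-versus-experiment statistics quoted above (correlation coefficient and mean absolute deviation) can be reproduced for any set of computed and measured shifts with a few lines of code; the values below are placeholders, not data from the paper.

```python
import numpy as np

def shift_prediction_metrics(calculated, experimental):
    """R^2 and mean absolute deviation for computed vs. measured 17O shifts."""
    calc = np.asarray(calculated, float)
    expt = np.asarray(experimental, float)
    r2 = np.corrcoef(calc, expt)[0, 1] ** 2
    mad = np.mean(np.abs(calc - expt))
    return r2, mad

# Placeholder shift values in ppm, not data from the paper.
calc = np.array([12.0, 105.0, 298.0, 540.0, 995.0, 1390.0])
expt = np.array([20.0, 118.0, 287.0, 555.0, 1010.0, 1375.0])
print(shift_prediction_metrics(calc, expt))
```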

  2. Toward Relatively General and Accurate Quantum Chemical Predictions of Solid-State (17)O NMR Chemical Shifts in Various Biologically Relevant Oxygen-Containing Compounds.

    PubMed

    Rorick, Amber; Michael, Matthew A; Yang, Liu; Zhang, Yong

    2015-09-01

    Oxygen is an important element in most biologically significant molecules, and experimental solid-state (17)O NMR studies have provided numerous useful structural probes to study these systems. However, computational predictions of solid-state (17)O NMR chemical shift tensor properties are still challenging in many cases, and in particular, each of the prior computational works is basically limited to one type of oxygen-containing system. This work provides the first systematic study of the effects of geometry refinement, method, and basis sets for metal and nonmetal elements in both geometry optimization and NMR property calculations of some biologically relevant oxygen-containing compounds with a good variety of XO bonding groups (X = H, C, N, P, and metal). The experimental range studied is of 1455 ppm, a major part of the reported (17)O NMR chemical shifts in organic and organometallic compounds. A number of computational factors toward relatively general and accurate predictions of (17)O NMR chemical shifts were studied to provide helpful and detailed suggestions for future work. For the studied kinds of oxygen-containing compounds, the best computational approach results in a theory-versus-experiment correlation coefficient (R(2)) value of 0.9880 and a mean absolute deviation of 13 ppm (1.9% of the experimental range) for isotropic NMR shifts and an R(2) value of 0.9926 for all shift-tensor properties. These results shall facilitate future computational studies of (17)O NMR chemical shifts in many biologically relevant systems, and the high accuracy may also help the refinement and determination of active-site structures of some oxygen-containing substrate-bound proteins.

  3. Numerical predictions for planets in the debris discs of HD 202628 and HD 207129

    NASA Astrophysics Data System (ADS)

    Thilliez, E.; Maddison, S. T.

    2016-04-01

    Resolved debris disc images can exhibit a range of radial and azimuthal structures, including gaps and rings, which can result from planetary companions shaping the disc by their gravitational influence. Currently, there are no tools available to determine the architecture of potential companions from disc observations. Recent work by Rodigas, Malhotra & Hinz presents how one can estimate the maximum mass and minimum semimajor axis of a hidden planet empirically from the width of the disc in scattered light. In this work, we use the predictions of Rodigas et al. applied to two debris discs HD 202628 and HD 207129. We aim to test if the predicted orbits of the planets can explain the features of their debris disc, such as eccentricity and sharp inner edge. We first run dynamical simulations using the predicted planetary parameters of Rodigas et al., and then numerically search for better parameters. Using a modified N-body code including radiation forces, we perform simulations over a broad range of planet parameters and compare synthetics images from our simulations to the observations. We find that the observational features of HD 202628 can be reproduced with a planet five times smaller than expected, located 30 AU beyond the predicted value, while the best match for HD 207129 is for a planet located 5-10 AU beyond the predicted location with a smaller eccentricity. We conclude that the predictions of Rodigas et al. provide a good starting point but should be complemented by numerical simulations.

  4. A Systematic Review of Predictions of Survival in Palliative Care: How Accurate Are Clinicians and Who Are the Experts?

    PubMed Central

    Harris, Adam; Harries, Priscilla

    2016-01-01

    overall accuracy being reported. Data were extracted using a standardised tool, by one reviewer, which could have introduced bias. Devising search terms for prognostic studies is challenging. Every attempt was made to devise search terms that were sufficiently sensitive to detect all prognostic studies; however, it remains possible that some studies were not identified. Conclusion Studies of prognostic accuracy in palliative care are heterogeneous, but the evidence suggests that clinicians’ predictions are frequently inaccurate. No sub-group of clinicians was consistently shown to be more accurate than any other. Implications of Key Findings Further research is needed to understand how clinical predictions are formulated and how their accuracy can be improved. PMID:27560380

  5. Mixing of a point-source indoor pollutant: Numerical predictions and comparison with experiments

    SciTech Connect

    Lobscheid, C.; Gadgil, A.J.

    2002-01-01

    In most practical estimates of indoor pollutant exposures, it is common to assume that the pollutant is uniformly and instantaneously mixed in the indoor space. It is also commonly known that this assumption is simplistic, particularly for point sources, and for short-term or localized indoor exposures. We report computational fluid dynamics (CFD) predictions of mixing time of a point-pulse release of a pollutant in an unventilated mechanically mixed isothermal room. We aimed to determine the adequacy of the standard RANS two-equation (κ-ε) turbulence model to predict the mixing times under these conditions. The predictions were made for the twelve mixing time experiments performed by Drescher et al. (1995). We paid attention to adequate grid resolution, suppression of numerical diffusion, and careful simulation of the mechanical blowers used in the experiments. We found that the predictions are in good agreement with experimental measurements.

  6. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure.

    PubMed

    vom Saal, Frederick S; Welshons, Wade V

    2014-12-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources.

  7. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure

    PubMed Central

    vom Saal, Frederick S.; Welshons, Wade V.

    2016-01-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273

  8. Operational numerical weather prediction on the CYBER 205 at the National Meteorological Center

    NASA Technical Reports Server (NTRS)

    Deaven, D.

    1984-01-01

    The Development Division of the National Meteorological Center (NMC), which has the responsibility of maintaining and developing the numerical weather forecasting systems of the center, is discussed. Because of the mission of NMC, data products must be produced reliably and on time twice daily, free of surprises for forecasters. Personnel of the Development Division are in a rather unique situation: they must develop new advanced techniques for numerical analysis and prediction utilizing current state-of-the-art methods, and implement them in an operational fashion without damaging the operations of the center. With the computational speeds and resources now available from the CYBER 205, Development Division personnel will be able to introduce advanced analysis and prediction techniques into the operational job suite without disrupting the daily schedule. The capabilities of the CYBER 205 are discussed.

  9. Flood predictions using the parallel version of distributed numerical physical rainfall-runoff model TOPKAPI

    NASA Astrophysics Data System (ADS)

    Boyko, Oleksiy; Zheleznyak, Mark

    2015-04-01

    The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code has recently been developed for use on multiprocessor systems - multicore PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across all processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated for case studies of flood prediction in mountain watersheds of the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.

  10. One-level prediction-A numerical method for estimating undiscovered metal endowment

    USGS Publications Warehouse

    McCammon, R.B.; Kork, J.O.

    1992-01-01

    One-level prediction has been developed as a numerical method for estimating undiscovered metal endowment within large areas. The method is based on a presumed relationship between a numerical measure of geologic favorability and the spatial distribution of metal endowment. Metal endowment within an unexplored area for which the favorability measure is greater than a favorability threshold level is estimated to be proportional to the area of that unexplored portion. The constant of proportionality is the ratio of the discovered endowment found within a suitably chosen control region, which has been explored, to the area of that explored region. In addition to the estimate of undiscovered endowment, a measure of the error of the estimate is also calculated. One-level prediction has been used to estimate the undiscovered uranium endowment in the San Juan basin, New Mexico, U.S.A. A subroutine to perform the necessary calculations is included. © 1992 Oxford University Press.
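
    The proportionality rule described above translates directly into code; the sketch below estimates undiscovered endowment from a favorability grid and an explored control region, leaving out the paper's accompanying error measure, and uses made-up numbers throughout.

```python
import numpy as np

def one_level_prediction(favorability, explored_mask, discovered_endowment,
                         cell_area_km2, threshold):
    """Undiscovered-endowment estimate from a favorability grid (no error term).

    Endowment in unexplored cells with favorability above `threshold` is taken
    proportional to their area; the constant of proportionality is discovered
    endowment per unit area of the explored control region.
    """
    fav = np.asarray(favorability, float)
    explored = np.asarray(explored_mask, bool)
    density = discovered_endowment / (explored.sum() * cell_area_km2)
    favorable_area = ((~explored) & (fav > threshold)).sum() * cell_area_km2
    return density * favorable_area

fav = np.random.default_rng(0).random((50, 50))      # made-up favorability grid
explored = np.zeros((50, 50), bool)
explored[:, :20] = True                              # western strip as control region
print(one_level_prediction(fav, explored, discovered_endowment=4.0e4,
                           cell_area_km2=25.0, threshold=0.7))
```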

  11. Development of a numerical method for the prediction of turbulent flows in dump diffusers

    NASA Astrophysics Data System (ADS)

    Ando, Yasunori; Kawai, Masafumi; Sato, Yukinori; Toh, Hidemi

    1987-01-01

    In order to obtain an effective tool to design dump diffusers for gas turbine combustors, a finite-volume numerical calculation method has been developed for the solution of the two-dimensional/axisymmetric incompressible steady Navier-Stokes equations in a general curvilinear coordinate system. This method was applied to the calculation of turbulent flows in a two-dimensional dump diffuser with uniform and distorted inlet velocity profiles, as well as in an annular dump diffuser with a uniform inlet velocity profile, and the calculated results were compared with experimental data. The numerical results showed good agreement with the experimental data for both inlet velocity profiles; the numerical method was thus confirmed to be an effective tool for the development of dump diffusers, able to predict the flow pattern, velocity distribution and pressure loss.

  12. Simulation studies of proposed observing systems and their impact on numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Kalnay, E.; Susskind, J.; Baker, W. E.; Halem, M.

    1984-01-01

    A series of realistic simulation studies is being conducted as a cooperative effort between the European Centre for Medium Range Weather Forecasts (ECMWF), the National Meteorological Center (NMC), and the Goddard Laboratory for Atmospheric Sciences (GLAS) to provide a quantitative assessment of the potential impact of proposed observation systems on large scale numerical weather prediction. A special objective of this project is to avoid the unrealistic character of earlier simulation studies.

  13. Predictability of extreme weather events for NE U.S.: improvement of the numerical prediction using a Bayesian regression approach

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.

    2015-12-01

    Weather prediction accuracy has become very important for the Northeast U.S. given the devastating effects of extreme weather events in the recent years. Weather forecasting systems are used towards building strategies to prevent catastrophic losses for human lives and the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill as numerical prediction techniques are strengthened by increased super-computing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) by utilizing a Bayesian regression approach to improve the prediction of extreme weather events for NE U.S. The basic concept behind the Bayesian regression approach is to take advantage of the strengths of two atmospheric modeling systems and, similar to the multi-model ensemble approach, limit their weaknesses which are related to systematic and random errors in the numerical prediction of physical processes. The first part of this study is focused on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by conducting an error analysis based on in-situ observations from meteorological stations of the National Weather Service (NWS) for wind speed and wind direction, and NCEP Stage IV radar data, mosaicked from the regional multi-sensor for precipitation. The preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables using various combinations of the sixteen events as predictors of the seventeenth. This presentation will illustrate the implemented methodology and the obtained results for wind speed, wind direction and precipitation, as well as set the research steps that will be
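
    A heavily simplified stand-in for the combination step described above is a single weight chosen to minimise the RMSE of the blended forecast on retrospective events; the sketch below fits that weight by least squares on synthetic data, whereas the study estimates optimal variances within a Bayesian regression framework.

```python
import numpy as np

def combination_weight(obs, model_a, model_b):
    """Weight w in [0, 1] minimising the RMSE of w*A + (1-w)*B on training data."""
    obs, a, b = (np.asarray(x, float).ravel() for x in (obs, model_a, model_b))
    w = np.dot(obs - b, a - b) / np.dot(a - b, a - b)   # least-squares solution
    return float(np.clip(w, 0.0, 1.0))

rng = np.random.default_rng(0)
truth = 10.0 + rng.normal(size=1000)                 # observed wind speed, say
wrf = truth + rng.normal(0.5, 1.5, 1000)             # model A: biased, noisier
rams = truth + rng.normal(-0.2, 1.0, 1000)           # model B: less biased
w = combination_weight(truth, wrf, rams)
blend = w * wrf + (1.0 - w) * rams
print(w, np.sqrt(np.mean((blend - truth) ** 2)))     # blended RMSE
```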

  14. The role of radiation-dynamics interaction in regional numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Chang, Chia-Bo

    1988-01-01

    The role of radiation-dynamics interaction in regional numerical weather prediction of severe storm environment and mesoscale convective systems over the United States is researched. Based upon the earlier numerical model simulation experiments, it is believed that such interaction can have a profound impact on the dynamics and thermodynamics of regional weather systems. The research will be carried out using real-data model forecast experiments performed on the Cray-X/MP computer. The forecasting system to be used is a comprehensive mesoscale prediction system which includes analysis and initialization, the dynamic model, and the post-forecast diagnosis codes. The model physics are currently undergoing many improvements in parameterizing radiation processes in the model atmosphere. The forecast experiments in conjunction with in-depth model verification and diagnosis are aimed at a quantitative understanding of the interaction between atmospheric radiation and regional dynamical processes in mesoscale models as well as in nature. Thus, significant advances in regional numerical weather prediction can be made. Results shall also provide valuable information for observational designs in the area of remote sensing techniques to study the characteristics of air-land thermal interaction and moist processes under various atmospheric conditions.

  15. Probe measurements and numerical model predictions of evolving size distributions in premixed flames

    SciTech Connect

    De Filippo, A.; Sgro, L.A.; Lanzuolo, G.; D'Alessio, A.

    2009-09-15

    Particle size distributions (PSDs), measured with a dilution probe and a Differential Mobility Analyzer (DMA), and numerical predictions of these PSDs, based on a model that includes only coagulation or alternatively inception and coagulation, are compared to investigate particle growth processes and possible sampling artifacts in the post-flame region of a C/O = 0.65 premixed laminar ethylene-air flame. Inputs to the numerical model are the PSD measured early in the flame (the initial condition for the aerosol population) and the temperature profile measured along the flame's axial centerline. The measured PSDs are initially unimodal, with a modal mobility diameter of 2.2 nm, and become bimodal later in the post-flame region. The smaller mode is best predicted with a size-dependent coagulation model, which allows some fraction of the smallest particles to escape collisions without resulting in coalescence or coagulation through the size-dependent coagulation efficiency (γ_SD). Instead, when γ = 1 and the coagulation rate is equal to the collision rate for all particles regardless of their size, the coagulation model significantly underpredicts the number concentration of both modes and overpredicts the size of the largest particles in the distribution compared to the measured size distributions at various heights above the burner. The coagulation (γ_SD) model alone is unable to reproduce the larger particle mode (mode II) well. Combining persistent nucleation with size-dependent coagulation brings the predicted PSDs to within experimental error of the measurements, which suggests that surface growth processes are relatively insignificant in these flames. Shifting the measured PSDs a few mm closer to the burner surface, as generally adopted to correct for probe perturbations, does not produce a better match between the experimental and the numerical results. (author)
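
    To make the competing coagulation assumptions concrete, the sketch below advances a discrete Smoluchowski coagulation equation in which the collision efficiency either equals one or follows an assumed size-dependent curve γ_SD; the constant collision kernel, the sigmoidal efficiency, the bin count and the initial monomer concentration are illustrative placeholders rather than the paper's values.

      import numpy as np

      K     = 60                                        # bins: k = 1..K monomers
      d1    = 2.2e-9                                    # monomer diameter [m]
      d     = d1 * np.arange(1, K + 1) ** (1.0 / 3.0)   # diameter of a k-mer
      beta0 = 1.0e-15                                   # constant kernel [m^3/s] (assumed)

      def gamma_sd(di, dj, d_cut=4.0e-9, width=1.0e-9):
          """Assumed size-dependent efficiency: the smallest colliders mostly bounce."""
          return 1.0 / (1.0 + np.exp(-((di + dj) - d_cut) / width))

      def evolve(gamma, dt=1.0e-4, steps=200):
          n = np.zeros(K)
          n[0] = 1.0e17                                 # initial monomer density [1/m^3]
          for _ in range(steps):
              dn = np.zeros(K)
              for i in range(K):
                  for j in range(K):
                      rate = beta0 * gamma(d[i], d[j]) * n[i] * n[j]
                      dn[i] -= rate                     # particle i is consumed
                      if i + j + 1 < K:                 # product has (i+1)+(j+1) monomers
                          dn[i + j + 1] += 0.5 * rate   # 1/2 avoids double counting pairs
              n += dt * dn
          return n

      n_unity = evolve(lambda di, dj: 1.0)              # gamma = 1 everywhere
      n_sd    = evolve(gamma_sd)                        # size-dependent efficiency
      print("total number density, gamma = 1 :", n_unity.sum())
      print("total number density, gamma_SD :", n_sd.sum())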

  16. Pericenter precession induced by a circumstellar disk on the orbit of massive bodies: comparison between analytical predictions and numerical results

    NASA Astrophysics Data System (ADS)

    Fontana, A.; Marzari, F.

    2016-05-01

    Context. Planetesimals and planets embedded in a circumstellar disk are dynamically perturbed by the disk gravity. It causes an apsidal line precession at a rate that depends on the disk density profile and on the distance of the massive body from the star. Aims: Different analytical models are exploited to compute the precession rate of the perihelion ϖ˙. We compare them to verify their equivalence, in particular after analytical manipulations performed to derive handy formulas, and test their predictions against numerical models in some selected cases. Methods: The theoretical precession rates were computed with analytical algorithms found in the literature using the Mathematica symbolic code, while the numerical simulations were performed with the hydrodynamical code FARGO. Results: For low-mass bodies (planetesimals) the analytical approaches described in Binney & Tremaine (2008, Galactic Dynamics, p. 96), Ward (1981, Icarus, 47, 234), and Silsbee & Rafikov (2015a, ApJ, 798, 71) are equivalent under the same initial conditions for the disk in terms of mass, density profile, and inner and outer borders. They also match the numerical values computed with FARGO away from the outer border of the disk reasonably well. On the other hand, the predictions of the classical Mestel disk (Mestel 1963, MNRAS, 126, 553) for disks with p = 1 significantly depart from the numerical solution for radial distances beyond one-third of the disk extension, because the underlying assumption of the Mestel disk is that the outer disk border lies at infinity. For massive bodies such as terrestrial and giant planets, the agreement of the analytical approaches is progressively poorer because of the changes in the disk structure that are induced by the planet gravity. For giant planets the precession rate changes sign and is higher than the modulus of the theoretical value by a factor ranging from 1.5 to 1.8. In this case, the correction of the formula proposed by Ward (1981) to

  17. A novel stress-accurate FE technology for highly non-linear analysis with incompressibility constraint. Application to the numerical simulation of the FSW process

    NASA Astrophysics Data System (ADS)

    Chiumenti, M.; Cervera, M.; Agelet de Saracibar, C.; Dialami, N.

    2013-05-01

    In this work a novel finite element technology based on a three-field mixed formulation is presented. The Variational Multi-Scale (VMS) method is used to circumvent the LBB stability condition, allowing the use of linear piece-wise interpolations for the displacement, stress and pressure fields. The result is an enhanced stress field approximation which enables stress-accurate results in nonlinear computational mechanics. The use of an independent nodal variable for the pressure field allows for an ad hoc treatment of the incompressibility constraint. This is a mandatory requirement due to the isochoric nature of the plastic strain in metal forming processes. The highly non-linear stress field typically encountered in the Friction Stir Welding (FSW) process is used as an example to show the performance of this new FE technology. The numerical simulation of the FSW process is tackled by means of an Arbitrary-Lagrangian-Eulerian (ALE) formulation. The computational domain is split into three different zones: the workpiece (defined by a rigid visco-plastic behaviour in the Eulerian framework), the pin (within the Lagrangian framework) and finally the stir zone (ALE formulation). A fully coupled thermo-mechanical analysis is introduced showing the heat fluxes generated by the plastic dissipation in the stir zone (Sheppard rigid-viscoplastic constitutive model) as well as the frictional dissipation at the contact interface (Norton frictional contact model). Finally, tracers have been implemented to show the material flow around the pin, allowing a better understanding of the welding mechanism. Numerical results are compared with experimental evidence.

  18. Numerical prediction of vortex cores of the leading and trailing edges of delta wings

    NASA Technical Reports Server (NTRS)

    Kandil, O. A.

    1980-01-01

    The purpose of the present paper is to predict the roll-up of the vortex sheets emanating from the leading and trailing edges of delta wings, with emphasis on the interaction of vortex cores beyond the trailing edge. The motivation behind the present work is the recent experimental data published by Hummel. The Nonlinear Discrete-Vortex method (NDV-method) is modified and extended to predict the leading- and trailing-vortex cores beyond the trailing edge. The present model alleviates the problems previously encountered in predicting satisfactory pressure distributions. This is accomplished by lumping the free-vortex lines during the iteration procedure. The leading- and trailing-edge cores and their feeding sheets are obtained as parts of the solution. The numerical results show that the NDV-method is successful in confirming the formation of a trailing-edge core with opposite circulation and opposite roll-up to those of the leading-edge core. This work is a breakthrough in high-angle-of-attack aerodynamics and, moreover, it is the first numerical prediction for this problem.

  19. The Effect of Element Formulation on the Prediction of Boost Effects in Numerical Tube Bending

    SciTech Connect

    Bardelcik, A.; Worswick, M.J.

    2005-08-05

    This paper presents advanced FE models of the pre-bending process to investigate the effect of element formulation on the prediction of boost effects in tube bending. Tube bending experiments were conducted with 3'' (OD) IF (Interstitial-Free) steel tube on a fully instrumented Eagle EPT-75 servo-hydraulic mandrel-rotary draw tube bender. Experiments were performed in which the bending boost was varied at three levels, resulting in consistent trends in the strain and thickness distribution within the pre-bent tubes. A numerical model of the rotary draw tube bender was used to simulate pre-bending of the IF tube with the three levels of boost from the experiments. To examine the effect of element formulation on the prediction of boost, the tube was modeled with shell and solid elements. Both models predicted the overall strain and thickness results well, but each showed different trends.

  20. Three dimensional numerical prediction of icing related power and energy losses on a wind turbine

    NASA Astrophysics Data System (ADS)

    Sagol, Ece

    , while the latter performs all the steps in the 3D domain. The Fully-3D method yields more accurate predictions for a clean blade. For icing conditions, a validation is not possible, owing to the lack of experimental data. However, the two methods produce quite different results for the performance of the ice shape and the iced blade. A critical analysis of the results shows that, although the computational cost of the Fully-3D method is much higher, icing analyses in 2D may lack accuracy, because the ice shape and the related power loss are compromised by not considering the 3D features of rotational flow. While performing the CFD computations on the iced blade, the rough surface of the ice is smoothed to a degree, in order to prevent numerical instability and to keep the mesh size within a reasonable limit. However, roughness effects cannot be excluded altogether, as they contribute significantly to performance reduction. We consider roughness through a modification in the CFD code, and assess its effect on performance for the clean blade.

  1. Numerical predictions and experimental results of a dry bay fire environment.

    SciTech Connect

    Suo-Anttila, Jill Marie; Gill, Walter; Black, Amalia Rebecca

    2003-11-01

    The primary objective of the Safety and Survivability of Aircraft Initiative is to improve the safety and survivability of systems by using validated computational models to predict the hazard posed by a fire. To meet this need, computational model predictions and experimental data have been obtained to provide insight into the thermal environment inside an aircraft dry bay. The calculations were performed using the Vulcan fire code, and the experiments were completed using a specially designed full-scale fixture. The focus of this report is to present comparisons of the Vulcan results with experimental data for a selected test scenario and to assess the capability of the Vulcan fire field model to accurately predict dry bay fire scenarios. Also included is an assessment of the sensitivity of the fire model predictions to boundary condition distribution and grid resolution. To facilitate the comparison with experimental results, a brief description of the dry bay fire test fixture and a detailed specification of the geometry and boundary conditions are included. Overall, the Vulcan fire field model has shown the capability to predict the thermal hazard posed by a sustained pool fire within a dry bay compartment of an aircraft, although more extensive experimental data and more rigorous comparisons are required for model validation.

  2. The Essential Terrestrial Variables (ETV's) in Support of a National Framework for Numerical Watershed Prediction

    NASA Astrophysics Data System (ADS)

    Duffy, C.; Leonard, L. N.; Ahalt, S.; Idaszak, R.; Tarboton, D.; Hooper, R. P.; Band, L. E.

    2012-12-01

    There is a clear national need to provide geoscience researchers with seamless and fast access to essential geo-spatial/geo-temporal data to support the physics-based numerical models necessary to understand, predict and manage the nation's surface and groundwater resources. Fundamental advances in science, such as the evaluation of ecosystem and watershed services and the detection and attribution of the impacts of climate change, will require high resolution, spatially explicit assessments. In this paper we propose the concept of Essential Terrestrial Variables (ETV's), which we define as those variables that are nominally required to support watershed/catchment numerical prediction anywhere in the continental US and ultimately at the global scale. ETV's would represent a fundamental community resource necessary to build the products/parameters/forcings commonly used in distributed, fully-coupled watershed and river basin models. We argue that there are at least three fundamental issues that must be resolved before implementation of ETV's in support of a national water model: 1) data access and accessibility, 2) data scale and scalability, 3) community provenance and data sustainability. At the present time, there is no unified data infrastructure for supporting watershed models, and the data resource itself (weather/climate reanalysis products, stream flow, groundwater, soils, land cover, satellite data products, etc.) resides on many federal servers with limited or poorly organized access, with many data formats and without common geo-referencing. Beyond the problem of access to national data, the scale and scalability of computation for both data processing and model computation represent a major hurdle. This is especially true since a full-scale national strategy for numerical watershed prediction will require data resources to reside very close to numerical model computation. Finally, model/data provenance should be sufficient to allow

  3. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    SciTech Connect

    Paolucci, Roberto; Stupazzini, Marco

    2008-07-08

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, which is hard to predict and to put in a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.

  4. On vortex loops and filaments: three examples of numerical predictions of flows containing vortices.

    PubMed

    Krause, Egon

    2003-01-01

    Vortex motion plays a dominant role in many flow problems. This article aims at demonstrating some of the characteristic features of vortices with the aid of numerical solutions of the governing equations of fluid mechanics, the Navier-Stokes equations. Their discretized forms will first be reviewed briefly. Thereafter three problems of fluid flow involving vortex loops and filaments are discussed. In the first, the time-dependent motion and the mutual interaction of two colliding vortex rings are discussed, predicted in good agreement with experimental observations. The second example shows how vortex rings are generated, move, and interact with each other during the suction stroke in the cylinder of an automotive engine. The numerical results, validated with experimental data, suggest that vortex rings can be used to influence the spreading of the fuel droplets prior to ignition and reduce the fuel consumption. In the third example, it is shown that vortices can also occur in aerodynamic flows over delta wings at angle of attack as well as in pipe flows; of particular interest for technical applications of these flows is the situation in which the vortex cores are destroyed, usually referred to as vortex breakdown or bursting. Although reliable breakdown criteria have not yet been established, the numerical predictions obtained so far are found to agree well with the few experimental data available in the recent literature.

  5. Post audit of a numerical prediction of wellfield drawdown in a semiconfined aquifer system

    USGS Publications Warehouse

    Stewart, M.; Langevin, C.

    1999-01-01

    A numerical ground water flow model was created in 1978 and revised in 1981 to predict the drawdown effects of a proposed municipal wellfield permitted to withdraw 30 million gallons per day (mgd; 1.1 x 10^5 m^3/day) of water from the semiconfined Floridan Aquifer system. The predictions are based on the assumption that water levels in the semiconfined Floridan Aquifer reach a long-term, steady-state condition within a few days of initiation of pumping. Using this assumption, a 75 day simulation without water table recharge, pumping at the maximum permitted rates, was considered to represent a worst-case condition and the greatest drawdowns that could be experienced during wellfield operation. This method of predicting wellfield effects was accepted by the permitting agency. For this post audit, observed drawdowns were derived by taking the difference between pre-pumping and post-pumping potentiometric surface levels. Comparison of predicted and observed drawdowns suggests that actual drawdown over a 12-year period exceeds predicted drawdown by a factor of two or more. Analysis of the source of error in the 1981 predictions suggests that the values used for transmissivity, storativity, specific yield, and leakance are reasonable at the wellfield scale. Simulation using actual 1980-1992 pumping rates improves the agreement between predicted and observed drawdowns. The principal source of error is the assumption that water levels in a semiconfined aquifer achieve a steady-state condition after a few days or weeks of pumping. Simulations using a version of the 1981 model modified to include recharge and evapotranspiration suggest that it can take hundreds of days or several years for water levels in the linked Surficial and Floridan Aquifers to reach an apparent steady-state condition, and that slow declines in levels continue for years after the initiation of pumping. While the 1981 'impact' model can be used for reasonably predicting short-term, wellfield

  6. Ductile damage prediction in metal forming processes: Advanced modeling and numerical simulation

    NASA Astrophysics Data System (ADS)

    Saanouni, K.

    2013-05-01

    This paper describes the needs required in modern virtual metal forming, including both sheet and bulk metal forming of mechanical components. These concern the advanced modeling of thermo-mechanical behavior, including the multiphysical phenomena and their interaction or strong coupling, as well as the associated numerical aspects using fully adaptive simulation strategies. First, a survey of advanced constitutive equations accounting for the main thermomechanical phenomena, such as thermo-elasto-plastic finite strains with isotropic and kinematic hardenings fully coupled with ductile damage, will be presented. Only the macroscopic phenomenological approach with state variables (monoscale approach) will be discussed in the general framework of rational thermodynamics for generalized micromorphic continua. The micro-macro (multi-scale) approach in the framework of polycrystalline inelasticity is not presented here for the sake of brevity but will be presented during the oral presentation. The main numerical aspects related to the resolution of the associated initial and boundary value problem will be outlined. A fully adaptive numerical methodology will be briefly described and some numerical examples will be given in order to show the high predictive capabilities of this adaptive methodology for virtual metal forming simulations.

  7. Benchmarking numerical predictions with force and moment measurements on slender, supercavitating bodies

    NASA Astrophysics Data System (ADS)

    Hailey, C. E.; Clark, E. L.; Cole, J. K.

    High-speed water-entry is a very complex, dynamic process. As a first attempt at modeling the process, a numerical solution was developed at Sandia National Laboratories for predicting the forces and moments acting on a body with a steady supercavity, that is, a cavity which extends beyond the base of the body. The solution is limited to supercavities on slender, axisymmetric bodies at small angles of attack. Limited data were available with which to benchmark the axial force predictions at zero angle of attack. Even less data were available with which to benchmark the pitching moment and normal force predictions at nonzero angles of attack. A water tunnel test was conducted to obtain force and moment data on a slender shape. This test produced limited data because of waterproofing problems with the balance. A new balance was designed and a second water tunnel test was conducted at Tracor Hydronautics, Inc. This paper describes the numerical solution, the experimental equipment and test procedures, and the results of the second test.

  8. Melt-rock reaction in the asthenospheric mantle: Perspectives from high-order accurate numerical simulations in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.

    2013-12-01

    The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where the solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except that we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and in the unstable wave regime where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath the mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales

  9. Prediction of Ship Response Statistics in Extreme Seas Using Model Tests Data and Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Guo, Bingjie; Bitner-Gregersen, Elzbieta Maria; Sun, Hui; Block Helmers, Jens

    2013-04-01

    Earlier investigations have indicated that proper prediction of nonlinear loads and responses due to nonlinear waves is important for ship safety in extreme seas. However, the nonlinear loads and responses in extreme seas have not been sufficiently investigated yet, particularly when rogue waves are considered. A question remains whether the existing linear codes can predict nonlinear loads and responses with a satisfactory accuracy and how large the deviations from linear predictions are. To investigate this, response statistics have been studied based on model tests carried out with an LNG tanker in the towing tank of the Technical University of Berlin (TUB), and compared with the statistics derived from numerical simulations using the DNV code WASIM. WASIM is a potential-flow code for wave-ship interaction based on the 3D panel method, which can perform both linear and nonlinear simulations. The numerical simulations with WASIM and the model tests in extreme and rogue waves have been performed. The analysis of ship motions (heave and pitch) and bending moments, in both regular and irregular waves, is performed. The results from the linear and nonlinear simulations are compared with experimental data to indicate the impact of wave non-linearity on loads and response calculations when the code based on the Rankine Panel Method is used. The study shows that nonlinearities may have a significant effect on extreme motions and bending moments generated by strongly nonlinear waves. The effect of water depth on ship responses is also demonstrated using numerical simulations. Uncertainties related to the results are discussed, giving particular attention to sampling variability.

  10. Numerical prediction of energy consumption in buildings with controlled interior temperature

    SciTech Connect

    Jarošová, P.; Št’astník, S.

    2015-03-10

    New European directives impose strong requirements on the energy consumption of buildings and support renewable energy sources. Whereas in the case of family houses and similar buildings this can lead to absurd consequences, for buildings with controlled interior temperature the optimization of energy demand is genuinely needed. The paper demonstrates a system approach to the modelling of the thermal insulation and accumulation abilities of such objects, incorporating the significant influence of additional physical processes such as surface heat radiation and moisture-driven deterioration of insulation layers. An illustrative example shows the numerical prediction of the energy consumption of a freezing plant over one Central European climatic year.

  11. Memory efficient solution of the primitive equations for numerical weather prediction on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Tuccillo, J. J.

    1984-01-01

    Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, which is fully vectorized and requires substantially less memory than other techniques such as the leapfrog or Adams-Bashforth schemes, is discussed. The technique presented uses the Euler-backward time marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms which use only hardware vector instructions.
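
    As a rough illustration of the memory difference between the schemes mentioned above, the sketch below advances the oscillation equation du/dt = iωu with an Euler-backward (Matsuno) predictor-corrector step, which needs only the current state, and with leapfrog, which must keep two time levels in memory; the test equation, step size and step count are assumptions chosen purely for illustration.

      import numpy as np

      omega, dt, steps = 1.0, 0.1, 200
      f = lambda u: 1j * omega * u          # oscillation equation du/dt = i*omega*u

      # Euler-backward (Matsuno) predictor-corrector: stores a single time level.
      u = 1.0 + 0.0j
      for _ in range(steps):
          u_star = u + dt * f(u)            # forward (predictor) step
          u      = u + dt * f(u_star)       # backward (corrector) step using u*
      print("Euler-backward |u| =", abs(u))

      # Leapfrog: u_{n-1} and u_n must both be kept at every step.
      u_prev = 1.0 + 0.0j
      u_curr = u_prev + dt * f(u_prev)      # bootstrap the first step with forward Euler
      for _ in range(steps - 1):
          u_prev, u_curr = u_curr, u_prev + 2.0 * dt * f(u_curr)
      print("Leapfrog       |u| =", abs(u_curr))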

  12. Simulation studies of the impact of advanced observing systems on numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Kalnay, E.; Susskind, J.; Reuter, D.; Baker, W. E.; Halem, M.

    1984-01-01

    To study the potential impact of advanced passive sounders and lidar temperature, pressure, humidity, and wind observing systems on large-scale numerical weather prediction, a series of realistic simulation studies is conducted jointly by the European Centre for Medium-Range Weather Forecasts, the National Meteorological Center, and the Goddard Laboratory for Atmospheric Sciences. The project attempts to avoid the unrealistic character of earlier simulation studies. The previous simulation studies and real-data impact tests are reviewed and the design of the current simulation system is described. Consideration is given to the simulation of observations of space-based sounding systems.

  13. A lateral boundary formulation for multi-level prediction models. [numerical weather forecasting

    NASA Technical Reports Server (NTRS)

    Davies, H. C.

    1976-01-01

    A method is proposed for treating the lateral boundaries of a limited-area weather prediction model. The method involves the relaxation of the interior flow in the vicinity of the boundary to the externally prescribed flow. Analytical and numerical results obtained with a linearized multilevel model confirm the effectiveness of this computationally efficient method. The method is shown to give an adequate representation of outgoing gravity waves, with and without an ambient shear flow, and to allow the substantially undistorted transmission of geostrophically balanced flow out of the interior of the limited domain.
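
    The core of the boundary treatment can be sketched in a few lines: near each lateral boundary the limited-area field is blended with the externally prescribed field using a weight that ramps from one at the boundary to zero in the interior. The one-dimensional setup, the linear ramp, the zone width and the placeholder fields below are assumptions for illustration, not the paper's exact formulation.

      import numpy as np

      nx, n_relax = 100, 8                       # grid points, width of relaxation zone
      u_model    = np.random.randn(nx)           # limited-area model field (placeholder)
      u_external = np.zeros(nx)                  # prescribed external flow (placeholder)

      # Relaxation weights: 1 at the two lateral boundaries, decaying to 0 inside.
      alpha = np.zeros(nx)
      ramp  = np.linspace(1.0, 0.0, n_relax)     # assumed linear ramp shape
      alpha[:n_relax]  = ramp
      alpha[-n_relax:] = ramp[::-1]

      # After each time step, relax the interior solution toward the external flow.
      u_relaxed = (1.0 - alpha) * u_model + alpha * u_external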

  14. Updating prediction models by dynamical relaxation - An examination of the technique. [for numerical weather forecasting

    NASA Technical Reports Server (NTRS)

    Davies, H. C.; Turner, R. E.

    1977-01-01

    A dynamical relaxation technique for updating prediction models is analyzed with the help of the linear and nonlinear barotropic primitive equations. It is assumed that a complete four-dimensional time history of some prescribed subset of the meteorological variables is known. The rate of adaptation of the flow variables toward the true state is determined for a linearized f-model, and for mid-latitude and equatorial beta-plane models. The results of the analysis are corroborated by numerical experiments with the nonlinear shallow-water equations.

  15. The influence of source-receiver interaction on the numerical prediction of railway induced vibrations

    NASA Astrophysics Data System (ADS)

    Coulier, P.; Lombaert, G.; Degrande, G.

    2014-06-01

    The numerical prediction of vibrations in buildings due to railway traffic is a complicated problem where wave propagation in the soil couples the source (railway tunnel or track) and the receiver (building). This through-soil coupling is often neglected in state-of-the-art numerical models in order to reduce the computational cost. In this paper, the effect of this simplifying assumption on the accuracy of numerical predictions is investigated. A coupled finite element-boundary element methodology is employed to analyze the interaction between a building and either a railway tunnel at depth or a ballasted track at the surface of a homogeneous halfspace. Three different soil types are considered. It is demonstrated that the dynamic axle loads can be calculated with reasonable accuracy using an uncoupled strategy in which through-soil coupling is disregarded. If the transfer functions from source to receiver are considered, however, large local variations in terms of vibration insertion gain are induced by source-receiver interaction, reaching 10 dB and higher, although the overall wave field is only moderately affected. A global quantification of the significance of through-soil coupling is made, based on the mean vibrational energy entering a building. This approach allows assessing the common assumption in seismic engineering that source-receiver interaction can be neglected if the distance between source and receiver is sufficiently large compared to the wavelength of waves in the soil. It is observed that the interaction between a source at depth and a receiver mainly affects the power flow distribution if the distance between source and receiver is smaller than the dilatational wavelength in the soil. Interaction effects for a railway track at grade are observed if the source-receiver distance is smaller than six Rayleigh wavelengths. A similar trend is revealed if the passage of a freight train is considered. The overall influence of dynamic

  16. On the dynamic estimation of relative weights for observation and forecast in numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Wahba, Grace; Deepak, A. (Editor)

    1988-01-01

    The problem of merging direct and remotely sensed (indirect) data with forecast data to get an estimate of the present state of the atmosphere for the purpose of numerical weather prediction is examined. To carry out this merging optimally, it is necessary to provide an estimate of the relative weights to be given to the observations and the forecast. It is possible to do this dynamically from the information to be merged, if the correlation structure of the errors from the various sources is sufficiently different. Some new statistical approaches to doing this are described, and the conditions under which such estimates are likely to be good are quantified.
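
    As a scalar sketch of why these relative weights matter (our illustration, not the paper's estimator), consider an unbiased forecast x_f with error variance σ_f² and an unbiased observation y with error variance σ_o²; the minimum-variance merged estimate is

      \[
        x_a \;=\; \frac{\sigma_o^{2}\, x_f \;+\; \sigma_f^{2}\, y}{\sigma_f^{2} + \sigma_o^{2}}
        \;=\; x_f \;+\; \frac{\sigma_f^{2}}{\sigma_f^{2} + \sigma_o^{2}}\,\bigl(y - x_f\bigr),
      \]

    so the weight given to the observation is governed entirely by the ratio σ_f²/σ_o², which is precisely the quantity that a dynamic estimate of the relative weights has to get right.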

  17. Numerical prediction of the thermodynamic properties of ternary Al-Ni-Hf alloys

    NASA Astrophysics Data System (ADS)

    Romanowska, Jolanta; Kotowski, Sławomir; Zagula-Yavorska, Maryana

    2014-10-01

    Thermodynamic properties of the ternary Al-Hf-Ni system, such as exG, μAl, μNi and μZr at 1373 K, were predicted on the basis of the thermodynamic properties of the binary systems included in the investigated ternary system. The prediction of exG values was treated as the calculation of excess Gibbs energy values inside a certain area (a Gibbs triangle), provided that all boundary conditions, that is, the values of exG on all legs of the triangle, are known. The exG and Lijk ternary interaction parameters in the Muggianu extension of the Redlich-Kister formalism are calculated numerically using Wolfram Mathematica 9 software.
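
    For reference, the excess Gibbs energy of a ternary solution in the Redlich-Kister formalism with Muggianu's symmetric extrapolation is commonly written in the form below; the truncation order n and the single collected ternary term are our notational choices and may differ from the exact parameterization used in the paper.

      \[
        {}^{ex}G \;=\; \sum_{i=1}^{2}\sum_{j=i+1}^{3} x_i x_j \sum_{v=0}^{n} {}^{v}L_{ij}\,(x_i - x_j)^{v}
        \;+\; x_1 x_2 x_3\, L_{123},
      \]

    where the x_i are the ternary mole fractions, the ^vL_ij are binary Redlich-Kister coefficients evaluated directly at those mole fractions (Muggianu's symmetric weighting), and L_123 collects the ternary interaction parameters fitted numerically.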

  18. Numerical prediction of the thermodynamic properties of ternary Al-Ni-Hf alloys

    SciTech Connect

    Romanowska, Jolanta; Kotowski, Sławomir; Zagula-Yavorska, Maryana

    2014-10-06

    Thermodynamic properties of the ternary Al-Hf-Ni system, such as exG, μ_Al, μ_Ni and μ_Zr at 1373 K, were predicted on the basis of the thermodynamic properties of the binary systems included in the investigated ternary system. The prediction of exG values was treated as the calculation of excess Gibbs energy values inside a certain area (a Gibbs triangle), provided that all boundary conditions, that is, the values of exG on all legs of the triangle, are known. The exG and L_ijk ternary interaction parameters in the Muggianu extension of the Redlich-Kister formalism are calculated numerically using Wolfram Mathematica 9 software.

  19. Operational numerical weather prediction on a GPU-accelerated cluster supercomputer

    NASA Astrophysics Data System (ADS)

    Lapillonne, Xavier; Fuhrer, Oliver; Spörri, Pascal; Osuna, Carlos; Walser, André; Arteaga, Andrea; Gysi, Tobias; Rüdisühli, Stefan; Osterried, Katherine; Schulthess, Thomas

    2016-04-01

    The local area weather prediction model COSMO is used at MeteoSwiss to provide high resolution numerical weather predictions over the Alpine region. In order to benefit from the latest developments in computer technology, the model was optimized and adapted to run on Graphical Processing Units (GPUs). Thanks to these model adaptations and the acquisition of a dedicated hybrid supercomputer, a new set of operational applications has been introduced at MeteoSwiss: COSMO-1 (1 km deterministic), COSMO-E (2 km ensemble) and KENDA (data assimilation). These new applications correspond to an increase by a factor of 40 in computational load compared to the previous operational setup. We present an overview of the porting approach of the COSMO model to GPUs together with a detailed description of, and performance results on, the new hybrid Cray CS-Storm computer, Piz Kesch.

  20. Defect reaction network in Si-doped InAs. Numerical predictions.

    SciTech Connect

    Schultz, Peter A.

    2015-05-01

    This Report characterizes the defects in the defect reaction network in silicon-doped, n-type InAs predicted with first-principles density functional theory. The reaction network is deduced by following exothermic defect reactions starting with the initially mobile interstitial defects reacting with common displacement damage defects in Si-doped InAs, until culminating in immobile reaction products. The defect reactions and reaction energies are tabulated, along with the properties of all the silicon-related defects in the reaction network. This Report serves to extend the results for the properties of intrinsic defects in bulk InAs, as collated in SAND2013-2477, Simple intrinsic defects in InAs: Numerical predictions, to include Si-containing simple defects likely to be present in a radiation-induced defect reaction sequence.

  1. PSSP-RFE: Accurate Prediction of Protein Structural Class by Recursive Feature Extraction from PSI-BLAST Profile, Physical-Chemical Property and Functional Annotations

    PubMed Central

    Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi

    2014-01-01

    Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Amongst most homology-based approaches, the accuracies of protein structural class prediction are sufficiently high for high similarity datasets, but still far from being satisfactory for low similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high and low similarity datasets. This method is based on Support Vector Machine (SVM) in conjunction with integrated features from position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors through recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low similarity datasets. PMID:24675610
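
    The recursive feature elimination step described above can be sketched with standard tooling; the snippet below uses scikit-learn's RFE wrapper around a linear-kernel SVM on synthetic data, so the feature matrix, class labels, elimination step and retained feature count are all assumptions standing in for the paper's PSSM/PROFEAT/GO features and jackknife protocol.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 300))      # 200 proteins x 300 integrated features (synthetic)
      y = rng.integers(0, 4, size=200)     # four structural classes (synthetic labels)

      # Recursively drop the lowest-ranked features using the linear SVM weights.
      selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=50, step=10)
      X_top = selector.fit_transform(X, y)

      # Final SVM trained on the retained top-ranked features.
      acc = cross_val_score(SVC(kernel="linear"), X_top, y, cv=5).mean()
      print("cross-validated accuracy on synthetic data:", round(acc, 3))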

  2. Flow of variably fluidized granular masses across three-dimensional terrain 2. Numerical predictions and experimental tests

    USGS Publications Warehouse

    Denlinger, R.P.; Iverson, R.M.

    2001-01-01

    Numerical solutions of the equations describing flow of variably fluidized Coulomb mixtures predict key features of dry granular avalanches and water-saturated debris flows measured in physical experiments. These features include time-dependent speeds, depths, and widths of flows as well as the geometry of resulting deposits. Three-dimensional (3-D) boundary surfaces strongly influence flow dynamics because transverse shearing and cross-stream momentum transport occur where topography obstructs or redirects motion. Consequent energy dissipation can cause local deceleration and deposition, even on steep slopes. Velocities of surge fronts and other discontinuities that develop as flows cross 3-D terrain are predicted accurately by using a Riemann solution algorithm. The algorithm employs a gravity wave speed that accounts for different intensities of lateral stress transfer in regions of extending and compressing flow and in regions with different degrees of fluidization. Field observations and experiments indicate that flows in which fluid plays a significant role typically have high-friction margins with weaker interiors partly fluidized by pore pressure. Interaction of the strong perimeter and weak interior produces relatively steep-sided, flat-topped deposits. To simulate these effects, we compute pore pressure distributions using an advection-diffusion model with enhanced diffusivity near flow margins. Although challenges remain in evaluating pore pressure distributions in diverse geophysical flows, Riemann solutions of the depth-averaged 3-D Coulomb mixture equations provide a powerful tool for interpreting and predicting flow behavior. They provide a means of modeling debris flows, rock avalanches, pyroclastic flows, and related phenomena without invoking and calibrating rheological parameters that have questionable physical significance.

  3. Numerical Simulation and Artificial Neural Network Modeling for Predicting Welding-Induced Distortion in Butt-Welded 304L Stainless Steel Plates

    NASA Astrophysics Data System (ADS)

    Narayanareddy, V. V.; Chandrasekhar, N.; Vasudevan, M.; Muthukumaran, S.; Vasantharaja, P.

    2016-02-01

    In the present study, artificial neural network modeling has been employed for predicting welding-induced angular distortions in autogenous butt-welded 304L stainless steel plates. The input data for the neural network have been obtained from a series of three-dimensional finite element simulations of TIG welding for a wide range of plate dimensions. Thermo-elasto-plastic analysis was carried out for 304L stainless steel plates during autogenous TIG welding employing a double-ellipsoidal heat source. The simulated thermal cycles were validated by measuring thermal cycles using thermocouples at predetermined positions, and the simulated distortion values were validated by measuring distortion using a vertical height gauge for three cases. There was a good agreement between the model predictions and the measured values. Then, a multilayer feed-forward back-propagation neural network has been developed using the numerically simulated data. The artificial neural network model developed in the present study predicted the angular distortion accurately.
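
    A minimal stand-in for the feed-forward back-propagation network is sketched below with scikit-learn; the input features (plate length, width, thickness), their ranges and the synthetic distortion target are assumptions, whereas the real network in the study was trained on validated finite element results.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      # Hypothetical plate dimensions in mm: length, width, thickness.
      X = rng.uniform([100.0, 50.0, 2.0], [300.0, 150.0, 8.0], size=(400, 3))
      # Synthetic angular-distortion target (degrees), a placeholder for FE output.
      y = 0.02 * X[:, 0] / X[:, 2] + 0.01 * X[:, 1] + rng.normal(0.0, 0.5, 400)

      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
      )
      model.fit(X, y)
      print("predicted distortion for a 250 x 100 x 5 mm plate:",
            model.predict([[250.0, 100.0, 5.0]])[0])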

  4. Numerical predictions of the turbulent cavitating flow around a marine propeller and an axial turbine

    NASA Astrophysics Data System (ADS)

    Morgut, M.; Jošt, D.; Nobile, E.; Škerlavaj, A.

    2015-12-01

    The numerical predictions of cavitating flow around a marine propeller working in non-uniform inflow and an axial turbine are presented. The cavitating flow is modelled using the homogeneous (mixture) model. Time-dependent simulations are performed for the marine propeller case using OpenFOAM. Three calibrated mass transfer models are alternatively used to model the mass transfer rate due to cavitation, and the two-equation SST (Shear Stress Transport) turbulence model is employed to close the system of governing equations. The predictions of the cavitating flow in an axial turbine are carried out with ANSYS-CFX, where only the native mass transfer model with tuned parameters is used. Steady-state simulations are performed in combination with the SST turbulence model, while time-dependent results are obtained with the more advanced SAS (Scale Adaptive Simulation) SST model. The numerical results agree well with the available experimental measurements, and the simulations performed with the three different calibrated mass transfer models are close to each other for the propeller flow. Regarding the axial turbine, the effect of cavitation on machine efficiency is well reproduced only by the time-dependent simulations.

  5. Development of numerical model for predicting heat generation and temperatures in MSW landfills.

    PubMed

    Hanson, James L; Yeşiller, Nazli; Onnen, Michael T; Liu, Wei-Lien; Oettle, Nicolas K; Marinos, Janelle A

    2013-10-01

    A numerical modeling approach has been developed for predicting temperatures in municipal solid waste landfills. Model formulation and details of boundary conditions are described. Model performance was evaluated using field data from a landfill in Michigan, USA. The numerical approach was based on finite element analysis incorporating transient conductive heat transfer. Heat generation functions representing decomposition of wastes were empirically developed and incorporated to the formulation. Thermal properties of materials were determined using experimental testing, field observations, and data reported in literature. The boundary conditions consisted of seasonal temperature cycles at the ground surface and constant temperatures at the far-field boundary. Heat generation functions were developed sequentially using varying degrees of conceptual complexity in modeling. First a step-function was developed to represent initial (aerobic) and residual (anaerobic) conditions. Second, an exponential growth-decay function was established. Third, the function was scaled for temperature dependency. Finally, an energy-expended function was developed to simulate heat generation with waste age as a function of temperature. Results are presented and compared to field data for the temperature-dependent growth-decay functions. The formulations developed can be used for prediction of temperatures within various components of landfill systems (liner, waste mass, cover, and surrounding subgrade), determination of frost depths, and determination of heat gain due to decomposition of wastes. PMID:23664656

  6. Development of numerical model for predicting heat generation and temperatures in MSW landfills.

    PubMed

    Hanson, James L; Yeşiller, Nazli; Onnen, Michael T; Liu, Wei-Lien; Oettle, Nicolas K; Marinos, Janelle A

    2013-10-01

    A numerical modeling approach has been developed for predicting temperatures in municipal solid waste landfills. Model formulation and details of boundary conditions are described. Model performance was evaluated using field data from a landfill in Michigan, USA. The numerical approach was based on finite element analysis incorporating transient conductive heat transfer. Heat generation functions representing decomposition of wastes were empirically developed and incorporated to the formulation. Thermal properties of materials were determined using experimental testing, field observations, and data reported in literature. The boundary conditions consisted of seasonal temperature cycles at the ground surface and constant temperatures at the far-field boundary. Heat generation functions were developed sequentially using varying degrees of conceptual complexity in modeling. First a step-function was developed to represent initial (aerobic) and residual (anaerobic) conditions. Second, an exponential growth-decay function was established. Third, the function was scaled for temperature dependency. Finally, an energy-expended function was developed to simulate heat generation with waste age as a function of temperature. Results are presented and compared to field data for the temperature-dependent growth-decay functions. The formulations developed can be used for prediction of temperatures within various components of landfill systems (liner, waste mass, cover, and surrounding subgrade), determination of frost depths, and determination of heat gain due to decomposition of wastes.
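
    One plausible shape for the exponential growth-decay heat generation function described above is sketched here; the coefficients and time constants are placeholders for illustration only and are not the values fitted to the Michigan field data.

      import numpy as np

      def heat_generation(t_days, scale=4.0, t_growth=200.0, t_decay=3000.0):
          """Illustrative volumetric heat generation rate [W/m^3] versus waste age [days]."""
          q = scale * (np.exp(-t_days / t_decay) - np.exp(-t_days / t_growth))
          return max(q, 0.0)

      for t in (30, 365, 3650):
          print(f"waste age {t:5d} d  ->  {heat_generation(t):.2f} W/m^3")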

  7. Numerical predictions of viscoelastic properties and dynamic moduli of innovative pothole patching materials

    NASA Astrophysics Data System (ADS)

    Yuan, K. Y.; Yuan, W.; Ju, J. W.; Yang, J. M.; Kao, W.; Carlson, L.

    2013-04-01

    As asphalt pavements age and deteriorate, recurring pothole repair failures and propagating alligator cracks in the asphalt pavements have become a serious issue in our daily life and have resulted in high repair costs for pavements and vehicles. To solve this urgent issue, pothole repair materials with superior durability and long service life are needed. In the present work, revolutionary pothole patching materials with high toughness and high fatigue resistance, reinforced with nano-molecular resins, have been developed to enhance their resistance to traffic loads and the service life of repaired potholes. In particular, DCPD resin (dicyclopentadiene, C10H12) with a Ruthenium-based catalyst is employed to develop controlled properties that are compatible with aggregates and asphalt binders. In this paper, a multi-level numerical micromechanics-based model is developed to predict the viscoelastic properties and dynamic moduli of these innovative nano-molecular resin reinforced pothole patching materials. Irregular coarse aggregates in the finite element analysis are modeled as randomly dispersed multi-layer coated particles. The effective properties of the asphalt mastic, which consists of fine aggregates, tar, cured DCPD and air voids, are theoretically estimated by the homogenization technique of micromechanics in conjunction with the elastic-viscoelastic correspondence principle. Numerical predictions of homogenized viscoelastic properties and dynamic moduli are demonstrated.

  8. Numerical Simulation of Screech Tones from Supersonic Jets: Physics and Prediction

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Zaman, Khairul Q. (Technical Monitor)

    2002-01-01

    The objectives of this project are to: (1) perform a numerical simulation of the jet screech phenomenon; and (2) use the data of the simulations to obtain a better understanding of the physics of jet screech. The original grant period was for three years. This was extended at no cost for an extra year to allow the principal investigator time to publish the results. We would like to report that our research work and results (supported by this grant) have fulfilled both objectives of the grant. The following is a summary of the important accomplishments: (1) We have now demonstrated that it is possible to perform accurate numerical simulations of the jet screech phenomenon. Both the axisymmetric case and the fully three-dimensional case were carried out successfully. It is worthwhile to note that this is the first time the screech tone phenomenon has been successfully simulated numerically; (2) All four screech modes were reproduced in the simulation. The computed screech frequencies and intensities were in good agreement with the NASA Langley Research Center data; (3) The staging phenomenon was reproduced in the simulation; (4) The effects of nozzle lip thickness and jet temperature were studied. Simulated tone frequencies at various nozzle lip thickness and jet temperature were found to agree well with experiments; (5) The simulated data were used to explain, for the first time, why there are two axisymmetric screech modes and two helical/flapping screech modes; (6) The simulated data were used to show that when two tones are observed, they co-exist rather than switching from one mode to the other, back and forth, as some previous investigators have suggested; and (7) Some resources of the grant were used to support the development of new computational aeroacoustics (CAA) methodology. (Our screech tone simulations have benefited because of the availability of these improved methods.)

  9. Numerical method for predicting flow characteristics and performance of nonaxisymmetric nozzles, theory

    NASA Technical Reports Server (NTRS)

    Thomas, P. D.

    1979-01-01

    The theoretical foundation and formulation of a numerical method for predicting the viscous flowfield in and about isolated three dimensional nozzles of geometrically complex configuration are presented. High Reynolds number turbulent flows are of primary interest for any combination of subsonic, transonic, and supersonic flow conditions inside or outside the nozzle. An alternating-direction implicit (ADI) numerical technique is employed to integrate the unsteady Navier-Stokes equations until an asymptotic steady-state solution is reached. Boundary conditions are computed with an implicit technique compatible with the ADI technique employed at interior points of the flow region. The equations are formulated and solved in a boundary-conforming curvilinear coordinate system. The curvilinear coordinate system and computational grid are generated numerically as the solution to an elliptic boundary value problem. A method is developed that automatically adjusts the elliptic system so that the interior grid spacing is controlled directly by the a priori selection of the grid spacing on the boundaries of the flow region.

  10. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    SciTech Connect

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo

    2014-01-15

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of the scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  11. Increasing horizontal resolution in numerical weather prediction and climate simulations: illusion or panacea?

    PubMed

    Wedi, Nils P

    2014-06-28

    The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium Range Weather Forecasts may be substantially altered with emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular, when convective-scale motions start to be resolved. Blunt increases in the model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to accordingly adjust proven numerical techniques. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty--for each part of the model--and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in the forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in the spectral and physical space. One path is to reduce the formal accuracy by which the spectral transforms are computed. The other pathway explores the importance of the ratio used for the horizontal resolution in gridpoint space versus wavenumbers in spectral space. This is relevant for both high-resolution simulations as well as ensemble-based uncertainty estimation.

  12. Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results

    SciTech Connect

    Kujawska, Tamara; Wojcik, Janusz; Nowicki, Andrzej

    2010-03-09

    Recent research has shown that beneficial therapeutic effects in soft tissues can be induced by low power ultrasound (LPUS). For example, increased immunity of cells to stress (among others, thermal stress) can be obtained through the enhanced heat shock protein (Hsp) expression induced by low intensity ultrasound. The possibility to control the Hsp expression enhancement in soft tissues in vivo stimulated by ultrasound could be a potential new therapeutic approach to neurodegenerative diseases, which utilizes the known feature of cells to increase their immunity to stresses through Hsp expression enhancement. Controlling the Hsp expression enhancement by adjusting the exposure level to ultrasound energy would allow the ultrasound-mediated treatment efficiency to be evaluated and optimized. Ultrasonic regimes are controlled by adjusting the pulsed ultrasound wave intensity, frequency, duration, duty cycle and exposure time. Our objective was to develop a numerical model capable of predicting, in space and time, temperature fields induced by a circular focused transducer generating tone bursts in multilayer nonlinear attenuating media, and to compare the numerically calculated results with experimental data in vitro. The acoustic pressure field in multilayer biological media was calculated using our original numerical solver. For prediction of temperature fields the Pennes' bio-heat transfer equation was employed. Temperature field measurements in vitro were carried out in a fresh rat liver using a transducer with 15 mm diameter, 25 mm focal length and 2 MHz central frequency, generating tone bursts with the spatial peak temporal average acoustic intensity varied between 0.325 and 1.95 W/cm^2, duration varied from 20 to 500 cycles at the same 20% duty cycle, and exposure time varied up to 20 minutes. The measurement data were compared with numerical simulation results obtained under experimental boundary conditions. Good agreement between

  13. Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results

    NASA Astrophysics Data System (ADS)

    Kujawska, Tamara; Wójcik, Janusz; Nowicki, Andrzej

    2010-03-01

    Recent research has shown that beneficial therapeutic effects in soft tissues can be induced by low power ultrasound (LPUS). For example, the immunity of cells to stress (including thermal stress) can be increased through enhanced heat shock protein (Hsp) expression induced by low intensity ultrasound. The ability to control this Hsp expression enhancement in soft tissues in vivo with ultrasound is a potential new therapeutic approach to neurodegenerative diseases, exploiting the known capacity of cells to increase their immunity to stress through enhanced Hsp expression. Controlling the Hsp expression enhancement by adjusting the exposure to ultrasound energy would allow the efficiency of ultrasound-mediated treatment to be evaluated and optimized. Ultrasonic regimes are controlled by adjusting the intensity, frequency, duration, duty cycle and exposure time of the pulsed ultrasound waves. Our objective was to develop a numerical model capable of predicting, in space and time, the temperature fields induced by a circular focused transducer generating tone bursts in multilayer nonlinear attenuating media, and to compare the numerically calculated results with in vitro experimental data. The acoustic pressure field in multilayer biological media was calculated using our original numerical solver. For prediction of temperature fields, Pennes' bio-heat transfer equation was employed. Temperature field measurements in vitro were carried out in a fresh rat liver using a transducer with 15 mm diameter, 25 mm focal length and 2 MHz central frequency, generating tone bursts with the spatial peak temporal average acoustic intensity varied between 0.325 and 1.95 W/cm², the burst duration varied from 20 to 500 cycles at a constant 20% duty cycle, and the exposure time varied up to 20 minutes. The measurement data were compared with numerical simulation results obtained under the experimental boundary conditions. Good agreement between the measured and predicted temperature fields was obtained.

  14. A Hemodynamic Predict of an Intra-Aorta Pump Application in Vitro Using Numerical Analysis

    NASA Astrophysics Data System (ADS)

    Gao, Bin; Chen, Ningning; Chang, Yu

    The Intra-Aorta Pump is a novel LVAD that assists the native heart without percutaneous drive-lines. The Intra-Aorta Pump is placed between the radix aortae and the aortic arch to draw blood from the left ventricle into the aorta. To predict how the pressure drop and blood flow change with pump speed, a nonlinear model has been built based on the structure and speed of the Intra-Aorta Pump. To do this, a nonlinear electric circuit analogue of the Intra-Aorta Pump has been developed. The model includes two speed-dependent current sources and a flow-dependent resistance to represent the relationship between the pressure drop across the Intra-Aorta Pump and the flow through the pump as the pump speed changes. The pressure drop and blood flow are derived by solving differential equations with variable coefficients. The parameters of the model are determined by experiment, and the experimental results show that these parameters change distinctly with pump speed. The accuracy of the model is tested experimentally on a test loop. Comparison of the predictions derived from the model with the experimental data shows that the error is less than 15%. The experimental results showed that the model can predict the change of pressure drop and blood flow accurately.
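
    A generic lumped-parameter model of this kind can be sketched as below. The quadratic pump characteristic, its coefficients, and the load resistance and inertance are all assumed illustrative values, not the authors' identified circuit, but the structure, a speed- and flow-dependent pressure source driving flow through a resistive-inertial load, follows the same idea.

        from scipy.integrate import solve_ivp

        # Assumed pump characteristic: dP(omega, Q) = a*omega^2 - b*omega*Q - c*Q^2
        a, b, c = 1.0e-6, 4.0e-4, 0.4          # illustrative coefficients (mmHg, L/min, rpm)
        R, L_in = 15.0, 0.05                   # load resistance [mmHg/(L/min)] and inertance

        def pump_dp(omega, q):
            """Pressure rise across the pump as a function of speed and flow (assumed form)."""
            return a * omega**2 - b * omega * q - c * q**2

        def rhs(t, y):
            q = y[0]
            omega = 6000.0 + 4000.0 * min(t, 1.0)   # speed ramp: 6000 -> 10000 rpm over 1 s
            return [(pump_dp(omega, q) - R * q) / L_in]

        sol = solve_ivp(rhs, (0.0, 2.0), [0.0], max_step=1e-3)
        q_end = sol.y[0, -1]
        print(f"flow at 10000 rpm: {q_end:.1f} L/min, pressure rise: {pump_dp(10000.0, q_end):.0f} mmHg")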

  15. Numerical prediction of heat affected layer in the EDM of aeronautical alloys

    NASA Astrophysics Data System (ADS)

    Izquierdo, B.; Plaza, S.; Sánchez, J. A.; Pombo, I.; Ortega, N.

    2012-10-01

    Electrical discharge machining (EDM) is a popular non-traditional machining process, well suited to accurate machining of complex geometries in hard materials. EDM has been used for decades to machine parts for the aeronautical industry, but surface integrity, and consequently the reliability of the machined parts, has long been questioned because of the thermal nature of the process. In recent years, effort has been put into modeling the EDM process, with thermal modeling being one promising approach. In a previous publication an original model of the EDM process was presented and used to predict material removal rate and surface finish for the EDM of steel. In the present article the capability of that modeling tool to characterize discharge properties and to predict the recast layer distribution when EDMing an aeronautical alloy is analyzed. The EDM of Inconel 718 has been studied and discharge properties have been obtained for four different EDM regimes. The capability of the model to reflect the behavior of more energetic regimes is discussed. The gathered information has been used to simulate the evolution of the recast layer generation process. The results obtained have been validated against experimental measurements, revealing a good correlation between predictions and experimental data. Finally, the energetic efficiency of the discharge process has been simulated for the adjusted EDM regimes.

  16. An evolutionary model-based algorithm for accurate phylogenetic breakpoint mapping and subtype prediction in HIV-1.

    PubMed

    Kosakovsky Pond, Sergei L; Posada, David; Stawiski, Eric; Chappey, Colombe; Poon, Art F Y; Hughes, Gareth; Fearnhill, Esther; Gravenor, Mike B; Leigh Brown, Andrew J; Frost, Simon D W

    2009-11-01

    Genetically diverse pathogens (such as Human Immunodeficiency virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is not a universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparing with subtypes which had previously been assigned revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods. Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance of accurate

  17. Conformations of 1,2-dimethoxypropane and 5-methoxy-1,3-dioxane: are ab initio quantum chemistry predictions accurate?

    NASA Astrophysics Data System (ADS)

    Smith, Grant D.; Jaffe, Richard L.; Yoon, Do. Y.

    1998-06-01

    High-level ab initio quantum chemistry calculations are shown to predict conformer populations of 1,2-dimethoxypropane and 5-methoxy-1,3-dioxane that are consistent with gas-phase NMR vicinal coupling constant measurements. The conformational energies of the cyclic ether 5-methoxy-1,3-dioxane are found to be consistent with those predicted by a rotational isomeric state (RIS) model based upon the acyclic analog 1,2-dimethoxypropane. The quantum chemistry and RIS calculations indicate the presence of strong attractive 1,5 C(H3)⋯O electrostatic interactions in these molecules, similar to those found in 1,2-dimethoxyethane.
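
    As a reminder of how conformer populations follow from computed energies, the snippet below applies a simple Boltzmann weighting to a set of relative conformer energies. The conformer labels and energies are made-up placeholders rather than values from the paper, and degeneracies and free-energy corrections are ignored.

        import numpy as np

        R = 8.314462618e-3                     # gas constant [kJ/(mol K)]
        T = 298.15                             # temperature [K]
        # Hypothetical relative conformer energies [kJ/mol]; placeholders only.
        conformers = {"conformer A": 0.0, "conformer B": 1.2, "conformer C": 2.5}

        energies = np.array(list(conformers.values()))
        weights = np.exp(-energies / (R * T))
        populations = weights / weights.sum()

        for name, p in zip(conformers, populations):
            print(f"{name}: {100.0 * p:.1f}%")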

  18. A Case Study of the Impact of AIRS Temperature Retrievals on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Reale, O.; Atlas, R.; Jusem, J. C.

    2004-01-01

    Large errors in numerical weather prediction are often associated with explosive cyclogenesis. Most studies focus on the under-forecasting error, i.e. cases of rapidly developing cyclones that are poorly predicted in numerical models. However, the over-forecasting error (i.e., predicting an explosively developing cyclone that does not occur in reality) is a very common error that severely impacts the forecasting skill of all models and may also carry economic costs when associated with operational forecasting. Unnecessary precautions taken by marine activities can result in severe economic loss. Moreover, frequent over-forecasting can undermine confidence in operational weather forecasting. Therefore, it is important to understand and reduce predictions of extreme weather associated with explosive cyclones that do not actually develop. In this study we choose a very prominent case of over-forecasting error in the northwestern Pacific. A 960 hPa cyclone develops in less than 24 hours in the 5-day forecast, with a deepening rate of about 30 hPa in one day. The cyclone is not present in the analyses and is thus a case of severe over-forecasting. By assimilating AIRS data, the error is largely eliminated. By following the propagation of the anomaly that generates the spurious cyclone, it is found that a small mid-tropospheric negative geopotential height anomaly over the northern part of the Indian subcontinent in the initial conditions propagates westward, is amplified by orography, and generates a very intense jet streak in the subtropical jet stream, with consequent explosive cyclogenesis over the Pacific. The AIRS assimilation eliminates this anomaly, which may have been caused by erroneous upper-air data, and represents the jet stream more correctly. The energy associated with the jet is distributed over a much broader area and, as a consequence, a multiple but much more moderate cyclogenesis is observed.

  19. A PBL-radiation model for application to regional numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Chang, Chia-Bo

    1989-01-01

    Often in short-range limited-area numerical weather prediction (NWP) of extratropical weather systems, the effects of planetary boundary layer (PBL) processes are considered of secondary importance. However, this may not be the case for regional NWP of mesoscale convective systems over the arid and semi-arid highlands of the southwestern and south-central United States in late spring and summer. Over these dry regions, the PBL can grow quite high, up into the lower middle troposphere (600 mb), due to very effective solar heating, and hence a vigorous air-land thermal interaction can occur. This interaction, representing a major heat source for regional dynamical systems, cannot be ignored. A one-dimensional PBL-radiation model was developed. The model PBL consists of a constant-flux surface layer surmounted by a well-mixed (Ekman) layer. The vertical eddy mixing coefficients for heat and momentum in the surface layer are determined according to surface similarity theory, while their vertical profiles in the Ekman layer are specified with a cubic polynomial. Prognostic equations are used for predicting the height of the nonneutral PBL. The atmospheric radiation is parameterized to define the surface heat source/sink for the growth and decay of the PBL. A series of real-data numerical experiments has been carried out to obtain a physical understanding of how the model performs under various atmospheric and surface conditions. This one-dimensional model will eventually be incorporated into a mesoscale prediction system. The ultimate goal of this research is to improve the NWP of mesoscale convective storms over land.
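
    One standard cubic form for the Ekman-layer eddy diffusivity is the O'Brien (1970) profile, which matches the surface-layer value and slope at the top of the surface layer and decays smoothly to a small residual value at the PBL top. The abstract only states that a cubic polynomial is used, so the particular form and the numbers in this sketch are assumptions for illustration.

        import numpy as np

        kappa, u_star = 0.4, 0.3          # von Karman constant, friction velocity [m/s] (assumed)
        z_s, h = 50.0, 1500.0             # surface-layer top and PBL height [m] (assumed)
        K_h = 0.1                         # residual diffusivity at the PBL top [m^2/s]

        K_s = kappa * u_star * z_s        # neutral surface-layer value at z_s
        dK_s = kappa * u_star             # its vertical derivative at z_s

        def k_profile(z):
            """Cubic K(z) with K(z_s)=K_s, K'(z_s)=dK_s, K(h)=K_h, K'(h)=0."""
            w = (h - z) / (h - z_s)
            return K_h + w**2 * (K_s - K_h + (z - z_s) * (dK_s + 2.0 * (K_s - K_h) / (h - z_s)))

        for z in (50.0, 300.0, 750.0, 1500.0):
            print(f"z = {z:6.0f} m   K = {k_profile(z):6.1f} m^2/s")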

  20. A Maximal Graded Exercise Test to Accurately Predict VO2max in 18-65-Year-Old Adults

    ERIC Educational Resources Information Center

    George, James D.; Bradshaw, Danielle I.; Hyde, Annette; Vehrs, Pat R.; Hager, Ronald L.; Yanowitz, Frank G.

    2007-01-01

    The purpose of this study was to develop an age-generalized regression model to predict maximal oxygen uptake (VO2max) based on a maximal treadmill graded exercise test (GXT; George, 1996). Participants (N = 100), ages 18-65 years, reached a maximal level of exertion (mean plus or minus standard deviation [SD]; maximal heart rate [HR…

  1. Survival outcomes scores (SOFT, BAR, and Pedi-SOFT) are accurate in predicting post-liver transplant survival in adolescents.

    PubMed

    Conjeevaram Selvakumar, Praveen Kumar; Maksimak, Brian; Hanouneh, Ibrahim; Youssef, Dalia H; Lopez, Rocio; Alkhouri, Naim

    2016-09-01

    SOFT and BAR scores utilize recipient, donor, and graft factors to predict the 3-month survival after LT in adults (≥18 years). Recently, Pedi-SOFT score was developed to predict 3-month survival after LT in young children (≤12 years). These scoring systems have not been studied in adolescent patients (13-17 years). We evaluated the accuracy of these scoring systems in predicting the 3-month post-LT survival in adolescents through a retrospective analysis of data from UNOS of patients aged 13-17 years who received LT between 03/01/2002 and 12/31/2012. Recipients of combined organ transplants, donation after cardiac death, or living donor graft were excluded. A total of 711 adolescent LT recipients were included with a mean age of 15.2±1.4 years. A total of 100 patients died post-LT including 33 within 3 months. SOFT, BAR, and Pedi-SOFT scores were all found to be good predictors of 3-month post-transplant survival outcome with areas under the ROC curve of 0.81, 0.80, and 0.81, respectively. All three scores provided good accuracy for predicting 3-month survival post-LT in adolescents and may help clinical decision making to optimize survival rate and organ utilization. PMID:27478012

  2. Is demography destiny? Application of machine learning techniques to accurately predict population health outcomes from a minimal demographic dataset.

    PubMed

    Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve

    2015-01-01

    For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have up-to-date data on the health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and regularly updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, both in the states included in the derivation model (median correlation 0.88) and in those excluded from model development for use as a completely separate validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.
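
    The validation-by-correlation idea can be sketched in a few lines. The example below is illustrative only: it uses synthetic regional data rather than the ACS and health-outcome datasets of the study, an off-the-shelf gradient boosting regressor rather than the authors' model, and simple cross-validation to obtain held-out predictions.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(42)
        n_regions = 300
        X = rng.normal(size=(n_regions, 5))            # stand-ins for e.g. age, income, education, unemployment, % rural
        prevalence = 10.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] * X[:, 3]
        y = prevalence + rng.normal(scale=1.0, size=n_regions)

        model = GradientBoostingRegressor(random_state=0)
        pred = cross_val_predict(model, X, y, cv=10)   # held-out prediction for every region
        print(f"out-of-sample correlation: {np.corrcoef(pred, y)[0, 1]:.2f}")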

  3. Genomic Models of Short-Term Exposure Accurately Predict Long-Term Chemical Carcinogenicity and Identify Putative Mechanisms of Action

    PubMed Central

    Gusenleitner, Daniel; Auerbach, Scott S.; Melia, Tisha; Gómez, Harold F.; Sherr, David H.; Monti, Stefano

    2014-01-01

    Background Despite an overall decrease in incidence of and mortality from cancer, about 40% of Americans will be diagnosed with the disease in their lifetime, and around 20% will die of it. Current approaches to test carcinogenic chemicals adopt the 2-year rodent bioassay, which is costly and time-consuming. As a result, fewer than 2% of the chemicals on the market have actually been tested. However, evidence accumulated to date suggests that gene expression profiles from model organisms exposed to chemical compounds reflect underlying mechanisms of action, and that these toxicogenomic models could be used in the prediction of chemical carcinogenicity. Results In this study, we used a rat-based microarray dataset from the NTP DrugMatrix Database to test the ability of toxicogenomics to model carcinogenicity. We analyzed 1,221 gene-expression profiles obtained from rats treated with 127 well-characterized compounds, including genotoxic and non-genotoxic carcinogens. We built a classifier that predicts a chemical's carcinogenic potential with an AUC of 0.78, and validated it on an independent dataset from the Japanese Toxicogenomics Project consisting of 2,065 profiles from 72 compounds. Finally, we identified differentially expressed genes associated with chemical carcinogenesis, and developed novel data-driven approaches for the molecular characterization of the response to chemical stressors. Conclusion Here, we validate a toxicogenomic approach to predict carcinogenicity and provide strong evidence that, with a larger set of compounds, we should be able to improve the sensitivity and specificity of the predictions. We found that the prediction of carcinogenicity is tissue-dependent and that the results also confirm and expand upon previous studies implicating DNA damage, the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and regenerative pathology in the response to carcinogen exposure. PMID:25058030

  4. Length of sick leave – Why not ask the sick-listed? Sick-listed individuals predict their length of sick leave more accurately than professionals

    PubMed Central

    Fleten, Nils; Johnsen, Roar; Førde, Olav Helge

    2004-01-01

    Background: Knowledge of the factors that accurately predict long-lasting sick leaves is sparse, but information on the medical condition is believed to be necessary to identify persons at risk. Given the current practice of identifying sick-listed individuals at risk of long-lasting sick leaves, the objectives of this study were to examine the diagnostic accuracy of the lengths of sick leave predicted in the Norwegian National Insurance Offices, and to compare these predictions with the self-predictions of the sick-listed. Methods: Based on medical certificates, two National Insurance medical consultants and two National Insurance officers predicted, at day 14, the length of sick leave in 993 consecutive cases of sick leave resulting from musculoskeletal or mental disorders in this 1-year follow-up study. Two months later they reassessed 322 cases based on extended medical certificates. Self-predictions were obtained from 152 sick-listed subjects when their sick leave passed 14 days. Diagnostic accuracy of the predictions was analysed by ROC area, sensitivity, specificity and likelihood ratio, and positive predictive value was included in the analyses of predictive validity. Results: The sick-listed identified sick leave lasting 12 weeks or longer with an ROC area of 80.9% (95% CI 73.7–86.8%), while the corresponding estimates for medical consultants and officers had ROC areas of 55.6% (95% CI 45.6–65.6%) and 56.0% (95% CI 46.6–65.4%), respectively. The predictions of sick-listed males were significantly better than those of female subjects, and older subjects predicted somewhat better than younger subjects. Neither formal medical competence nor additional medical information noticeably improved the diagnostic accuracy based on medical certificates. Conclusion: This study demonstrates that the accuracy of a prognosis based on the medical documentation in sickness absence forms is lower than that of one based on direct communication with the sick-listed themselves.

  5. Numerical prediction of transition of the F-16 wing at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Cummings, Russell M.; Garcia, Joseph A.

    1993-01-01

    A parametric study is being conducted in an effort to numerically predict the extent of natural laminar flow (NLF) on finite swept wings at supersonic speeds. This study is one aspect of a High Speed Research Program (HSRP) to gain an understanding of the technical requirements for high-speed aircraft flight. The parameters addressed in this study are Reynolds number, angle of attack, and leading-edge wing sweep. These parameters were analyzed through the use of an advanced Computational Fluid Dynamics (CFD) flow solver, specifically the ARC 3-D Compressible Navier-Stokes (CNS) flow solver. From the CNS code, pressure coefficients (Cp) are obtained for the various cases. These Cp's are then used to compute the boundary-layer profiles with the 'Kaups and Cebeci' compressible 2-D boundary layer code. Finally, the boundary-layer parameters are processed by a 3-D compressible boundary layer stability code (COSAL) to predict transition. The parametric study consisted of four geometries, which addressed the effects of sweep, and three angles of attack from zero to ten degrees, yielding a total of 12 cases. The above process was substantially automated through a procedure developed in the course of this study. This automation procedure yields a 3-D graphical measure of the extent of laminar flow by predicting the location of the transition from laminar to turbulent flow.

  6. Using High Resolution Numerical Weather Prediction Models to Reduce and Estimate Uncertainty in Flood Forecasting

    NASA Astrophysics Data System (ADS)

    Cole, S. J.; Moore, R. J.; Roberts, N.

    2007-12-01

    Forecast rainfall from Numerical Weather Prediction (NWP) and/or nowcasting systems is a major source of uncertainty for short-term flood forecasting. One approach for reducing and estimating this uncertainty is to use high resolution NWP models that should provide better rainfall predictions. The potential benefit of running the Met Office Unified Model (UM) with a grid spacing of 4 and 1 km compared to the current operational resolution of 12 km is assessed using the January 2005 Carlisle flood in northwest England. These NWP rainfall forecasts, and forecasts from the Nimrod nowcasting system, were fed into the lumped Probability Distributed Model (PDM) and the distributed Grid-to-Grid model to predict river flow at the outlets of two catchments important for flood warning. The results show the benefit of increased resolution in the UM, the benefit of coupling the high- resolution rainfall forecasts to hydrological models and the improvement in timeliness of flood warning that might have been possible. Ongoing work aims to employ these NWP rainfall forecasts in ensemble form as part of a procedure for estimating the uncertainty of flood forecasts.

  7. Predicting playing frequencies for clarinets: A comparison between numerical simulations and simplified analytical formulas.

    PubMed

    Coyle, Whitney L; Guillemain, Philippe; Kergomard, Jean; Dalmont, Jean-Pierre

    2015-11-01

    When designing a wind instrument such as a clarinet, it can be useful to be able to predict the playing frequencies. This paper presents an analytical method to deduce these playing frequencies using the input impedance curve. Specifically there are two control parameters that have a significant influence on the playing frequency, the blowing pressure and reed opening. Four effects are known to alter the playing frequency and are examined separately: the flow rate due to the reed motion, the reed dynamics, the inharmonicity of the resonator, and the temperature gradient within the clarinet. The resulting playing frequencies for the first register of a particular professional level clarinet are found using the analytical formulas presented in this paper. The analytical predictions are then compared to numerically simulated results to validate the prediction accuracy. The main conclusion is that in general the playing frequency decreases above the oscillation threshold because of inharmonicity, then increases above the beating reed regime threshold because of the decrease of the flow rate effect.

  8. Development of a 3D numerical methodology for fast prediction of gun blast induced loading

    NASA Astrophysics Data System (ADS)

    Costa, E.; Lagasco, F.

    2014-05-01

    In this paper, the development of a methodology based on semi-empirical models from the literature to carry out 3D predictions of the pressure loading on surfaces adjacent to a weapon system during firing is presented. This loading results from the impact of the blast wave generated by the projectile exiting the muzzle bore. When a pressure threshold level is exceeded, the loading is potentially capable of inducing unwanted damage to nearby hard structures as well as to frangible panels or electronic equipment. The implemented model is able to quickly predict the distribution of the blast wave parameters over three-dimensional complex geometry surfaces when the weapon design and emplacement data, as well as the propellant and projectile characteristics, are available. Given these capabilities, the proposed methodology is envisaged for use in the preliminary design phase of the combat system to predict adverse effects and identify the most appropriate countermeasures. By providing a preliminary but sensitive estimate of the operative environmental loading, this numerical tool represents a good alternative to more powerful, but time-consuming, advanced computational fluid dynamics tools, whose use can thus be limited to the final phase of the design.

  9. Predicting playing frequencies for clarinets: A comparison between numerical simulations and simplified analytical formulas.

    PubMed

    Coyle, Whitney L; Guillemain, Philippe; Kergomard, Jean; Dalmont, Jean-Pierre

    2015-11-01

    When designing a wind instrument such as a clarinet, it can be useful to be able to predict the playing frequencies. This paper presents an analytical method to deduce these playing frequencies using the input impedance curve. Specifically there are two control parameters that have a significant influence on the playing frequency, the blowing pressure and reed opening. Four effects are known to alter the playing frequency and are examined separately: the flow rate due to the reed motion, the reed dynamics, the inharmonicity of the resonator, and the temperature gradient within the clarinet. The resulting playing frequencies for the first register of a particular professional level clarinet are found using the analytical formulas presented in this paper. The analytical predictions are then compared to numerically simulated results to validate the prediction accuracy. The main conclusion is that in general the playing frequency decreases above the oscillation threshold because of inharmonicity, then increases above the beating reed regime threshold because of the decrease of the flow rate effect. PMID:26627753

  10. Accurate and efficient prediction of fine-resolution hydrologic and carbon dynamic simulations from coarse-resolution models

    NASA Astrophysics Data System (ADS)

    Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning

    2016-02-01

    The topography and the biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from those obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and to enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
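
    The general flavour of a POD-based coarse-to-fine mapping can be sketched as follows. This is a schematic stand-in rather than the published PODMM algorithm: synthetic snapshot pairs replace PAWS+CLM output, the map from coarse fields to POD coefficients is a plain least-squares fit, and there is no error estimator or multi-ROM selection.

        import numpy as np

        rng = np.random.default_rng(1)
        n_train, n_coarse, n_fine, n_modes = 40, 100, 2000, 10

        # Synthetic training pairs standing in for (coarse, fine) model output snapshots.
        latent = rng.normal(size=(n_train, n_modes))
        A_c = rng.normal(size=(n_modes, n_coarse))
        A_f = rng.normal(size=(n_modes, n_fine))
        coarse_train = latent @ A_c
        fine_train = latent @ A_f + 0.05 * rng.normal(size=(n_train, n_fine))

        # POD basis of the fine-resolution snapshots (rows are snapshots).
        fine_mean = fine_train.mean(axis=0)
        coarse_mean = coarse_train.mean(axis=0)
        U, s, Vt = np.linalg.svd(fine_train - fine_mean, full_matrices=False)
        basis = Vt[:n_modes]                                    # leading POD modes
        coeff_train = (fine_train - fine_mean) @ basis.T

        # Least-squares map from a coarse snapshot to the fine-field POD coefficients.
        W, *_ = np.linalg.lstsq(coarse_train - coarse_mean, coeff_train, rcond=None)

        # Downscale a new coarse solution and compare with the corresponding fine field.
        z = rng.normal(size=(1, n_modes))
        coarse_new, fine_true = z @ A_c, z @ A_f
        fine_pred = fine_mean + ((coarse_new - coarse_mean) @ W) @ basis
        err = np.linalg.norm(fine_pred - fine_true) / np.linalg.norm(fine_true)
        print(f"relative downscaling error: {err:.3f}")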

  11. Prognostic models and risk scores: can we accurately predict postoperative nausea and vomiting in children after craniotomy?

    PubMed

    Neufeld, Susan M; Newburn-Cook, Christine V; Drummond, Jane E

    2008-10-01

    Postoperative nausea and vomiting (PONV) is a problem for many children after craniotomy. Prognostic models and risk scores help identify who is at risk for an adverse event such as PONV to help guide clinical care. The purpose of this article is to assess whether an existing prognostic model or risk score can predict PONV in children after craniotomy. The concepts of transportability, calibration, and discrimination are presented to identify what is required to have a valid tool for clinical use. Although previous work may inform clinical practice and guide future research, existing prognostic models and risk scores do not appear to be options for predicting PONV in children undergoing craniotomy. However, until risk factors are further delineated, followed by the development and validation of prognostic models and risk scores that include children after craniotomy, clinical judgment in the context of current research may serve as a guide for clinical care in this population. PMID:18939320

  12. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing.

    PubMed

    Wang, Ting; He, Quanze; Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing.

  13. Coronary Computed Tomographic Angiography Does Not Accurately Predict the Need of Coronary Revascularization in Patients with Stable Angina

    PubMed Central

    Hong, Sung-Jin; Her, Ae-Young; Suh, Yongsung; Won, Hoyoun; Cho, Deok-Kyu; Cho, Yun-Hyeong; Yoon, Young-Won; Lee, Kyounghoon; Kang, Woong Chol; Kim, Yong Hoon; Kim, Sang-Wook; Shin, Dong-Ho; Kim, Jung-Sun; Kim, Byeong-Keuk; Ko, Young-Guk; Choi, Byoung-Wook; Choi, Donghoon; Jang, Yangsoo

    2016-01-01

    Purpose To evaluate the ability of coronary computed tomographic angiography (CCTA) to predict the need of coronary revascularization in symptomatic patients with stable angina who were referred to a cardiac catheterization laboratory for coronary revascularization. Materials and Methods Pre-angiography CCTA findings were analyzed in 1846 consecutive symptomatic patients with stable angina, who were referred to a cardiac catheterization laboratory at six hospitals and were potential candidates for coronary revascularization between July 2011 and December 2013. The number of patients requiring revascularization was determined based on the severity of coronary stenosis as assessed by CCTA. This was compared to the actual number of revascularization procedures performed in the cardiac catheterization laboratory. Results Based on CCTA findings, coronary revascularization was indicated in 877 (48%) and not indicated in 969 (52%) patients. Of the 877 patients indicated for revascularization by CCTA, only 600 (68%) underwent the procedure, whereas 285 (29%) of the 969 patients not indicated for revascularization, as assessed by CCTA, underwent the procedure. When the coronary arteries were divided into 15 segments using the American Heart Association coronary tree model, the sensitivity, specificity, positive predictive value, and negative predictive value of CCTA for therapeutic decision making on a per-segment analysis were 42%, 96%, 40%, and 96%, respectively. Conclusion CCTA-based assessment of coronary stenosis severity does not sufficiently differentiate between coronary segments requiring revascularization versus those not requiring revascularization. Conventional coronary angiography should be considered to determine the need of revascularization in symptomatic patients with stable angina. PMID:27401637

  14. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing.

    PubMed

    Wang, Ting; He, Quanze; Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing. PMID:27441628

  15. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing

    PubMed Central

    Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing. PMID:27441628

  16. A highly accurate protein structural class prediction approach using auto cross covariance transformation and recursive feature elimination.

    PubMed

    Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming

    2015-12-01

    Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets.
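
    The feature-selection-plus-classification stage of such a pipeline can be illustrated with scikit-learn, as below. This is a hedged sketch only: the ACC-transformed PSSM features are stubbed with random data, the class labels are random, and cross-validation replaces the jackknife test, so the reported accuracy is near chance by construction.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.feature_selection import RFE
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_proteins, n_features = 300, 400          # stand-in for ACC features derived from a PSSM
        X = rng.normal(size=(n_proteins, n_features))
        y = rng.integers(0, 4, size=n_proteins)    # four structural classes (all-alpha, all-beta, alpha/beta, alpha+beta)

        model = make_pipeline(
            RFE(SVC(kernel="linear"), n_features_to_select=50, step=0.1),  # SVM-RFE keeps the top-K features
            SVC(kernel="linear"),                                          # final classifier
        )
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"cross-validated accuracy (random data, so near chance): {acc:.2f}")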

  17. Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond

    2015-01-01

    activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.

  18. Numerical Weather Prediction Models on Linux Boxes as tools in meteorological education in Hungary

    NASA Astrophysics Data System (ADS)

    Gyongyosi, A. Z.; Andre, K.; Salavec, P.; Horanyi, A.; Szepszo, G.; Mille, M.; Tasnadi, P.; Weidiger, T.

    2012-04-01

    Education of Meteorologists in Hungary - in line with the Bologna Process - has three stages: BSc, MSc and PhD, and students graduating at each stage get the respective degree. The three year long BSc course in Meteorology can be chosen by undergraduate students in the fields of Geosciences, Environmental Sciences and Physics. Fundamentals in Mathematics (Calculus), (General and Theoretical) Physics and Informatics are emphasized during their elementary education. The two year long MSc course - to which about 15 to 25 students are admitted each year - can be studied only at the Eötvös Loránd University in our country. Our aim is to give a basic education in all fields of Meteorology: Climatology, Atmospheric Physics, Atmospheric Chemistry, Dynamic and Synoptic Meteorology, Numerical Weather Prediction, Modeling of Surface-atmosphere Interactions and Climate Change. Education is performed in two branches: Climate Researcher and Forecaster.

  19. Numerical investigation of temperature distribution in an eroded bend pipe and prediction of erosion reduced thickness.

    PubMed

    Zhu, Hongjun; Feng, Guang; Wang, Qijun

    2014-01-01

    Accurate prediction of erosion thickness is essential for pipe engineering. The objective of the present paper is to study the temperature distribution in an eroded bend pipe and to find a new method for predicting the erosion-reduced thickness. Computational fluid dynamics (CFD) simulations with the FLUENT software are carried out to investigate the temperature field, and the effects of oil inlet rate, oil inlet temperature, and erosion-reduced thickness are examined. The presence of an erosion pit produces an obvious fluctuation of the temperature drop along the extrados of the bend, and the minimum temperature drop occurs at the most severely eroded point. A low inlet temperature or a large inlet velocity leads to a small temperature drop, while a shallow erosion pit causes a large temperature drop. The dimensionless minimum temperature drop is analyzed and a fitting formula is obtained. Using this formula, the erosion-reduced thickness can be calculated from monitoring only the outer surface temperature of the bend pipe. This new method can provide useful guidance for pipeline monitoring and replacement.

  20. Experimental and numerical life prediction of thermally cycled thermal barrier coatings

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Persson, C.; Wigren, J.

    2004-09-01

    This article addresses the predominant degradation modes and life prediction of a plasma-sprayed thermal barrier coating (TBC). The studied TBC system consists of an air-plasma-sprayed bond coat and an air-plasma-sprayed, yttria partially stabilized zirconia top layer on a conventional Hastelloy X substrate. Thermal shock tests of as-sprayed TBC and pre-oxidized TBC specimens were conducted under different burner flame conditions at Volvo Aero Corporation (Trollhättan, Sweden). Finite element models were used to simulate the thermal shock tests. Transient temperature distributions and thermal mismatch stresses in different layers of the coatings during thermal cycling were calculated. The roughness of the interface between the ceramic top coat and the bond coat was modeled through an ideally sinusoidal wavy surface. Bond coat oxidation was simulated through adding an aluminum oxide layer between the ceramic top coat and the bond coat. The calculated stresses indicated that interfacial delamination cracks, initiated in the ceramic top coat at the peak of the asperity of the interface, together with surface cracking, are the main reasons for coating failure. A phenomenological life prediction model for the coating was proposed. This model is accurate within a factor of 3.

  1. Numerical Investigation of Temperature Distribution in an Eroded Bend Pipe and Prediction of Erosion Reduced Thickness

    PubMed Central

    Zhu, Hongjun; Feng, Guang; Wang, Qijun

    2014-01-01

    Accurate prediction of erosion thickness is essential for pipe engineering. The objective of the present paper is to study the temperature distribution in an eroded bend pipe and to find a new method for predicting the erosion-reduced thickness. Computational fluid dynamics (CFD) simulations with the FLUENT software are carried out to investigate the temperature field, and the effects of oil inlet rate, oil inlet temperature, and erosion-reduced thickness are examined. The presence of an erosion pit produces an obvious fluctuation of the temperature drop along the extrados of the bend, and the minimum temperature drop occurs at the most severely eroded point. A low inlet temperature or a large inlet velocity leads to a small temperature drop, while a shallow erosion pit causes a large temperature drop. The dimensionless minimum temperature drop is analyzed and a fitting formula is obtained. Using this formula, the erosion-reduced thickness can be calculated from monitoring only the outer surface temperature of the bend pipe. This new method can provide useful guidance for pipeline monitoring and replacement. PMID:24719576

  2. Geothermal well behaviour prediction after air compress stimulation using one-dimensional transient numerical modelling

    NASA Astrophysics Data System (ADS)

    Yusman, W.; Viridi, S.; Rachmat, S.

    2016-01-01

    Non-discharging geothermal wells are a major problem in the development stages of geothermal fields, and well discharge stimulation is required to initiate flow. Air compress stimulation is one method of triggering fluid flow from the geothermal reservoir. The result of this process can be predicted using the Af/Ac method, but this method sometimes gives uncertain results for several geothermal wells and does not take into account the time required for the geothermal fluid to discharge after the well head is opened. This paper presents a simulation of a non-discharging well under air compress stimulation to predict the well behaviour and the time required. The model inputs consist of geothermal well data recorded during the heating-up process, such as pressure, temperature and mass flow in the water column, and the main feed zone level. The one-dimensional transient numerical model is run based on the Single Fluid Volume Element (SFVE) method. According to the simulation results, the prediction of well behaviour after air compress stimulation is valid under two specific circumstances: a single-phase fluid density between 1 and 28 kg/m3, and a density above 28.5 kg/m3. The first condition corresponds to successful well discharge and the second to failed well discharge after air compress stimulation (based on data from only two wells). The comparison of pf values between simulation and field observation shows differing results for the successfully discharging well. The time required for flow to occur at the well head, as predicted by the SFVE method, differs from the actual field observation. The model needs to be improved by incorporating more geothermal well data and a modified description of the fluid phase conditions inside the wellbore.

  3. A numerical tool for reproducing driver behaviour: experiments and predictive simulations.

    PubMed

    Casucci, M; Marchitto, M; Cacciabue, P C

    2010-03-01

    This paper presents the simulation tool called SDDRIVE (Simple Simulation of Driver performance), which is the numerical, computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a full-scale virtual reality simulator for validating the simulation. The predictive potential of the tool is then shown by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions. PMID:19249745

  4. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for such models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.
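
    The forecast/analysis cycle of a discrete Kalman filter can be written down in a few lines. The sketch below uses an arbitrary two-variable linear model and synthetic observations purely to illustrate the sequential-estimation idea; it is not a data assimilation system and has nothing NWP-specific in it.

        import numpy as np

        rng = np.random.default_rng(0)
        M = np.array([[0.95, 0.10], [0.0, 0.90]])   # linear model operator (toy system)
        H = np.array([[1.0, 0.0]])                  # observe only the first variable
        Q = 0.01 * np.eye(2)                        # model-error covariance
        R = np.array([[0.05]])                      # observation-error covariance

        x_true = np.array([1.0, -0.5])
        x_a, P_a = np.zeros(2), np.eye(2)           # initial analysis and its error covariance

        for _ in range(50):
            # Evolve the truth and generate a noisy observation.
            x_true = M @ x_true + rng.multivariate_normal(np.zeros(2), Q)
            y_obs = H @ x_true + rng.normal(scale=np.sqrt(R[0, 0]), size=1)
            # Forecast step.
            x_f = M @ x_a
            P_f = M @ P_a @ M.T + Q
            # Analysis (update) step.
            K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
            x_a = x_f + K @ (y_obs - H @ x_f)
            P_a = (np.eye(2) - K @ H) @ P_f

        print(f"final analysis error: {np.linalg.norm(x_a - x_true):.3f}")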

  5. Defect reaction network in Si-doped InP : numerical predictions.

    SciTech Connect

    Schultz, Peter Andrew

    2013-10-01

    This Report characterizes the defects in the defect reaction network in silicon-doped, n-type InP deduced from first-principles density functional theory. The reaction network is deduced by following exothermic defect reactions, starting with the initially mobile interstitial defects reacting with common displacement damage defects in Si-doped InP, until culminating in immobile reaction products. The defect reactions and reaction energies are tabulated, along with the properties of all the silicon-related defects in the reaction network. This Report extends the results for intrinsic defects in SAND 2012-3313, "Simple intrinsic defects in InP: Numerical predictions", to include Si-containing simple defects likely to be present in a radiation-induced defect reaction sequence.

  6. Numerical prediction of pressure fluctuations in a prototype pump turbine base on PANS methods

    NASA Astrophysics Data System (ADS)

    Liu, J. T.; Li, Y.; Gao, Y.; Hu, Q.; Wu, Y. L.

    2016-05-01

    Unsteady flow and pressure fluctuations within a prototype pump turbine are numerically studied using a nonlinear Partially Averaged Navier-Stokes (PANS) model. The pump turbine operating at different conditions with a guide vane opening angle of 6° is simulated. The results reveal that the predictions of performance and of relative peak-to-peak amplitude by the PANS approach agree well with the experimental data. The amplitude of the pressure fluctuation in the vaneless space at turbine mode on an "S" curve increases with decreasing flow rate, and it reaches its maximum value when the machine runs close to the runaway line in turbine braking mode. The amplitude of the pressure fluctuation in the vaneless space at turbine braking mode on an "S" curve decreases as the flow rate is reduced. These high pressure fluctuations should be avoided during the design of pump turbines, especially those operating at high-head conditions.

  7. Numerical approaches for predicting two-photon absorption induced single-event effects in semiconductors

    NASA Astrophysics Data System (ADS)

    Hales, Joel M.; Khachatrian, Ani; Roche, Nicolas J.; Buchner, Stephen; Warner, Jeffrey; McMorrow, Dale

    2016-05-01

    Two numerical approaches for determining the charge generated in semiconductors via two-photon absorption (2PA) under conditions relevant for laser-based single-event effects (SEE) experiments are presented. The first approach uses a simple analytical expression incorporating a small number of experimental/material parameters while the second approach employs a comprehensive beam propagation method that accounts for all the complex nonlinear optical (NLO) interactions present. The impact of the excitation conditions, device geometry, and specific NLO interactions on the resulting collected charge in silicon devices is also discussed. These approaches can provide value to the radiation-effects community by predicting the impacts that varying experimental parameters will have on 2PA SEE measurements.
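
    To give a feel for the first, analytical-expression style of estimate, the sketch below propagates a square pulse through silicon while it is depleted by two-photon absorption and accumulates the generated carrier density. The two-photon absorption coefficient, irradiance, pulse duration and depth are nominal assumed numbers, and linear absorption, free-carrier absorption and beam focusing are all ignored, so this is an order-of-magnitude illustration rather than either of the paper's approaches.

        beta = 1.5e-11            # assumed 2PA coefficient of Si near 1.26 um [m/W]
        photon_energy = 1.58e-19  # photon energy at ~1.26 um [J]
        I = 5.0e13                # assumed peak irradiance [W/m^2]
        tau = 1.0e-12             # square-pulse duration [s]
        depth, nz = 20.0e-6, 2000
        dz = depth / nz

        pairs_per_area = 0.0      # generated electron-hole pairs per unit area
        for _ in range(nz):
            generation = beta * I**2 / (2.0 * photon_energy)  # pairs per unit volume per second
            pairs_per_area += generation * tau * dz
            I -= beta * I**2 * dz                             # depletion of the beam by 2PA

        q_e = 1.602e-19
        print(f"deposited charge per unit area: {q_e * pairs_per_area:.2e} C/m^2")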

  8. Numerical prediction of film cooling effectiveness over flat plate using variable turbulent prandtl number closures

    NASA Astrophysics Data System (ADS)

    Ochrymiuk, Tomasz

    2016-06-01

    Numerical simulations were performed to predict the film cooling effectiveness on a flat plate with a three-dimensional discrete-hole film cooling arrangement. The effects of the basic geometrical characteristics of the holes, i.e. diameter D, length L and pitch S/D, were studied. Different turbulent heat transfer models, based on constant and variable turbulent Prandtl number approaches, were considered. The variability of the turbulent Prandtl number Pr_t in the energy equation was represented either by an algebraic relation proposed by Kays and Crawford, or by employing the Abe, Kondoh and Nagano eddy heat diffusivity closure with two differential transport equations for the temperature variance k_θ and its destruction rate ε_θ. The numerical results were compared directly with data from an experiment based on the transient liquid crystal methodology. All implemented models for turbulent heat transfer performed sufficiently well for the considered case. It was confirmed, however, that the two-equation closure can give a detailed look into film cooling problems without using any time-consuming and inherently unsteady models.

  9. Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities

    PubMed Central

    Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin

    2013-01-01

    Previous research has found a relationship between individual differences in children’s precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the present study we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of two years. Additionally, at the last time point, we tested children’s informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3; Ginsburg & Baroody, 2003). We found that children’s numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned, non-symbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction. PMID:24076381

  10. Verification of Numerical Weather Prediction Model Results for Energy Applications in Latvia

    NASA Astrophysics Data System (ADS)

    Sīle, Tija; Cepite-Frisfelde, Daiga; Sennikovs, Juris; Bethers, Uldis

    2014-05-01

    A resolution to increase the production and consumption of renewable energy has been made by EU governments. Most of the renewable energy in Latvia is produced by Hydroelectric Power Plants (HPP), followed by bio-gas, wind power and bio-mass energy production. Wind and HPP power production is sensitive to meteorological conditions. Currently, Numerical Weather Prediction (NWP) models form the basis of weather forecasting. There are numerous methodologies concerning the evaluation of quality of NWP results (Wilks 2011) and their application can be conditional on the forecast end user. The goal of this study is to evaluate the performance of a Weather Research and Forecasting (WRF) model (Skamarock 2008) implementation over the territory of Latvia, focusing on wind speed forecasts and quantitative precipitation forecasts. The target spatial resolution is 3 km. Observational data from Latvian Environment, Geology and Meteorology Centre are used. A number of standard verification metrics are calculated. The sensitivity to the model output interpretation (output spatial interpolation versus nearest gridpoint) is investigated. For the precipitation verification the dichotomous verification metrics are used. Sensitivity to different precipitation accumulation intervals is examined. Skamarock, William C. and Klemp, Joseph B. A time-split nonhydrostatic atmospheric model for weather research and forecasting applications. Journal of Computational Physics. 227, 2008, pp. 3465-3485. Wilks, Daniel S. Statistical Methods in the Atmospheric Sciences. Third Edition. Academic Press, 2011.
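
    For readers unfamiliar with the dichotomous (yes/no) verification mentioned above, the short sketch below computes a few of the standard scores from a 2x2 contingency table of forecast versus observed precipitation occurrence; the threshold, the score selection, and the data are illustrative assumptions, not taken from the study.

        import numpy as np

        def dichotomous_scores(forecast, observed, threshold=0.1):
            """Standard 2x2 contingency-table scores for event forecasts.

            forecast, observed : accumulated precipitation amounts (same units)
            threshold          : event threshold (illustrative value)
            Assumes a non-degenerate table (no zero denominators).
            """
            f = np.asarray(forecast) >= threshold
            o = np.asarray(observed) >= threshold
            hits = np.sum(f & o)
            misses = np.sum(~f & o)
            false_alarms = np.sum(f & ~o)
            pod = hits / (hits + misses)                    # probability of detection
            far = false_alarms / (hits + false_alarms)      # false alarm ratio
            csi = hits / (hits + misses + false_alarms)     # critical success index
            bias = (hits + false_alarms) / (hits + misses)  # frequency bias
            return {"POD": pod, "FAR": far, "CSI": csi, "bias": bias}

        # Example with made-up 6-h accumulations (mm)
        print(dichotomous_scores([0.0, 1.2, 0.3, 4.0], [0.0, 0.8, 0.0, 2.5]))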

  11. Evaluating aerosol impacts on Numerical Weather Prediction in two extreme dust and biomass-burning events

    NASA Astrophysics Data System (ADS)

    Remy, Samuel; Benedetti, Angela; Jones, Luke; Razinger, Miha; Haiden, Thomas

    2014-05-01

    The WMO-sponsored Working Group on Numerical Experimentation (WGNE) set up a project aimed at understanding the importance of aerosols for numerical weather prediction (NWP). Three cases are being investigated by several NWP centres with aerosol capabilities: a severe dust case that affected Southern Europe in April 2012, a biomass burning case in South America in September 2012, and an extreme pollution event in Beijing (China) which took place in January 2013. At ECMWF these cases are being studied using the MACC-II system with radiatively interactive aerosols. Some preliminary results related to the dust and the fire event will be presented here. A preliminary verification of the impact of the aerosol-radiation direct interaction on surface meteorological parameters such as 2m Temperature and surface winds over the region of interest will be presented. Aerosol optical depth (AOD) verification using AERONET data will also be discussed. For the biomass burning case, the impact of using injection heights estimated by a Plume Rise Model (PRM) for the biomass burning emissions will be presented.

  12. Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities.

    PubMed

    Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin

    2013-12-01

    Previous research has found a relationship between individual differences in children's precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the current study, we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of 2 years. In addition, at the final time point, we tested children's informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3). We found that children's numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned nonsymbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction.

  13. aPPRove: An HMM-Based Method for Accurate Prediction of RNA-Pentatricopeptide Repeat Protein Binding Events.

    PubMed

    Harrison, Thomas; Ruiz, Jaime; Sloan, Daniel B; Ben-Hur, Asa; Boucher, Christina

    2016-01-01

    Pentatricopeptide repeat containing proteins (PPRs) bind to RNA transcripts originating from mitochondria and plastids. There are two classes of PPR proteins. The P class contains tandem P-type motif sequences, and the PLS class contains alternating P, L and S type sequences. In this paper, we describe a novel tool that predicts PPR-RNA interaction; specifically, our method, which we call aPPRove, determines where and how a PLS-class PPR protein will bind to RNA when given a PPR and one or more RNA transcripts by using a combinatorial binding code for site specificity proposed by Barkan et al. Our results demonstrate that aPPRove successfully locates how and where a PPR protein belonging to the PLS class can bind to RNA. For each binding event it outputs the binding site, the amino-acid-nucleotide interaction, and its statistical significance. Furthermore, we show that our method can be used to predict binding events for PLS-class proteins using a known edit site and the statistical significance of aligning the PPR protein to that site. In particular, we use our method to make a conjecture regarding an interaction between CLB19 and the second intronic region of ycf3. The aPPRove web server can be found at www.cs.colostate.edu/~approve. PMID:27560805

  14. aPPRove: An HMM-Based Method for Accurate Prediction of RNA-Pentatricopeptide Repeat Protein Binding Events

    PubMed Central

    Harrison, Thomas; Ruiz, Jaime; Sloan, Daniel B.; Ben-Hur, Asa; Boucher, Christina

    2016-01-01

    Pentatricopeptide repeat containing proteins (PPRs) bind to RNA transcripts originating from mitochondria and plastids. There are two classes of PPR proteins. The P class contains tandem P-type motif sequences, and the PLS class contains alternating P, L and S type sequences. In this paper, we describe a novel tool that predicts PPR-RNA interaction; specifically, our method, which we call aPPRove, determines where and how a PLS-class PPR protein will bind to RNA when given a PPR and one or more RNA transcripts by using a combinatorial binding code for site specificity proposed by Barkan et al. Our results demonstrate that aPPRove successfully locates how and where a PPR protein belonging to the PLS class can bind to RNA. For each binding event it outputs the binding site, the amino-acid-nucleotide interaction, and its statistical significance. Furthermore, we show that our method can be used to predict binding events for PLS-class proteins using a known edit site and the statistical significance of aligning the PPR protein to that site. In particular, we use our method to make a conjecture regarding an interaction between CLB19 and the second intronic region of ycf3. The aPPRove web server can be found at www.cs.colostate.edu/~approve. PMID:27560805

  15. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations

    PubMed Central

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-01-01

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412

  16. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations.

    PubMed

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-08-13

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs.

  17. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations.

    PubMed

    Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

    2014-08-13

    Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412

  18. Numerical prediction of flow induced fibers orientation in injection molded polymer composites

    NASA Astrophysics Data System (ADS)

    Oumer, A. N.; Hamidi, N. M.; Mat Sahat, I.

    2015-12-01

    Since the filling stage of injection molding process has important effect on the determination of the orientation state of the fibers, accurate analysis of the flow field for the mold filling stage becomes a necessity. The aim of the paper is to characterize the flow induced orientation state of short fibers in injection molding cavities. A dog-bone shaped model is considered for the simulation and experiment. The numerical model for determination of the fibers orientation during mold-filling stage of injection molding process was solved using Computational Fluid Dynamics (CFD) software called MoldFlow. Both the simulation and experimental results showed that two different regions (or three layers of orientation structures) across the thickness of the specimen could be found: a shell region which is near to the mold cavity wall, and a core region at the middle of the cross section. The simulation results support the experimental observations that for thin plates the probability of fiber alignment to the flow direction near the mold cavity walls is high but low at the core region. It is apparent that the results of this study could assist in decisions regarding short fiber reinforced polymer composites.

  19. Aerothermal and aeroelastic response prediction of aerospace structures in high-speed flows using direct numerical simulation

    NASA Astrophysics Data System (ADS)

    Ostoich, Christopher Mark

    due to a dome-induced horseshoe vortex scouring the panel's surface. Comparisons with reduced-order models of heat transfer indicate that they perform with varying levels of accuracy around some portions of the geometry while completely failing to predict significant heat loads in regions where the dome-influenced flow impacts the ceramic panel. Cumulative effects of flow-thermal coupling at later simulation times on the reduction of panel drag and surface heat transfer are quantified. The second fluid-structure study investigates the interaction between a thin metallic panel and a Mach 2.25 turbulent boundary layer with an initial momentum thickness Reynolds number of 1200. A transient, non-linear, large deformation, 3D finite element solver is developed to compute the dynamic response of the panel. The solver is coupled at the fluid-structure interface with the compressible Navier-Stokes solver, the latter of which is used for a direct numerical simulation of the turbulent boundary layer. In this approach, no simplifying assumptions regarding the structural solution or turbulence modeling are made in order to get detailed solution data. It is found that the thin panel state evolves into a flutter type response characterized by high-amplitude, high-frequency oscillations into the flow. The oscillating panel disturbs the supersonic flow by introducing compression waves, modifying the turbulence, and generating fluctuations in the power exiting the top of the flow domain. The work in this thesis serves as a step forward in structural response prediction in high-speed flows. The results demonstrate the ability of high-fidelity numerical approaches to serve as a guide for reduced-order model improvement as well as to provide accurate and detailed solution data in scenarios where experimental approaches are difficult or impossible.

  20. IrisPlex: a sensitive DNA tool for accurate prediction of blue and brown eye colour in the absence of ancestry information.

    PubMed

    Walsh, Susan; Liu, Fan; Ballantyne, Kaye N; van Oven, Mannis; Lao, Oscar; Kayser, Manfred

    2011-06-01

    A new era of 'DNA intelligence' is arriving in forensic biology, due to the impending ability to predict externally visible characteristics (EVCs) from biological material such as those found at crime scenes. EVC prediction from forensic samples, or from body parts, is expected to help concentrate police investigations towards finding unknown individuals, at times when conventional DNA profiling fails to provide informative leads. Here we present a robust and sensitive tool, termed IrisPlex, for the accurate prediction of blue and brown eye colour from DNA in future forensic applications. We used the six currently most eye colour-informative single nucleotide polymorphisms (SNPs) that previously revealed prevalence-adjusted prediction accuracies of over 90% for blue and brown eye colour in 6168 Dutch Europeans. The single multiplex assay, based on SNaPshot chemistry and capillary electrophoresis, both widely used in forensic laboratories, displays high levels of genotyping sensitivity with complete profiles generated from as little as 31pg of DNA, approximately six human diploid cell equivalents. We also present a prediction model to correctly classify an individual's eye colour, via probability estimation solely based on DNA data, and illustrate the accuracy of the developed prediction test on 40 individuals from various geographic origins. Moreover, we obtained insights into the worldwide allele distribution of these six SNPs using the HGDP-CEPH samples of 51 populations. Eye colour prediction analyses from HGDP-CEPH samples provide evidence that the test and model presented here perform reliably without prior ancestry information, although future worldwide genotype and phenotype data shall confirm this notion. As our IrisPlex eye colour prediction test is capable of immediate implementation in forensic casework, it represents one of the first steps forward in the creation of a fully individualised EVC prediction system for future use in forensic DNA intelligence.

  1. Accurate ab initio prediction of propagation rate coefficients in free-radical polymerization: Acrylonitrile and vinyl chloride

    NASA Astrophysics Data System (ADS)

    Izgorodina, Ekaterina I.; Coote, Michelle L.

    2006-05-01

    A systematic methodology for calculating accurate propagation rate coefficients in free-radical polymerization was designed and tested for vinyl chloride and acrylonitrile polymerization. For small to medium-sized polymer systems, theoretical reaction barriers are calculated using G3(MP2)-RAD. For larger systems, G3(MP2)-RAD barriers can be approximated (to within 1 kJ mol -1) via an ONIOM-based approach in which the core is studied at G3(MP2)-RAD and the substituent effects are modeled with ROMP2/6-311+G(3df,2p). DFT methods (including BLYP, B3LYP, MPW1B95, BB1K and MPWB1K) failed to reproduce the correct trends in the reaction barriers and enthalpies with molecular size, though KMLYP showed some promise as a low cost option for very large systems. Reaction rates are calculated via standard transition state theory in conjunction with the one-dimensional hindered rotor model. The harmonic oscillator approximation was shown to introduce an error of a factor of 2-3, and would be suitable for "order-of-magnitude" estimates. A systematic study of chain length effects indicated that rate coefficients had largely converged to their long chain limit at the dimer radical stage, and the inclusion of the primary substituent of the penultimate unit was sufficient for practical purposes. Solvent effects, as calculated using the COSMO model, were found to be relatively minor. The overall methodology reproduced the available experimental data for both of these monomers within a factor of 2.
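
    A minimal sketch of the standard transition state theory expression referred to above, at the plain Eyring level without the hindered-rotor or standard-state corrections the study applies; the barrier and temperature below are arbitrary illustrative numbers, not values from the record.

        import math

        # Physical constants (SI)
        KB = 1.380649e-23    # Boltzmann constant, J K^-1
        H = 6.62607015e-34   # Planck constant, J s
        R = 8.314462618      # gas constant, J mol^-1 K^-1

        def tst_rate(delta_g_act_kj_mol, temperature=298.15):
            """Eyring/TST rate coefficient from a free energy of activation.

            Returns k in s^-1 for a unimolecular standard state; bimolecular
            propagation steps need an extra standard-state concentration
            factor, which is omitted here for brevity.
            """
            dg = delta_g_act_kj_mol * 1e3  # convert kJ/mol to J/mol
            return (KB * temperature / H) * math.exp(-dg / (R * temperature))

        # Illustrative barrier of 30 kJ/mol at 298 K (made-up number)
        print(f"k ~ {tst_rate(30.0):.3e} s^-1")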

  2. Accurate prediction of secreted substrates and identification of a conserved putative secretion signal for type III secretion systems

    SciTech Connect

    Samudrala, Ram; Heffron, Fred; McDermott, Jason E.

    2009-04-24

    The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
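
    A toy sketch of the kind of sequence-derived features described above (amino acid composition of the N-terminal 30 residues and G+C content of the coding sequence); the feature set, the example sequences, and any downstream classifier wiring are illustrative assumptions, not the authors' implementation.

        from collections import Counter

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def n_terminal_composition(protein_seq, n=30):
            """Fractional amino acid composition of the first n residues."""
            window = protein_seq[:n].upper()
            counts = Counter(window)
            return {aa: counts.get(aa, 0) / len(window) for aa in AMINO_ACIDS}

        def gc_content(coding_seq):
            """G+C fraction of a nucleotide coding sequence."""
            seq = coding_seq.upper()
            return (seq.count("G") + seq.count("C")) / len(seq)

        # Made-up example sequences, for illustration only
        protein = "MSKITLSPQNFRIQKQETTLLKEKSTEKNSLAK"
        gene = "ATGAGCAAAATTACCCTGTCTCCGCAGAACTTTCGT"
        features = n_terminal_composition(protein)
        features["gc"] = gc_content(gene)
        print(features)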

  3. Numerical Prediction of the Thermodynamic Properties of Ternary Al-Ni-Pd Alloys

    NASA Astrophysics Data System (ADS)

    Zagula-Yavorska, Maryana; Romanowska, Jolanta; Kotowski, Sławomir; Sieniawski, Jan

    2016-01-01

    Thermodynamic properties of the ternary Al-Ni-Pd system, such as exGAlNiPd, µAl(AlNiPd), µNi(AlNiPd) and µPd(AlNiPd) at 1,373 K, were predicted on the basis of the thermodynamic properties of the binary systems included in the investigated ternary system. The prediction of exGAlNiPd values was treated as the calculation of values of the exG function inside a certain area (a Gibbs triangle) when all boundary conditions, that is the values of exG on all legs of the triangle (exGAlNi, exGAlPd, exGNiPd), are known. This approach is contrary to finding a function value outside a certain area when the function value inside this area is known. exG and LAl,Ni,Pd ternary interaction parameters in the Muggianu extension of the Redlich-Kister formalism were calculated numerically using the Excel program and its Solver add-in. The accepted values of the third-component mole fraction xx ranged from 0.01 to 0.1. Values of the LAlNiPd parameters in the Redlich-Kister formula are different for different xx values, but the values of the thermodynamic functions exGAlNiPd, µAl(AlNiPd), µNi(AlNiPd) and µPd(AlNiPd) do not differ significantly for different xx values. The choice of the xx value does not influence the accuracy of the calculations.
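
    A small numerical sketch of the kind of calculation described above: a ternary excess Gibbs energy assembled from binary Redlich-Kister expansions plus a single ternary interaction term (the symmetric Muggianu-type extension). The parameter values are made-up placeholders, not the fitted Al-Ni-Pd parameters from the study, and the ternary term is assumed composition-independent for simplicity.

        def binary_redlich_kister(x_i, x_j, L):
            """Binary Redlich-Kister term: x_i * x_j * sum_v L[v] * (x_i - x_j)**v."""
            return x_i * x_j * sum(Lv * (x_i - x_j) ** v for v, Lv in enumerate(L))

        def ternary_excess_g(x, L_bin, L_tern=0.0):
            """Excess Gibbs energy of a ternary from binary R-K terms + ternary term.

            x      : (x1, x2, x3), mole fractions summing to 1
            L_bin  : dict {(i, j): [L0, L1, ...]} of binary parameters (J/mol)
            L_tern : ternary interaction parameter (J/mol), assumed constant here
            """
            x1, x2, x3 = x
            g = 0.0
            for (i, j), L in L_bin.items():
                g += binary_redlich_kister(x[i], x[j], L)
            g += x1 * x2 * x3 * L_tern
            return g

        # Placeholder parameters (J/mol), purely illustrative
        L_bin = {(0, 1): [-160000.0, 30000.0],   # "Al-Ni"
                 (0, 2): [-180000.0, 20000.0],   # "Al-Pd"
                 (1, 2): [-10000.0]}             # "Ni-Pd"
        print(ternary_excess_g((0.3, 0.5, 0.2), L_bin, L_tern=-50000.0))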

  4. Automatic Earthquake Shear Stress Measurement Method Developed for Accurate Time- Prediction Analysis of Forthcoming Major Earthquakes Along Shallow Active Faults

    NASA Astrophysics Data System (ADS)

    Serata, S.

    2006-12-01

    The Serata Stressmeter has been developed to measure and monitor earthquake shear stress build-up along shallow active faults. The development work carried out over the past 25 years has established the Stressmeter as an automatic stress measurement system to study the timing of forthcoming major earthquakes in support of the current earthquake prediction studies based on statistical analysis of seismological observations. In early 1982, a series of major man-made earthquakes (magnitude 4.5-5.0) suddenly occurred in an area over a deep underground potash mine in Saskatchewan, Canada. By measuring the underground stress condition of the mine, the direct cause of the earthquake was disclosed. The cause was successfully eliminated by controlling the stress condition of the mine. The Japanese government was interested in this development and the Stressmeter was introduced to the Japanese government research program for earthquake stress studies. In Japan the Stressmeter was first utilized for direct measurement of the intrinsic lateral tectonic stress gradient G. The measurement, conducted at the Mt. Fuji Underground Research Center of the Japanese government, disclosed the constant natural gradients of maximum and minimum lateral stresses in an excellent agreement with the theoretical value, i.e., G = 0.25. All the conventional methods of overcoring, hydrofracturing and deformation, which were introduced to compete with the Serata method, failed, demonstrating the fundamental difficulties of the conventional methods. The intrinsic lateral stress gradient determined by the Stressmeter for the Japanese government was found to be consistent with all the other measurements made by the Stressmeter in Japan. The stress measurement results obtained by the major international stress measurement work in the Hot Dry Rock Projects conducted in USA, England and Germany are found to be in good agreement with the Stressmeter results obtained in Japan. Based on this broad agreement, a solid geomechanical

  5. Predicting College Students' First Year Success: Should Soft Skills Be Taken into Consideration to More Accurately Predict the Academic Achievement of College Freshmen?

    ERIC Educational Resources Information Center

    Powell, Erica Dion

    2013-01-01

    This study presents a survey developed to measure the skills of entering college freshmen in the areas of responsibility, motivation, study habits, literacy, and stress management, and explores the predictive power of this survey as a measure of academic performance during the first semester of college. The survey was completed by 334 incoming…

  6. Predicting Antimicrobial Resistance Prevalence and Incidence from Indicators of Antimicrobial Use: What Is the Most Accurate Indicator for Surveillance in Intensive Care Units?

    PubMed Central

    Fortin, Élise; Platt, Robert W.; Fontela, Patricia S.; Buckeridge, David L.; Quach, Caroline

    2015-01-01

    Objective The optimal way to measure antimicrobial use in hospital populations, as a complement to surveillance of resistance is still unclear. Using respiratory isolates and antimicrobial prescriptions of nine intensive care units (ICUs), this study aimed to identify the indicator of antimicrobial use that predicted prevalence and incidence rates of resistance with the best accuracy. Methods Retrospective cohort study including all patients admitted to three neonatal (NICU), two pediatric (PICU) and four adult ICUs between April 2006 and March 2010. Ten different resistance / antimicrobial use combinations were studied. After adjustment for ICU type, indicators of antimicrobial use were successively tested in regression models, to predict resistance prevalence and incidence rates, per 4-week time period, per ICU. Binomial regression and Poisson regression were used to model prevalence and incidence rates, respectively. Multiplicative and additive models were tested, as well as no time lag and a one 4-week-period time lag. For each model, the mean absolute error (MAE) in prediction of resistance was computed. The most accurate indicator was compared to other indicators using t-tests. Results Results for all indicators were equivalent, except for 1/20 scenarios studied. In this scenario, where prevalence of carbapenem-resistant Pseudomonas sp. was predicted with carbapenem use, recommended daily doses per 100 admissions were less accurate than courses per 100 patient-days (p = 0.0006). Conclusions A single best indicator to predict antimicrobial resistance might not exist. Feasibility considerations such as ease of computation or potential external comparisons could be decisive in the choice of an indicator for surveillance of healthcare antimicrobial use. PMID:26710322

  7. Microdosing of a Carbon-14 Labeled Protein in Healthy Volunteers Accurately Predicts Its Pharmacokinetics at Therapeutic Dosages.

    PubMed

    Vlaming, M L H; van Duijn, E; Dillingh, M R; Brands, R; Windhorst, A D; Hendrikse, N H; Bosgra, S; Burggraaf, J; de Koning, M C; Fidder, A; Mocking, J A J; Sandman, H; de Ligt, R A F; Fabriek, B O; Pasman, W J; Seinen, W; Alves, T; Carrondo, M; Peixoto, C; Peeters, P A M; Vaes, W H J

    2015-08-01

    Preclinical development of new biological entities (NBEs), such as human protein therapeutics, requires considerable expenditure of time and costs. Poor prediction of pharmacokinetics in humans further reduces net efficiency. In this study, we show for the first time that pharmacokinetic data of NBEs in humans can be successfully obtained early in the drug development process by the use of microdosing in a small group of healthy subjects combined with ultrasensitive accelerator mass spectrometry (AMS). After only minimal preclinical testing, we performed a first-in-human phase 0/phase 1 trial with a human recombinant therapeutic protein (RESCuing Alkaline Phosphatase, human recombinant placental alkaline phosphatase [hRESCAP]) to assess its safety and kinetics. Pharmacokinetic analysis showed dose linearity from microdose (53 μg) [14C]-hRESCAP to therapeutic doses (up to 5.3 mg) of the protein in healthy volunteers. This study demonstrates the value of a microdosing approach in a very small cohort for accelerating the clinical development of NBEs. PMID:25869840

  8. A new accurate ground-state potential energy surface of ethylene and predictions for rotational and vibrational energy levels

    NASA Astrophysics Data System (ADS)

    Delahaye, Thibault; Nikitin, Andrei; Rey, Michaël; Szalay, Péter G.; Tyuterev, Vladimir G.

    2014-09-01

    In this paper we report a new ground state potential energy surface for ethylene (ethene) C2H4 obtained from extended ab initio calculations. The coupled-cluster approach with the perturbative inclusion of the connected triple excitations CCSD(T) and correlation consistent polarized valence basis set cc-pVQZ was employed for computations of electronic ground state energies. The fit of the surface included 82 542 nuclear configurations using sixth order expansion in curvilinear symmetry-adapted coordinates involving 2236 parameters. A good convergence for variationally computed vibrational levels of the C2H4 molecule was obtained with a RMS(Obs.-Calc.) deviation of 2.7 cm-1 for fundamental bands centers and 5.9 cm-1 for vibrational bands up to 7800 cm-1. Large scale vibrational and rotational calculations for 12C2H4, 13C2H4, and 12C2D4 isotopologues were performed using this new surface. Energy levels for J = 20 up to 6000 cm-1 are in a good agreement with observations. This represents a considerable improvement with respect to available global predictions of vibrational levels of 13C2H4 and 12C2D4 and rovibrational levels of 12C2H4.

  9. Accurate Predictions of Mean Geomagnetic Dipole Excursion and Reversal Frequencies, Mean Paleomagnetic Field Intensity, and the Radius of Earth's Core Using McLeod's Rule

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.; Conrad, Joy

    1996-01-01

    The geomagnetic spatial power spectrum R_n(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum ⟨R_nc(c)⟩ is inversely proportional to (2n + 1) for 1 < n ≤ N_E. McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R_n for 3 ≤ n ≤ 12. Extrapolation to the degree 1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 x 10^22 A m^2 rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 μT rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution χ² with 2n + 1 degrees of freedom is assigned to (2n + 1)R_nc/⟨R_nc⟩. Extending this to the dipole implies that an exceptionally weak absolute dipole moment (≤ 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration for such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field
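
    The core of the calibration described above is a one-parameter fit of the form R_n = K/(2n + 1) to the mid-degree spectrum, followed by extrapolation to the dipole (n = 1). The sketch below does this with made-up spectrum values, since the Magsat model coefficients are not reproduced in the record.

        import numpy as np

        def fit_mcleod_constant(degrees, Rn):
            """Least-squares fit of R_n = K / (2n + 1) for the constant K."""
            x = 1.0 / (2.0 * np.asarray(degrees) + 1.0)
            y = np.asarray(Rn)
            return float(np.sum(x * y) / np.sum(x * x))  # closed-form 1-parameter fit

        # Illustrative core-surface power values for degrees 3..12 (made up)
        degrees = np.arange(3, 13)
        noise = 1 + 0.05 * np.random.default_rng(0).standard_normal(10)
        Rn = 1.0e9 / (2 * degrees + 1) * noise
        K = fit_mcleod_constant(degrees, Rn)
        R1_expected = K / 3.0  # extrapolated expectation for the dipole, n = 1
        print(f"K = {K:.3e}, expected dipole power R_1 = {R1_expected:.3e}")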

  10. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    PubMed

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing
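
    For reference, the Q10 coefficient mentioned above follows the standard two-point definition; the sketch below computes it from a pair of metabolic rate measurements, with made-up values standing in for the prawn oxygen-consumption data.

        def q10(rate1, temp1, rate2, temp2):
            """Temperature coefficient Q10 = (R2/R1)**(10 / (T2 - T1)), temps in deg C."""
            return (rate2 / rate1) ** (10.0 / (temp2 - temp1))

        # Illustrative metabolic rates (arbitrary units) at 10 C and 20 C
        print(f"Q10 = {q10(rate1=1.0, temp1=10.0, rate2=2.3, temp2=20.0):.2f}")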

  11. Infectious titres of sheep scrapie and bovine spongiform encephalopathy agents cannot be accurately predicted from quantitative laboratory test results.

    PubMed

    González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda

    2012-11-01

    It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).

  12. A new accurate ground-state potential energy surface of ethylene and predictions for rotational and vibrational energy levels

    SciTech Connect

    Delahaye, Thibault Rey, Michaël Tyuterev, Vladimir G.; Nikitin, Andrei; Szalay, Péter G.

    2014-09-14

    In this paper we report a new ground state potential energy surface for ethylene (ethene) C{sub 2}H{sub 4} obtained from extended ab initio calculations. The coupled-cluster approach with the perturbative inclusion of the connected triple excitations CCSD(T) and correlation consistent polarized valence basis set cc-pVQZ was employed for computations of electronic ground state energies. The fit of the surface included 82 542 nuclear configurations using sixth order expansion in curvilinear symmetry-adapted coordinates involving 2236 parameters. A good convergence for variationally computed vibrational levels of the C{sub 2}H{sub 4} molecule was obtained with a RMS(Obs.–Calc.) deviation of 2.7 cm{sup −1} for fundamental bands centers and 5.9 cm{sup −1} for vibrational bands up to 7800 cm{sup −1}. Large scale vibrational and rotational calculations for {sup 12}C{sub 2}H{sub 4}, {sup 13}C{sub 2}H{sub 4}, and {sup 12}C{sub 2}D{sub 4} isotopologues were performed using this new surface. Energy levels for J = 20 up to 6000 cm{sup −1} are in a good agreement with observations. This represents a considerable improvement with respect to available global predictions of vibrational levels of {sup 13}C{sub 2}H{sub 4} and {sup 12}C{sub 2}D{sub 4} and rovibrational levels of {sup 12}C{sub 2}H{sub 4}.

  13. Stable, high-order SBP-SAT finite difference operators to enable accurate simulation of compressible turbulent flows on curvilinear grids, with application to predicting turbulent jet noise

    NASA Astrophysics Data System (ADS)

    Byun, Jaeseung; Bodony, Daniel; Pantano, Carlos

    2014-11-01

    Improved order-of-accuracy discretizations often require careful consideration of their numerical stability. We report on new high-order finite difference schemes using Summation-By-Parts (SBP) operators along with the Simultaneous-Approximation-Terms (SAT) boundary condition treatment for first and second-order spatial derivatives with variable coefficients. In particular, we present a highly accurate operator for SBP-SAT-based approximations of second-order derivatives with variable coefficients for Dirichlet and Neumann boundary conditions. These terms are responsible for approximating the physical dissipation of kinetic and thermal energy in a simulation, and contain grid metrics when the grid is curvilinear. Analysis using the Laplace transform method shows that strong stability is ensured with Dirichlet boundary conditions while weaker stability is obtained for Neumann boundary conditions. Furthermore, the benefits of the scheme are shown in the direct numerical simulation (DNS) of a Mach 1.5 compressible turbulent supersonic jet using curvilinear grids and skew-symmetric discretization. In particular, we show that the improved methods allow minimization of the numerical filter often employed in these simulations and we discuss the qualities of the simulation.
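
    To make the SBP idea above concrete, the sketch below builds the classical second-order-accurate first-derivative SBP operator D = H^{-1} Q on a uniform grid and verifies the summation-by-parts property Q + Q^T = diag(-1, 0, ..., 0, 1). This is the textbook low-order constant-coefficient operator, not the high-order variable-coefficient operators developed in the study.

        import numpy as np

        def sbp_first_derivative(n, h):
            """Second-order SBP first-derivative operator D = H^{-1} Q on n points."""
            H = h * np.eye(n)
            H[0, 0] = H[-1, -1] = 0.5 * h                  # boundary-modified norm
            Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))   # skew-symmetric interior part
            Q[0, 0], Q[-1, -1] = -0.5, 0.5                 # boundary closure
            return np.linalg.inv(H) @ Q, H, Q

        n, h = 21, 0.05
        D, H, Q = sbp_first_derivative(n, h)
        B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
        assert np.allclose(Q + Q.T, B)                     # summation-by-parts property
        x = np.linspace(0.0, 1.0, n)
        print(np.max(np.abs(D @ x**2 - 2 * x)[1:-1]))      # exact for interior points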

  14. Noncontrast computed tomography can predict the outcome of shockwave lithotripsy via accurate stone measurement and abdominal fat distribution determination.

    PubMed

    Geng, Jiun-Hung; Tu, Hung-Pin; Shih, Paul Ming-Chen; Shen, Jung-Tsung; Jang, Mei-Yu; Wu, Wen-Jen; Li, Ching-Chia; Chou, Yii-Her; Juan, Yung-Shun

    2015-01-01

    Urolithiasis is a common disease of the urinary system. Extracorporeal shockwave lithotripsy (SWL) has become one of the standard treatments for renal and ureteral stones; however, the success rates range widely and failure of stone disintegration may cause additional outlay, alternative procedures, and even complications. We used the data available from noncontrast abdominal computed tomography (NCCT) to evaluate the impact of stone parameters and abdominal fat distribution on calculus-free rates following SWL. We retrospectively reviewed 328 patients who had urinary stones and had undergone SWL from August 2012 to August 2013. All of them received pre-SWL NCCT; 1 month after SWL, radiography was arranged to evaluate the condition of the fragments. These patients were classified into stone-free group and residual stone group. Unenhanced computed tomography variables, including stone attenuation, abdominal fat area, and skin-to-stone distance (SSD) were analyzed. In all, 197 (60%) were classified as stone-free and 132 (40%) as having residual stone. The mean ages were 49.35 ± 13.22 years and 55.32 ± 13.52 years, respectively. On univariate analysis, age, stone size, stone surface area, stone attenuation, SSD, total fat area (TFA), abdominal circumference, serum creatinine, and the severity of hydronephrosis revealed statistical significance between these two groups. From multivariate logistic regression analysis, the independent parameters impacting SWL outcomes were stone size, stone attenuation, TFA, and serum creatinine. [Adjusted odds ratios and (95% confidence intervals): 9.49 (3.72-24.20), 2.25 (1.22-4.14), 2.20 (1.10-4.40), and 2.89 (1.35-6.21) respectively, all p < 0.05]. In the present study, stone size, stone attenuation, TFA and serum creatinine were four independent predictors for stone-free rates after SWL. These findings suggest that pretreatment NCCT may predict the outcomes after SWL. Consequently, we can use these predictors for selecting
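
    As a schematic of the multivariate analysis described above, the sketch below fits a logistic regression for stone-free status and converts the coefficients into adjusted odds ratios with 95% confidence intervals; the column names and the synthetic data are illustrative assumptions, not the study's dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 300
        # Synthetic predictors standing in for the NCCT-derived variables
        df = pd.DataFrame({
            "stone_size_mm": rng.normal(9, 3, n),
            "attenuation_hu": rng.normal(900, 250, n),
            "total_fat_area": rng.normal(250, 80, n),
            "creatinine": rng.normal(1.0, 0.3, n),
        })
        # Outcome generated so larger/denser stones lower the stone-free probability
        logit = -(0.15 * (df.stone_size_mm - 9) + 0.003 * (df.attenuation_hu - 900))
        df["stone_free"] = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

        X = sm.add_constant(df.drop(columns="stone_free"))
        fit = sm.Logit(df["stone_free"].astype(float), X).fit(disp=0)
        odds_ratios = np.exp(fit.params)          # adjusted odds ratios
        ci = np.exp(fit.conf_int())               # 95% confidence intervals
        print(pd.concat([odds_ratios.rename("OR"),
                         ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))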

  15. Prospect of Using Numerical Dynamo Model for Prediction of Geomagnetic Secular Variation

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Tangborn, Andrew

    2003-01-01

    Modeling of the Earth's core has reached a level of maturity where the incorporation of observations into the simulations through data assimilation has become feasible. Data assimilation is a method by which observations of a system are combined with a model output (or forecast) to obtain a best guess of the state of the system, called the analysis. The analysis is then used as an initial condition for the next forecast. By doing assimilation, not only shall we be able to partially predict the secular variation of the core field, but we could also use observations to further our understanding of dynamical states in the Earth's core. One of the first steps in the development of an assimilation system is a comparison between the observations and the model solution. The highly turbulent nature of core dynamics, along with the absence of any regular external forcing and constraint (which occurs in atmospheric dynamics, for example) means that short time comparisons (approx. 1000 years) cannot be made between model and observations. In order to make sensible comparisons, a direct insertion assimilation method has been implemented. In this approach, magnetic field observations at the Earth's surface have been substituted into the numerical model, such that the ratio of the multipole components to the dipole component from observation is adjusted at the core-mantle boundary and extended to the interior of the core, while the total magnetic energy remains unchanged. This adjusted magnetic field is then used as the initial field for a new simulation. In this way, a time tugged simulation is created which can then be compared directly with observations. We present numerical solutions with and without data insertion and discuss their implications for the development of a more rigorous assimilation system.
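
    A heavily simplified sketch of the direct-insertion idea described above: rescale the model's non-dipole Gauss coefficients so that the non-dipole-to-dipole ratio matches the observed field, then renormalize so the total (coefficient-wise) energy is unchanged. The energy measure, data layout, and coefficient ordering are simplifying assumptions for illustration, not the actual assimilation code.

        import numpy as np

        def direct_insertion(model_coeffs, obs_coeffs, dipole_idx):
            """Adjust model spherical-harmonic coefficients toward the observed
            non-dipole/dipole ratio while conserving total squared amplitude."""
            m = np.asarray(model_coeffs, float)
            o = np.asarray(obs_coeffs, float)
            dip = np.zeros_like(m, bool)
            dip[dipole_idx] = True

            ratio_obs = np.linalg.norm(o[~dip]) / np.linalg.norm(o[dip])
            ratio_mod = np.linalg.norm(m[~dip]) / np.linalg.norm(m[dip])
            adjusted = m.copy()
            adjusted[~dip] *= ratio_obs / ratio_mod                    # impose observed ratio
            adjusted *= np.linalg.norm(m) / np.linalg.norm(adjusted)   # conserve "energy"
            return adjusted

        # Toy example: first 3 entries play the role of the dipole (g10, g11, h11)
        model = np.array([-30.0, -2.0, 5.0, 1.0, 3.0, -0.5, 0.8])
        obs = np.array([-29.0, -1.5, 4.7, 2.0, 2.5, -1.0, 1.2])
        print(direct_insertion(model, obs, dipole_idx=[0, 1, 2]))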

  16. Two dimensional numerical prediction of deflagration-to-detonation transition in porous energetic materials.

    PubMed

    Narin, B; Ozyörük, Y; Ulas, A

    2014-05-30

    This paper describes a two-dimensional code developed for analyzing the two-phase deflagration-to-detonation transition (DDT) phenomenon in granular, energetic, solid, explosive ingredients. The two-dimensional model is constructed in a full two-phase formulation and is based on a highly coupled system of partial differential equations involving basic flow conservation equations and some constitutive relations borrowed from some one-dimensional studies that appeared in open literature. The whole system is solved using an optimized high-order accurate, explicit, central-difference scheme with a selective-filtering/shock-capturing (SF-SC) technique to augment central differencing and prevent excessive dispersion. The sources of the equations describing particle-gas interactions in terms of momentum and energy transfers make the equation system quite stiff, and hence its explicit integration difficult. To ease the difficulties, a time-split approach is used allowing higher time steps. In the paper, the physical model for the sources of the equation system is given for a typical explosive, and several numerical calculations are carried out to assess the developed code. Microscale intergranular and/or intragranular effects including pore collapse, sublimation, pyrolysis, etc. are not taken into account for ignition and growth, and a basic temperature switch is applied in calculations to control ignition in the explosive domain. Results for the one-dimensional DDT phenomenon are in good agreement with experimental and computational results available in the literature. A typical shaped-charge wave-shaper case study is also performed to test the two-dimensional features of the code and it is observed that results are in good agreement with those of commercial software. PMID:24721693

  17. Two dimensional numerical prediction of deflagration-to-detonation transition in porous energetic materials.

    PubMed

    Narin, B; Ozyörük, Y; Ulas, A

    2014-05-30

    This paper describes a two-dimensional code developed for analyzing the two-phase deflagration-to-detonation transition (DDT) phenomenon in granular, energetic, solid, explosive ingredients. The two-dimensional model is constructed in a full two-phase formulation and is based on a highly coupled system of partial differential equations involving basic flow conservation equations and some constitutive relations borrowed from some one-dimensional studies that appeared in open literature. The whole system is solved using an optimized high-order accurate, explicit, central-difference scheme with a selective-filtering/shock-capturing (SF-SC) technique to augment central differencing and prevent excessive dispersion. The sources of the equations describing particle-gas interactions in terms of momentum and energy transfers make the equation system quite stiff, and hence its explicit integration difficult. To ease the difficulties, a time-split approach is used allowing higher time steps. In the paper, the physical model for the sources of the equation system is given for a typical explosive, and several numerical calculations are carried out to assess the developed code. Microscale intergranular and/or intragranular effects including pore collapse, sublimation, pyrolysis, etc. are not taken into account for ignition and growth, and a basic temperature switch is applied in calculations to control ignition in the explosive domain. Results for the one-dimensional DDT phenomenon are in good agreement with experimental and computational results available in the literature. A typical shaped-charge wave-shaper case study is also performed to test the two-dimensional features of the code and it is observed that results are in good agreement with those of commercial software.

  18. Some Techniques for the Objective Analysis of Humidity for Regional Scale Numerical Weather Prediction.

    NASA Astrophysics Data System (ADS)

    Rasmussen, Robert Gary

    Several topics relating to the objective analysis of humidity for regional scale numerical weather prediction are investigated. These include: (1) sampling the humidity field; (2) choosing an analysis scheme; (3) choosing an analysis variable; (4) using surface data to diagnose upper -air humidity (SFC-DIAG); (5) using cloud analysis data to diagnose surface and upper-air humidities (3DNEPH-DIAG); and (6) modeling the humidity lateral autocorrelation function. Regression equations for the diagnosed humidities and several correlation models are developed and validated. Four types of data are used in a preliminary demonstration: observations (radiosonde and surface), SFC-DIAG data, 3DNEPH-DIAG data, and forecast data from the Drexel/NCAR Limited-Area and Mesoscale Prediction System (LAMPS). The major conclusions are: (1) independent samples of relative humidity can be obtained by sampling at intervals of two days and 1750 km, on the average; (2) Gandin's optimum interpolation (OI) is preferable to Cressman's successive correction and Panofsky's surface fitting schemes; (3) relative humidity (RH) is a better analysis variable than dew-point depression; (4) RH*, the square root of (1-RH), is better than RH; (5) both surface and cloud analysis data can be used to diagnose the upper-air humidity; (6) pooling dense data prior to OI analysis can improve the quality of the analysis and reduce its computational burden; (7) iteratively pooling data is economical; (8) for the types of data considered, use of more than about eight data in an OI point analysis cannot be justified by expectations of further reducing the analysis error variance; and (9) the statistical model in OI is faulty in that an analyzed humidity can be biased too much toward the first guess.
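
    For readers unfamiliar with Gandin-style optimum interpolation, the sketch below performs a single-point OI analysis: the analysis increment is a weighted sum of observation-minus-background residuals, with weights obtained from background and observation error covariances. The Gaussian correlation model, error variances, and all numbers are illustrative assumptions, not values from the dissertation.

        import numpy as np

        def oi_point_analysis(bg_at_point, bg_at_obs, obs, dists_obs_point,
                              dists_obs_obs, sigma_b=0.12, sigma_o=0.08, length=300.0):
            """Optimum interpolation of one scalar (e.g. RH*) at a single analysis point.

            dists_obs_point : distances (km) from each observation to the analysis point
            dists_obs_obs   : matrix of distances (km) between observations
            """
            corr = lambda d: np.exp(-(np.asarray(d) / length) ** 2)  # assumed correlation
            B_oo = sigma_b**2 * corr(dists_obs_obs)    # background error cov between obs sites
            b_op = sigma_b**2 * corr(dists_obs_point)  # background error cov, obs to point
            R = sigma_o**2 * np.eye(len(obs))          # observation error covariance
            w = np.linalg.solve(B_oo + R, b_op)        # OI weights
            return bg_at_point + w @ (np.asarray(obs) - np.asarray(bg_at_obs))

        # Three nearby observations of RH* with a first-guess field (made-up values)
        print(oi_point_analysis(bg_at_point=0.55, bg_at_obs=[0.50, 0.60, 0.58],
                                obs=[0.45, 0.63, 0.57],
                                dists_obs_point=[50.0, 120.0, 200.0],
                                dists_obs_obs=[[0, 90, 160], [90, 0, 110], [160, 110, 0]]))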

  19. Impact of aerosols on the forecast accuracy of solar irradiance calculated by a numerical weather prediction model

    NASA Astrophysics Data System (ADS)

    Shimose, Ken-ichi; Ohtake, Hideaki; Fonseca, Joao Gari da Silva; Takashima, Takumi; Oozeki, Takashi; Yamada, Yoshinori

    2014-10-01

    The impact of aerosols on the forecast accuracy of solar irradiance calculated by a fine-scale, one-day-ahead, operational numerical weather prediction (NWP) model is investigated in this study. In order to investigate the impact of aerosols only, the clear sky period is chosen, defined as periods when there are no clouds in the observation data and in the forecast data at the same time. The evaluation of the forecast accuracy of the solar irradiance is done at a single observation point that is sometimes affected by aerosol events. The analysis period is one year from April 2010 to March 2011. During the clear sky period, the root mean square errors (RMSE) of the global horizontal irradiance (GHI), direct normal irradiance (DNI), and diffuse horizontal irradiance (DHI) are 40.0 W m-2, 84.0 W m-2, and 47.9 W m-2, respectively. During one extreme event, the RMSEs of the GHI, DNI, and DHI are 70.1 W m-2, 211.6 W m-2, and 141.7 W m-2, respectively. It is revealed that the extreme events were caused by aerosols such as dust or haze. In order to investigate the impact of the aerosols, sensitivity experiments on the aerosol optical depth (AOD) are performed for the extreme events. The best result is obtained by changing the AOD to 2.5 times the original AOD. This changed AOD is consistent with the satellite observation. Thus, it is our conclusion that an accurate aerosol forecast is important for the forecast accuracy of the solar irradiance.

  20. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS)  receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to have removal of outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform a lot of field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-arts on the same dataset and the new data fusion method is practically applied in our driverless car.
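
    As a rough illustration of the predictive-model stage described above, the sketch below fits a small ARMA model to a window of recent navigation residuals and flags the newest measurement as an outlier if it disagrees with the one-step prediction by more than a few predicted standard deviations. The model order, threshold, and data are illustrative assumptions, not the paper's tuned configuration or its grid-constraint logic.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        def predict_and_check(history, new_measurement, order=(2, 0, 1), n_sigma=3.0):
            """One-step ARMA prediction plus a simple consistency check on a new sample."""
            fit = ARIMA(np.asarray(history, float), order=order).fit()
            forecast = fit.get_forecast(steps=1)
            predicted = float(forecast.predicted_mean[0])
            sigma = float(forecast.se_mean[0])
            is_outlier = abs(new_measurement - predicted) > n_sigma * sigma
            return predicted, sigma, is_outlier

        # Made-up easting residuals (m) from a GPS/DR track
        rng = np.random.default_rng(3)
        history = 0.1 * np.sin(np.arange(60) / 5.0) + rng.normal(0, 0.02, 60)
        print(predict_and_check(history, new_measurement=history[-1] + 1.0))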

  1. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS)  receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to have removal of outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform a lot of field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-arts on the same dataset and the new data fusion method is practically applied in our driverless car. PMID:26927108

  2. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to provide precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation data. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are used to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we perform extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce the position errors accumulated by DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been applied in practice in our driverless car. PMID:26927108

  3. Profile-QSAR: a novel meta-QSAR method that combines activities across the kinase family to accurately predict affinity, selectivity, and cellular activity.

    PubMed

    Martin, Eric; Mukherjee, Prasenjit; Sullivan, David; Jansen, Johanna

    2011-08-22

    Profile-QSAR is a novel 2D predictive model building method for kinases. This "meta-QSAR" method models the activity of each compound against a new kinase target as a linear combination of its predicted activities against a large panel of 92 previously studied kinases comprising 115 assays. Profile-QSAR starts with a sparse incomplete kinase by compound (KxC) activity matrix, used to generate Bayesian QSAR models for the 92 "basis-set" kinases. These Bayesian QSARs generate a complete "synthetic" KxC activity matrix of predictions. These synthetic activities are used as "chemical descriptors" to train partial-least squares (PLS) models, from modest amounts of medium-throughput screening data, for predicting activity against new kinases. The Profile-QSAR predictions for the 92 kinases (115 assays) gave a median external R²(ext) = 0.59 on 25% held-out test sets. The method has proven accurate enough to predict pairwise kinase selectivities with a median correlation of R²(ext) = 0.61 for 958 kinase pairs with at least 600 common compounds. It has been further expanded by adding a "C(k)XC" cellular activity matrix to the KxC matrix to predict cellular activity for 42 kinase-driven cellular assays with median R²(ext) = 0.58 for 24 target modulation assays and R²(ext) = 0.41 for 18 cell proliferation assays. The 2D Profile-QSAR, along with the 3D Surrogate AutoShim, are the foundations of an internally developed iterative medium-throughput screening (IMTS) methodology for virtual screening (VS) of compound archives as an alternative to experimental high-throughput screening (HTS). The method has been applied to 20 actual prospective kinase projects. Biological results have so far been obtained in eight of them. Q² values ranged from 0.3 to 0.7. Hit rates at 10 μM for experimentally tested compounds varied from 25% to 80%, except in K5, which was a special case aimed specifically at finding "type II" binders, where none of the compounds were predicted to be
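
    As a schematic illustration of the two-stage idea (not the authors' implementation), the sketch below uses predicted activities against a basis set of kinases as the descriptor vector for a partial-least-squares model of a new kinase; the synthetic activity matrix and the measured values are random placeholder data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)

    # Stage 1 output (placeholder): "synthetic" KxC matrix of predicted activities
    # for 500 archive compounds against a basis set of 92 previously modeled kinases.
    synthetic_kxc = rng.normal(6.0, 1.0, size=(500, 92))

    # Medium-throughput measurements for a new kinase on a 200-compound subset
    # (placeholder values loosely correlated with the synthetic descriptors).
    train_idx = rng.choice(500, size=200, replace=False)
    measured = synthetic_kxc[train_idx].mean(axis=1) + rng.normal(0.0, 0.3, size=200)

    # Stage 2: PLS model that maps the synthetic activities ("chemical descriptors")
    # to measured activity against the new kinase.
    pls = PLSRegression(n_components=10)
    pls.fit(synthetic_kxc[train_idx], measured)

    # Predict the full archive against the new kinase for virtual screening.
    predicted = pls.predict(synthetic_kxc).ravel()
    print(predicted[:5])
    ```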

  4. Numerical prediction of transition of the F-16 wing at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Cummings, Russell M.

    1993-01-01

    This work is part of the high-speed research program currently underway at NASA. The project has the goal of understanding the technical requirements for supersonic-hypersonic flight. Specifically, this research is part of a continuing project to study the laminar flow over swept wings at high speeds and involves the numerical prediction of the flow about the F-16XL wing. The research uses the CNS/ARC3D codes and the resulting crossflow velocity components to estimate transition locations on the wing. Effects of angle of attack on the extent of laminar flow were found to be minimal. This result can be attributed to the fact that a laminar flow airfoil was used in this study, which has a continuous favorable pressure gradient over approximately the first 20 percent of the chord for angles of attack up to 10 degrees. It should also be noted that even beyond 20 percent chord the pressure gradient slowly continued to increase and never decreased before 90 percent chord, except for the more highly swept cases in which separation occurs. Angles of attack greater than 10 degrees were not considered since this study assumes natural laminar flow for normal supersonic cruise flight conditions.

  5. Locating the Turbulent Gray Zone in High-Resolution Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Simon, J. S.; Zhou, B.; Chow, F. K.

    2015-12-01

    The turbulent gray zone, or terra incognita, is a range of grid resolutions where the grid is too coarse to use a large eddy simulation (LES), but too fine to use a one-dimensional planetary boundary layer (PBL) scheme. The presence of the gray zone is a problem for numerical weather prediction (NWP) practitioners in both research and operational capacities. Generally, the gray zone is considered to span from O(100 m) to O(1 km); however, these limits are only approximations. The inadequacies of turbulence models in the gray zone have been shown to have a considerable influence on resolved-scale dynamics. As computational resources become more available, higher-resolution atmospheric models are inevitable, so it is important that guidelines exist for choosing an appropriate grid resolution. Here we consider the behavior of LES closures at intermediate resolutions and deduce some basic criteria for a properly resolved atmospheric LES. Idealized scenarios for convective and sheared atmospheric cases are considered first, followed by a real case in the Southern Great Plains.

  6. Predicting geomorphic evolution through integration of numerical-model scenarios and topographic/bathymetric-survey updates

    NASA Astrophysics Data System (ADS)

    Plant, N. G.; Long, J.; Dalyander, S.; Thompson, D.; Miselis, J. L.

    2013-12-01

    Natural resource and hazard management of barrier islands requires an understanding of geomorphic changes associated with long-term processes and storms. Uncertainty exists in understanding how long-term processes interact with the geomorphic changes caused by storms and the resulting perturbations of the long-term evolution trajectories. We use high-resolution data sets to initialize and correct high-fidelity numerical simulations of oceanographic forcing and resulting barrier island evolution. We simulate two years of observed storms to determine the individual and cumulative impacts of these events. Results are separated into cross-shore and alongshore components of sediment transport and compared with observed topographic and bathymetric changes during these time periods. The discrete island change induced by these storms is integrated with previous knowledge of long-term net alongshore sediment transport to project island evolution. The approach has been developed and tested using data collected at the Chandeleur Island chain off the coast of Louisiana (USA). The simulation time period included impacts from tropical and winter storms, as well as a human-induced perturbation associated with construction of a sand berm along the island shoreline. The predictions and observations indicated that storm and long-term processes both contribute to the migration, lowering, and disintegration of the artificial berm and natural island. Further analysis will determine the relative importance of cross-shore and alongshore sediment transport processes and the dominant time scales that drive each of these processes and subsequent island morphologic response.

  7. A case study of GOES-15 imager bias characterization with a numerical weather prediction model

    NASA Astrophysics Data System (ADS)

    Ren, Lu

    2016-09-01

    The infrared imager onboard the Geostationary Operational Environmental Satellite 15 (GOES-15) provides temporally continuous observations over a limited spatial domain. To quantify bias of the GOES-15 imager, observations from four infrared channels (2, 3, 4, and 6) are compared with simulations from a numerical weather prediction model and a radiative transfer model. One-day clear-sky infrared observations from the GOES-15 imager over an oceanic domain during nighttime are selected. Two datasets, Global Forecast System (GFS) analysis and ERA-Interim reanalysis, are used as inputs to the radiative transfer model. The results show that the magnitudes of the biases for the GOES-15 surface channels are approximately 1 K for both datasets, whereas the magnitude of the bias for the GOES-15 water vapor channel can reach 5.5 K using the GFS dataset and 2.5 K using the ERA dataset. The GOES-15 surface channels show positive dependencies on scene temperature, whereas the water vapor channel has a weak dependence on scene temperature. The strong dependence of bias on sensor zenith angle for the GOES-15 water vapor channel using GFS analysis implies that large biases might exist in GFS water vapor profiles.

  8. Post Processing Numerical Weather Prediction Model Rainfall Forecasts for Use in Ensemble Streamflow Forecasting in Australia

    NASA Astrophysics Data System (ADS)

    Shrestha, D. L.; Robertson, D.; Bennett, J.; Ward, P.; Wang, Q. J.

    2012-12-01

    Through the water information research and development alliance (WIRADA) project, CSIRO is conducting research to improve flood and short-term streamflow forecasting services delivered by the Australian Bureau of Meteorology. WIRADA aims to build and test systems to generate ensemble flood and short-term streamflow forecasts with lead times of up to 10 days by integrating rainfall forecasts from Numerical Weather Prediction (NWP) models and hydrological modelling. Here we present an overview of the latest progress towards developing this system. Rainfall during the forecast period is a major source of uncertainty in streamflow forecasting. Ensemble rainfall forecasts are used in streamflow forecasting to characterise the rainfall uncertainty. In Australia, NWP models provide forecasts of rainfall and other weather conditions for lead times of up to 10 days. However, rainfall forecasts from Australian NWP models are deterministic and often contain systematic errors. We use a simplified Bayesian joint probability (BJP) method to post-process rainfall forecasts from the latest generation of Australian NWP models. The BJP method generates reliable and skilful ensemble rainfall forecasts. The post-processed rainfall ensembles are then used to force a semi-distributed conceptual rainfall runoff model to produce ensemble streamflow forecasts. The performance of the ensemble streamflow forecasts is evaluated on a number of Australian catchments and the benefits of using post processed rainfall forecasts are demonstrated.

  9. Identifying Precipitation Types Using Dual-Polarization-Based Radar and Numerical Weather Prediction Model Data

    NASA Astrophysics Data System (ADS)

    Seo, B. C.; Bradley, A.; Krajewski, W. F.

    2015-12-01

    The recent dual-polarization upgrade of the NEXRAD radars has helped improve the characterization of microphysical processes in precipitation and thus has enabled precipitation estimation based on the identified precipitation types. While this polarimetric capability promises enhanced accuracy in quantitative precipitation estimation (QPE), recent studies show that the polarimetric estimates are still affected by uncertainties arising from the radar beam geometry/sampling space associated with the vertical variability of precipitation. The authors first focus on evaluating the NEXRAD hydrometeor classification product using ground reference data (e.g., ASOS) that provide simple categories of the observed precipitation types (e.g., rain, snow, and freezing rain). They also investigate classification uncertainty features caused by the variability of precipitation between the ground and the altitudes where the radar samples. Since this variability is closely related to the atmospheric conditions (e.g., temperature) near the surface, useful information (e.g., critical thickness and temperature profile) that is not available in radar observations is retrieved from numerical weather prediction (NWP) model data such as the Rapid Refresh (RAP)/High Resolution Rapid Refresh (HRRR). The NWP-retrieved information and polarimetric radar data are used together to improve the accuracy of precipitation type identification near the surface. The authors highlight major improvements and discuss limitations of the real-time application.

  10. Overview of numerical codes developed for predicted electrothermal deicing of aircraft blades

    NASA Technical Reports Server (NTRS)

    Keith, Theo G.; De Witt, Kenneth J.; Wright, William B.; Masiulaniec, K. Cyril

    1988-01-01

    An overview of the deicing computer codes that have been developed at the University of Toledo under sponsorship of the NASA-Lewis Research Center is presented. These codes simulate the transient heat conduction and phase change occurring in an electrothermal deicier pad that has an arbitrary accreted ice shape on its surface. The codes are one-dimensional rectangular, two-dimensional rectangular, and two-dimensional with a coordinate transformation to model the true blade geometry. All modifications relating to the thermal physics of the deicing problem that have been incorporated into the codes will be discussed. Recent results of reformulating the codes using different numerical methods to increase program efficiency are described. In particular, this reformulation has enabled a more comprehensive two-dimensional code to run in much less CPU time than the original version. The code predictions are compared with experimental data obtained in the NASA-Lewis Icing Research Tunnel with a UH1H blade fitted with a B. F. Goodrich electrothermal deicer pad. Both continuous and cyclic heater firing cases are considered. The major objective in this comparison is to illustrate which codes give acceptable results in different regions of the airfoil for different heater firing sequences.
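
    The transient heat-conduction problem these codes solve can be illustrated with a much-reduced example. The sketch below (an illustrative simplification, not one of the University of Toledo codes) integrates the 1-D heat equation through a layered deicer pad with an explicit finite-difference scheme and a heater firing at an internal node; material properties, spacing, and heater power are assumed values, and the ice phase change is omitted.

    ```python
    import numpy as np

    # Much-simplified 1-D explicit finite-difference model of a heated deicer pad.
    nz, dz = 50, 1.0e-3          # 50 nodes spaced 1 mm through the pad and ice
    alpha = 1.0e-6               # thermal diffusivity, m^2/s (assumed uniform)
    rho_cp = 2.0e6               # volumetric heat capacity, J/(m^3 K)
    q_heater = 1.0e6             # heater power density, W/m^3, applied at one node
    heater_node = 25
    dt = 0.4 * dz ** 2 / alpha   # satisfies the explicit stability criterion

    T = np.full(nz, -10.0)       # initial temperature, deg C
    for _ in range(2000):        # march forward in time
        lap = np.zeros(nz)
        lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz ** 2
        source = np.zeros(nz)
        source[heater_node] = q_heater / rho_cp
        T[1:-1] += dt * (alpha * lap[1:-1] + source[1:-1])
        T[0], T[-1] = -10.0, -10.0   # outer surfaces held at ambient (crude BC)

    print("Peak temperature near the heater: %.1f deg C" % T.max())
    ```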

  11. Homogenizing surface pressure time-series from operational numerical weather prediction models for geodetic applications

    NASA Astrophysics Data System (ADS)

    Dobslaw, H.

    2016-07-01

    Global surface pressure grids from 14.5 years of 6-hourly analyses out of both the operational ECMWF weather prediction model and ERA-Interim are mapped to a common reference orography by means of ECMWF's mean sea-level pressure diagnostic. The approach reduces both relative biases and residual variability by about one order of magnitude and thereby achieves consistency between the two data sets at the level of about 1 hPa. Remaining differences instead reflect temperature biases and resolution limitations of the reanalysis data set, and are no longer related to the local roughness of the orography or to changes in the spatial resolution of the operational model. The presented reduction method therefore makes it possible to obtain surface pressure time series with the long-term consistency of a reanalysis from an operational numerical weather model with much higher resolution and much shorter latency, making the results suitable for geodetic near-real-time applications that require continuously updated time series homogeneous over many years.
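
    The mapping of surface pressure to a common reference orography can be illustrated with a standard hydrostatic reduction. The sketch below is a simplified stand-in for ECMWF's mean sea-level pressure diagnostic, not the diagnostic itself: it moves a surface pressure from the model orography height to a reference height with the hypsometric equation and an assumed layer-mean temperature; all input values are illustrative.

    ```python
    import math

    G = 9.80665    # gravity, m s^-2
    RD = 287.05    # gas constant for dry air, J kg^-1 K^-1

    def reduce_pressure(p_sfc_hpa, z_model_m, z_ref_m, t_mean_k):
        """Hydrostatically map surface pressure from the model orography height
        to a common reference height using a layer-mean temperature."""
        return p_sfc_hpa * math.exp(-G * (z_ref_m - z_model_m) / (RD * t_mean_k))

    # Example: model grid point at 812 m, common reference orography at 650 m.
    print(round(reduce_pressure(918.4, 812.0, 650.0, 283.0), 1), "hPa")
    ```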

  12. Alfvenic Turbulence from the Sun to 65 Solar Radii: Numerical predictions.

    NASA Astrophysics Data System (ADS)

    Perez, J. C.; Chandran, B. D. G.

    2015-12-01

    The upcoming NASA Solar Probe Plus (SPP) mission will fly to within 9 solar radii from the solar surface, about 7 times closer to the Sun than any previous spacecraft has ever reached. This historic mission will gather unprecedented remote-sensing data and the first in-situ measurements of the plasma in the solar atmosphere, which will revolutionize our knowledge and understanding of turbulence and other processes that heat the solar corona and accelerate the solar wind. This close to the Sun the background solar-wind properties are highly inhomogeneous. As a result, outward-propagating Alfven waves (AWs) arising from the random motions of the photospheric magnetic-field footpoints undergo strong non-WKB reflections and trigger a vigorous turbulent cascade. In this talk I will discuss recent progress in the understanding of reflection-driven Alfven turbulence in this scenario by means of high-resolution numerical simulations, with the goal of predicting the detailed nature of the velocity and magnetic field fluctuations that the SPP mission will measure. In particular, I will place special emphasis on relating the simulations to relevant physical mechanisms that might govern the radial evolution of the turbulence spectra of outward/inward-propagating fluctuations and discuss the conditions that lead to universal power-laws.

  13. Validation of numerical prediction of dynamic derivatives: The DLR-F12 and the Transcruiser test cases

    NASA Astrophysics Data System (ADS)

    Mialon, Bruno; Khrabrov, Alex; Khelil, Saloua Ben; Huebner, Andreas; Da Ronch, Andrea; Badcock, Ken; Cavagna, Luca; Eliasson, Peter; Zhang, Mengmeng; Ricci, Sergio; Jouhaud, Jean-Christophe; Rogé, Gilbert; Hitzel, Stephan; Lahuta, Martin

    2011-11-01

    The dynamic derivatives are widely used in linear aerodynamic models in order to determine the flying qualities of an aircraft: the ability to predict them reliably, quickly and sufficiently early in the design process is vital in order to avoid late and costly component redesigns. This paper describes experimental and computational research dealing with the determination of dynamic derivatives carried out within the FP6 European project SimSAC. Numerical and experimental results are compared for two aircraft configurations: a generic civil transport aircraft, wing-fuselage-tail configuration called the DLR-F12 and a generic Transonic CRuiser, which is a canard configuration. Static and dynamic wind tunnel tests have been carried out for both configurations and are briefly described within this paper. The data generated for both the DLR-F12 and TCR configurations include force and pressure coefficients obtained during small amplitude pitch, roll and yaw oscillations while the data for the TCR configuration also include large amplitude oscillations, in order to investigate the dynamic effects on nonlinear aerodynamic characteristics. In addition, dynamic derivatives have been determined for both configurations with a large panel of tools, from linear aerodynamic (Vortex Lattice Methods) to CFD. This work confirms that an increase in fidelity level enables the dynamic derivatives to be calculated more accurately. Linear aerodynamics tools are shown to give satisfactory results but are very sensitive to the geometry/mesh input data. Although all the quasi-steady CFD approaches give comparable results (robustness) for steady dynamic derivatives, they do not allow the prediction of unsteady components for the dynamic derivatives (angular derivatives with respect to time): this can be done with either a fully unsteady approach i.e. with a time-marching scheme or with frequency domain solvers, both of which provide comparable results for the DLR-F12 test case. As far as

  14. Accurate prediction of hard-sphere virial coefficients B6 to B12 from a compressibility-based equation of state

    NASA Astrophysics Data System (ADS)

    Hansen-Goos, Hendrik

    2016-04-01

    We derive an analytical equation of state for the hard-sphere fluid that is within 0.01% of computer simulations for the whole range of the stable fluid phase. In contrast, the commonly used Carnahan-Starling equation of state deviates by up to 0.3% from simulations. The derivation uses the functional form of the isothermal compressibility from the Percus-Yevick closure of the Ornstein-Zernike relation as a starting point. Two additional degrees of freedom are introduced, which are constrained by requiring the equation of state to (i) recover the exact fourth virial coefficient B4 and (ii) involve only integer coefficients on the level of the ideal gas, while providing best possible agreement with the numerical result for B5. Virial coefficients B6 to B10 obtained from the equation of state are within 0.5% of numerical computations, and coefficients B11 and B12 are within the error of numerical results. We conjecture that even higher virial coefficients are reliably predicted.
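
    For context on the accuracy being discussed, the widely used Carnahan-Starling equation of state implies a simple closed form for the reduced virial coefficients, which can be compared against exact or numerically computed values. The sketch below is an independent illustration of that comparison, not the equation of state derived in the paper; the reference values for B4 and B5 are approximate literature numbers.

    ```python
    # Carnahan-Starling: Z = (1 + eta + eta^2 - eta^3) / (1 - eta)^3
    #                      = 1 + sum_{n>=1} (n^2 + 3n) eta^n,
    # so the reduced virial coefficient B_{n+1}/v0^n is n^2 + 3n
    # (v0 = volume of a single sphere, eta = packing fraction).

    def cs_reduced_virial(order):
        """Reduced virial coefficient B_order implied by Carnahan-Starling."""
        n = order - 1
        return n * n + 3 * n

    reference = {2: 4.0, 3: 10.0, 4: 18.36, 5: 28.22}   # approximate exact values
    for k, exact in reference.items():
        print(f"B{k}: Carnahan-Starling = {cs_reduced_virial(k):5.2f}, reference = {exact:5.2f}")
    ```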

  15. Unprecedently Large-Scale Kinase Inhibitor Set Enabling the Accurate Prediction of Compound-Kinase Activities: A Way toward Selective Promiscuity by Design?

    PubMed

    Christmann-Franck, Serge; van Westen, Gerard J P; Papadatos, George; Beltran Escudie, Fanny; Roberts, Alexander; Overington, John P; Domine, Daniel

    2016-09-26

    Drug discovery programs frequently target members of the human kinome and try to identify small-molecule protein kinase inhibitors, primarily for cancer treatment, with additional indications being increasingly investigated. One of the challenges is controlling the inhibitors' degree of selectivity, assessed by in vitro profiling against panels of protein kinases. We manually extracted, compiled, and standardized such profiles published in the literature: we collected 356 908 data points corresponding to 482 protein kinases, 2106 inhibitors, and 661 patents. We then analyzed this data set in terms of kinome coverage, reproducibility of results, popularity, and degree of selectivity of both kinases and inhibitors. We used the data set to create robust proteochemometric models capable of predicting kinase activity (the ligand-target space was modeled with an externally validated RMSE of 0.41 ± 0.02 log units and R0² = 0.74 ± 0.03), in order to account for missing or unreliable measurements. The influence of parameters such as the number of measurements, Murcko scaffold frequency, or inhibitor type on prediction quality was assessed. Interpretation of the models made it possible to highlight inhibitor and kinase properties correlated with higher affinities, and an analysis in the context of kinase crystal structures was performed. Overall, the quality of the models allows the accurate prediction of kinase-inhibitor activities and their structural interpretation, thus paving the way for the rational design of compounds with a targeted selectivity profile.

  16. Unprecedently Large-Scale Kinase Inhibitor Set Enabling the Accurate Prediction of Compound–Kinase Activities: A Way toward Selective Promiscuity by Design?

    PubMed Central

    2016-01-01

    Drug discovery programs frequently target members of the human kinome and try to identify small-molecule protein kinase inhibitors, primarily for cancer treatment, with additional indications being increasingly investigated. One of the challenges is controlling the inhibitors' degree of selectivity, assessed by in vitro profiling against panels of protein kinases. We manually extracted, compiled, and standardized such profiles published in the literature: we collected 356 908 data points corresponding to 482 protein kinases, 2106 inhibitors, and 661 patents. We then analyzed this data set in terms of kinome coverage, reproducibility of results, popularity, and degree of selectivity of both kinases and inhibitors. We used the data set to create robust proteochemometric models capable of predicting kinase activity (the ligand–target space was modeled with an externally validated RMSE of 0.41 ± 0.02 log units and R0² = 0.74 ± 0.03), in order to account for missing or unreliable measurements. The influence of parameters such as the number of measurements, Murcko scaffold frequency, or inhibitor type on prediction quality was assessed. Interpretation of the models made it possible to highlight inhibitor and kinase properties correlated with higher affinities, and an analysis in the context of kinase crystal structures was performed. Overall, the quality of the models allows the accurate prediction of kinase-inhibitor activities and their structural interpretation, thus paving the way for the rational design of compounds with a targeted selectivity profile. PMID:27482722

  17. Unprecedently Large-Scale Kinase Inhibitor Set Enabling the Accurate Prediction of Compound-Kinase Activities: A Way toward Selective Promiscuity by Design?

    PubMed

    Christmann-Franck, Serge; van Westen, Gerard J P; Papadatos, George; Beltran Escudie, Fanny; Roberts, Alexander; Overington, John P; Domine, Daniel

    2016-09-26

    Drug discovery programs frequently target members of the human kinome and try to identify small-molecule protein kinase inhibitors, primarily for cancer treatment, with additional indications being increasingly investigated. One of the challenges is controlling the inhibitors' degree of selectivity, assessed by in vitro profiling against panels of protein kinases. We manually extracted, compiled, and standardized such profiles published in the literature: we collected 356 908 data points corresponding to 482 protein kinases, 2106 inhibitors, and 661 patents. We then analyzed this data set in terms of kinome coverage, reproducibility of results, popularity, and degree of selectivity of both kinases and inhibitors. We used the data set to create robust proteochemometric models capable of predicting kinase activity (the ligand-target space was modeled with an externally validated RMSE of 0.41 ± 0.02 log units and R0² = 0.74 ± 0.03), in order to account for missing or unreliable measurements. The influence of parameters such as the number of measurements, Murcko scaffold frequency, or inhibitor type on prediction quality was assessed. Interpretation of the models made it possible to highlight inhibitor and kinase properties correlated with higher affinities, and an analysis in the context of kinase crystal structures was performed. Overall, the quality of the models allows the accurate prediction of kinase-inhibitor activities and their structural interpretation, thus paving the way for the rational design of compounds with a targeted selectivity profile. PMID:27482722

  18. Improving stream temperature model predictions using high-resolution satellite-derived numerical weather forecasts

    NASA Astrophysics Data System (ADS)

    Pike, A.; Danner, E.; Lindley, S.; Melton, F. S.; Nemani, R. R.; Hashimoto, H.; Rajagopalan, B.; Caldwell, R. J.

    2009-12-01

    In the Central Valley of California, stream temperature is a critical indicator of habitat quality for endangered salmonid species and affects re-licensing of major water projects and dam operations worth billions of dollars. However, many water resource-related decisions in regulated rivers rely upon models using a daily-to-monthly mean temperature standard. Furthermore, current water temperature models are limited by the lack of spatially detailed meteorological forecasts. To address this issue, we utilize the coupled TOPS-WRF (Terrestrial Observation and Prediction System - Weather Research and Forecasting) framework—a high-resolution (15min, 1km) assimilation of satellite-derived meteorological observations and numerical weather forecasts— to improve the spatial and temporal resolution of stream temperature predictions. In this study, we developed a high-resolution mechanistic 1-dimensional stream temperature model (sub-hourly time step, sub-kilometer spatial resolution) for the Upper Sacramento River in northern California. The model uses a heat budget approach to calculate the rate of heat transfer to/from the river. Inputs for the heat budget formulation are atmospheric variables provided by the TOPS-WRF model. The hydrodynamics of the river (flow velocity and channel geometry) are characterized using densely-spaced channel cross-sections and flow data. Water temperatures are calculated by considering the hydrologic and thermal characteristics of the river and solving the advection-diffusion equation in a mixed Eulerian-Lagrangian framework. Modeled hindcasted temperatures for a test period (May - November 2008) substantially improve upon the existing daily-to-monthly mean temperature standards. Modeled values closely approximate both the magnitude and the phase of measured water temperatures. Furthermore, our model results reveal important longitudinal patterns in diel temperature variation that are unique to regulated rivers, and may be critical to
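
    The heat-budget and transport formulation described above can be reduced to a compact numerical illustration. The sketch below is an illustrative toy model, not the TOPS-WRF-driven model from the study: it advects water temperature down a 1-D reach with upwind differencing and relaxes it toward an equilibrium temperature set by the surface heat flux; the reach length, velocity, depth, and forcing values are assumptions.

    ```python
    import numpy as np

    # Toy 1-D stream temperature model: advection plus bulk surface heat exchange.
    nx, dx = 200, 500.0          # 200 cells of 500 m (100 km reach)
    u = 0.8                      # flow velocity, m/s
    depth = 2.0                  # mean depth, m
    rho_cw = 4.18e6              # volumetric heat capacity of water, J m^-3 K^-1
    k_exch = 30.0                # bulk surface exchange coefficient, W m^-2 K^-1
    T_eq = 18.0                  # equilibrium temperature from the heat budget, deg C

    dt = 0.5 * dx / u            # satisfies the advective CFL condition
    T = np.full(nx, 12.0)        # initial water temperature, deg C
    T_upstream = 11.0            # reservoir release temperature at the inflow

    for _ in range(5000):
        adv = np.empty(nx)
        adv[0] = -u * (T[0] - T_upstream) / dx          # upstream boundary
        adv[1:] = -u * (T[1:] - T[:-1]) / dx            # first-order upwind
        surface = k_exch * (T_eq - T) / (rho_cw * depth)
        T += dt * (adv + surface)

    print("Downstream temperature after spin-up: %.2f deg C" % T[-1])
    ```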

  19. The assimilation of hyperspectral satellite radiances in Global Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Jung, James Alan

    Hyperspectral infrared radiance data present opportunities for significant improvements in data assimilation and Numerical Weather Prediction (NWP). The increase in spectral resolution available from the Atmospheric Infrared Sounder (AIRS) sensor, for example, will make it possible to improve the accuracy of temperature and moisture fields. Improved accuracy of the NWP analyses and forecasts should result. In this thesis we incorporate these hyperspectral data, using new assimilation methods, into the National Centers for Environmental Prediction's (NCEP) operational Global Data Assimilation System/Global Forecast System (GDAS/GFS) and investigate their impact on the weather analysis and forecasts. The spatial and spectral resolution of AIRS data used by NWP centers was initially based on theoretical calculations. Synthetic data were used to determine channel selection and spatial density for real time data assimilation. Several problems were previously not fully addressed. These areas include: cloud contamination, surface related issues, dust, and temperature inversions. In this study, several improvements were made to the methods used for assimilation. Spatial resolution was increased to examine every field of view, instead of one in nine or eighteen fields of view. Improved selection criteria were developed to find the best profile for assimilation from a larger sample. New cloud and inversion tests were used to help identify the best profiles to be assimilated in the analysis. The spectral resolution was also increased from 152 to 251 channels. The channels added were mainly near the surface, in the water vapor absorption band, and in the shortwave region. The GFS was run at or near operational resolution and contained all observations available to the operational system. For each experiment the operational version of the GFS was used during that time. The use of full spatial and enhanced spectral resolution data resulted in the first demonstration of

  20. A hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments.

    NASA Astrophysics Data System (ADS)

    Shamim, M. A.; Bray, M.; Ishak, A. M.; Remesan, R.; Han, D.

    2009-09-01

    The importance of solar radiation at the earth's surface is reflected in its wide range of applications in the fields of meteorology, agricultural sciences, engineering, hydrology, crop water requirements, climatic change and energy assessment. It is quite random in nature, as it has to go through different processes of assimilation and dispersion on its way to earth. Compared to other meteorological parameters, solar radiation is measured quite infrequently; for example, the worldwide ratio of stations collecting solar radiation to those collecting temperature is 1:500 (Badescu, 2008). Researchers therefore have to rely on indirect estimation techniques, which include nonlinear models, artificial intelligence (e.g. neural networks), remote sensing and numerical weather prediction (NWP). This study proposes a hybrid numerical prediction scheme for solar radiation estimation in un-gauged catchments. It uses the PSU/NCAR Mesoscale Modelling system (MM5) (Grell et al., 1995) to parameterise the cloud effect on extraterrestrial radiation by dividing the atmosphere into four layers of very high (6-12 km), high (3-6 km), medium (1.5-3 km) and low (0-1.5 km) altitude above the earth. It is believed that various cloud forms exist within each of these layers. An hourly time series of upper-air pressure and relative humidity data sets corresponding to all of these layers is determined for the Brue catchment, southwest UK, using MM5. The cloud index (CI) of each layer was then determined using (Yang and Koike, 2002): ci = 1/(pbi - pti) * integral from pti to pbi of max[0.0, (Rh - Rhcri)/(1 - Rhcri)] dp, where pbi and pti are the air pressures at the bottom and top of each layer and Rhcri is the critical value of relative humidity at which a certain cloud type forms. Output from a global clear-sky solar radiation model (MRM v-5) (Kambezidis and Psiloglu, 2008) is used along with meteorological datasets of temperature and precipitation and astronomical information. The analysis is aided by the
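
    The layer cloud index above is straightforward to evaluate once pressure and relative humidity profiles are available. Below is a minimal sketch (not the authors' code) that approximates the integral with a trapezoidal rule over the model levels of one layer; the profile values and the critical relative humidity are illustrative assumptions.

    ```python
    import numpy as np

    def cloud_index(pressure_hpa, rel_humidity, rh_crit=0.8):
        """Layer cloud index ci = 1/(pb - pt) * integral of max(0, (Rh - Rhcri)/(1 - Rhcri)) dp,
        evaluated with the trapezoidal rule over the layer's model levels."""
        p = np.asarray(pressure_hpa, dtype=float)
        rh = np.asarray(rel_humidity, dtype=float)
        integrand = np.maximum(0.0, (rh - rh_crit) / (1.0 - rh_crit))
        return abs(np.trapz(integrand, p)) / abs(p[-1] - p[0])

    # Illustrative low-layer profile (pressure in hPa, fractional relative humidity).
    p_levels = [1000.0, 950.0, 900.0, 850.0]
    rh_levels = [0.70, 0.85, 0.95, 0.60]
    print(round(cloud_index(p_levels, rh_levels), 3))
    ```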

  1. Precipitation forecasting by a mesoscale numerical weather prediction (NWP) model: eight years of experience

    NASA Astrophysics Data System (ADS)

    Kaufmann, P.; Schubiger, F.; Binder, P.

    The Swiss Model, a hydrostatic numerical weather prediction model, has been used at MeteoSwiss for operational forecasting at the meso-beta scale (mesh-size 14 km) from 1994 until 2001. The quality of the quantitative precipitation forecasts is evaluated for the eight years of operation. The seasonal precipitation over Switzerland and its dependence on altitude is examined for both model forecasts and observations using the Swiss rain gauge network sampling daily precipitation at over 400 stations for verification. The mean diurnal cycle of precipitation is verified against the automatic surface observation network on the basis of hourly recordings. In winter, there is no diurnal forcing of precipitation and the modelled precipitation agrees with the observed values. In summer, the convection in the model starts too early, overestimates the amount of precipitation and is too short-lived. Skill scores calculated for six-hourly precipitation sums show a constant level of performance over the model life cycle. Dry and wet seasons influence the model performance more than the model changes during its operational period. The comprehensive verification of the model precipitation is complemented by the discussion of a number of heavy rain events investigated during the RAPHAEL project. The sensitivities to a number of model components are illustrated, namely the driving boundary fields, the internal partitioning of parameterised and grid-scale precipitation, the advection scheme and the vertical resolution. While a small impact of the advection scheme had to be expected, the increasing overprediction of rain with increasing vertical resolution in the RAPHAEL case studies was larger than previously thought. The frequent update of the boundary conditions enhances the positioning of the rain in the model.

  2. Evaluation of numerical weather predictions performed in the context of the project DAPHNE

    NASA Astrophysics Data System (ADS)

    Tegoulias, Ioannis; Pytharoulis, Ioannis; Bampzelis, Dimitris; Karacostas, Theodore

    2014-05-01

    The region of Thessaly in central Greece is one of the main areas of agricultural production in Greece. Severe weather phenomena affect the agricultural production in this region with adverse effects for farmers and the national economy. For this reason the project DAPHNE aims at tackling the problem of drought by means of weather modification through the development of the necessary tools to support the application of a rainfall enhancement program. In the present study the numerical weather prediction system WRF-ARW is used, in order to assess its ability to represent extreme weather phenomena in the region of Thessaly. WRF is integrated in three domains covering Europe, Eastern Mediterranean and Central-Northern Greece (Thessaly and a large part of Macedonia) using telescoping nesting with grid spacing of 15km, 5km and 1.667km, respectively. The cases examined span throughout the transitional and warm period (April to September) of the years 2008 to 2013, including days with thunderstorm activity. Model results are evaluated against all available surface observations and radar products, taking into account the spatial characteristics and intensity of the storms. Preliminary results indicate a good level of agreement between the simulated and observed fields as far as the standard parameters (such as temperature, humidity and precipitation) are concerned. Moreover, the model generally exhibits a potential to represent the occurrence of the convective activity, but not its exact spatiotemporal characteristics. Acknowledgements This research work has been co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013)

  3. Numerous Numerals.

    ERIC Educational Resources Information Center

    Henle, James M.

    This pamphlet consists of 17 brief chapters, each containing a discussion of a numeration system and a set of problems on the use of that system. The numeration systems used include Egyptian fractions, ordinary continued fractions and variants of that method, and systems using positive and negative bases. The book is informal and addressed to…

  4. Investigation into a displacement bias in numerical weather prediction models' forecasts of mesoscale convective systems

    NASA Astrophysics Data System (ADS)

    Yost, Charles

    Although often hard to correctly forecast, mesoscale convective systems (MCSs) are responsible for a majority of warm-season, localized extreme rain events. This study investigates displacement errors often observed by forecasters and researchers in the Global Forecast System (GFS) and the North American Mesoscale (NAM) models, in addition to the European Centre for Medium Range Weather Forecasts (ECMWF) and the 4-km convection allowing NSSL-WRF models. Using archived radar data and Stage IV precipitation data from April to August of 2009 to 2011, MCSs were recorded and sorted into unique six-hour intervals. The locations of these MCSs were compared to the associated predicted precipitation field in all models using the Method for Object-Based Diagnostic Evaluation (MODE) tool, produced by the Developmental Testbed Center and verified through manual analysis. A northward bias exists in the location of the forecasts in all lead times of the GFS, NAM, and ECMWF models. The MODE tool found that 74%, 68%, and 65% of the forecasts were too far to the north of the observed rainfall in the GFS, NAM and ECMWF models respectively. The higher-resolution NSSL-WRF model produced a near neutral location forecast error with 52% of the cases too far to the south. The GFS model consistently moved the MCSs too quickly with 65% of the cases located to the east of the observed MCS. The mean forecast displacement error from the GFS and NAM were on average 266 km and 249 km, respectively, while the ECMWF and NSSL-WRF produced a much lower average of 179 km and 158 km. A case study of the Dubuque, IA MCS on 28 July 2011 was analyzed to identify the root cause of this bias. This MCS shattered several rainfall records and required over 50 people to be rescued from mobile home parks from around the area. This devastating MCS, which was a classic Training Line/Adjoining Stratiform archetype, had numerous northward-biased forecasts from all models, which are examined here. As common with

  5. Absolute Measurements of Macrophage Migration Inhibitory Factor and Interleukin-1-β mRNA Levels Accurately Predict Treatment Response in Depressed Patients

    PubMed Central

    Ferrari, Clarissa; Uher, Rudolf; Bocchio-Chiavetto, Luisella; Riva, Marco Andrea; Pariante, Carmine M.

    2016-01-01

    Background: Increased levels of inflammation have been associated with a poorer response to antidepressants in several clinical samples, but these findings have been limited by low reproducibility of biomarker assays across laboratories, difficulty in predicting response probability on an individual basis, and unclear molecular mechanisms. Methods: Here we measured absolute mRNA values (a reliable quantitation of number of molecules) of Macrophage Migration Inhibitory Factor and interleukin-1β in a previously published sample from a randomized controlled trial comparing escitalopram vs nortriptyline (GENDEP) as well as in an independent, naturalistic replication sample. We then used linear discriminant analysis to calculate mRNA value cutoffs that best discriminated between responders and nonresponders after 12 weeks of antidepressants. As Macrophage Migration Inhibitory Factor and interleukin-1β might be involved in different pathways, we constructed a protein-protein interaction network using the Search Tool for the Retrieval of Interacting Genes/Proteins. Results: We identified cutoff values for the absolute mRNA measures that accurately predicted response probability on an individual basis, with positive predictive values and specificity for nonresponders of 100% in both samples (negative predictive value=82% to 85%, sensitivity=52% to 61%). Using network analysis, we identified different clusters of targets for these 2 cytokines, with Macrophage Migration Inhibitory Factor interacting predominantly with pathways involved in neurogenesis, neuroplasticity, and cell proliferation, and interleukin-1β interacting predominantly with pathways involved in the inflammasome complex, oxidative stress, and neurodegeneration. Conclusion: We believe that these data provide a clinically suitable approach to the personalization of antidepressant therapy: patients who have absolute mRNA values above the suggested cutoffs could be directed toward earlier access to more

  6. Dose Addition Models Based on Biologically Relevant Reductions in Fetal Testosterone Accurately Predict Postnatal Reproductive Tract Alterations by a Phthalate Mixture in Rats.

    PubMed

    Howdeshell, Kembra L; Rider, Cynthia V; Wilson, Vickie S; Furr, Johnathan R; Lambright, Christy R; Gray, L Earl

    2015-12-01

    Challenges in cumulative risk assessment of anti-androgenic phthalate mixtures include a lack of data on all the individual phthalates and difficulty determining the biological relevance of reduction in fetal testosterone (T) on postnatal development. The objectives of the current study were twofold: (1) to test whether a mixture model of dose addition based on the fetal T production data of individual phthalates would predict the effects of a five-phthalate mixture on androgen-sensitive postnatal male reproductive tract development, and (2) to determine the biological relevance of the reductions in fetal T to induce abnormal postnatal reproductive tract development using data from the mixture study. We administered a dose range of the mixture (60, 40, 20, 10, and 5% of the top dose used in the previous fetal T production study consisting of 300 mg/kg per chemical of benzyl butyl (BBP), di(n)butyl (DBP), diethyl hexyl phthalate (DEHP), di-isobutyl phthalate (DiBP), and 100 mg dipentyl (DPP) phthalate/kg; the individual phthalates were present in equipotent doses based on their ability to reduce fetal T production) via gavage to Sprague Dawley rat dams from gestational day (GD) 8 to postnatal day 3. We compared observed mixture responses to predictions of dose addition based on the previously published potencies of the individual phthalates to reduce fetal T production relative to a reference chemical and published postnatal data for the reference chemical (called DAref). In addition, we predicted DA (called DAall) and response addition (RA) based on logistic regression analysis of all 5 individual phthalates when complete data were available. DAref and DAall accurately predicted the observed mixture effect for 11 of 14 endpoints. Furthermore, reproductive tract malformations were seen in 17-100% of F1 males when fetal T production was reduced by about 25-72%, respectively. PMID:26350170

  7. Development of the one-stop water resources operational system using data of a numerical weather prediction model

    NASA Astrophysics Data System (ADS)

    Ryoo, K.; Hwang, J.; Suh, A. S.

    2015-12-01

    This research constructs a one-stop water resources operational system that connects the short-term numerical weather prediction models (LDAPS, UM3.0, RDAPS) of the Korea Meteorological Administration (KMA) with the runoff models (COSFIM, K-DRUM) of the Korea Water Resources Corporation (K-water) to predict runoff discharge in ungauged basins for which flood and evacuation warnings must be provided. The K-DRUM model runs online: it receives the weather forecast data, is executed automatically, and delivers forecast and warning information to flood managers through a forecast monitoring system. The COSFIM model, which requires the experience of a flood manager and real-time condition data, runs offline: the manager operates COSFIM with the weather forecast data and connects to the monitoring system to manage water resources. To evaluate the predictions, we used the predicted and observed cumulative values, the ratio of predicted to observed cumulative values (%), the predicted and observed mean values, the deviation and deviation ratio between the predicted and observed means, the standard deviations of the predicted and observed values, and the ratio of those standard deviations (%). In addition, for quantitative reliability assessment we used indices based on calculated and observed model values: the dimensionless Nash-Sutcliffe Efficiency (NSE), the Percent Bias (PBIAS), and the RMSE-observations standard deviation ratio (RSR), i.e. the ratio of the root mean square error to the standard deviation of the observations. The analysis indicates that, in order to prevent damage from hydrological disasters that cannot be identified from rain gauge observations alone, the use of numerical weather predictions can be a significant factor in flood control operations for ungauged basins.
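
    The three reliability indices named above have standard definitions; a minimal sketch of how they could be computed from paired simulated and observed discharge series is given below (illustrative data, not from the study; note that sign conventions for PBIAS vary between references).

    ```python
    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe Efficiency: 1 is perfect, <= 0 is no better than the mean."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(sim, obs):
        """Percent bias between observed and simulated totals (%)."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    def rsr(sim, obs):
        """RMSE divided by the standard deviation of the observations."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

    # Illustrative hourly discharge series (m^3/s).
    observed  = [12.0, 18.5, 40.2, 75.6, 60.3, 35.1, 20.4]
    simulated = [11.0, 20.1, 36.8, 70.2, 65.0, 38.0, 19.5]
    print(f"NSE={nse(simulated, observed):.2f}  PBIAS={pbias(simulated, observed):.1f}%  "
          f"RSR={rsr(simulated, observed):.2f}")
    ```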

  8. Discovery of a general method of solving the Schrödinger and dirac equations that opens a way to accurately predictive quantum chemistry.

    PubMed

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement

  9. Numerical Simulations of Optical Turbulence Using an Advanced Atmospheric Prediction Model: Implications for Adaptive Optics Design

    NASA Astrophysics Data System (ADS)

    Alliss, R.

    2014-09-01

    Optical turbulence (OT) acts to distort light in the atmosphere, degrading imagery from astronomical telescopes and reducing the data quality of optical imaging and communication links. Some of the degradation due to turbulence can be corrected by adaptive optics. However, the severity of optical turbulence, and thus the amount of correction required, is largely dependent upon the turbulence at the location of interest. Therefore, it is vital to understand the climatology of optical turbulence at such locations. In many cases, it is impractical and expensive to set up instrumentation to characterize the climatology of OT, so numerical simulations become a less expensive and convenient alternative. The strength of OT is characterized by the refractive index structure function Cn2, which in turn is used to calculate atmospheric seeing parameters. While attempts have been made to characterize Cn2 using empirical models, Cn2 can be calculated more directly from Numerical Weather Prediction (NWP) simulations using pressure, temperature, thermal stability, vertical wind shear, turbulent Prandtl number, and turbulence kinetic energy (TKE). In this work we use the Weather Research and Forecast (WRF) NWP model to generate Cn2 climatologies in the planetary boundary layer and free atmosphere, allowing for both point-to-point and ground-to-space seeing estimates of the Fried coherence length (r0) and other seeing parameters. Simulations are performed on a multi-node Linux cluster with Intel chip architecture. The WRF model is configured to run at 1 km horizontal resolution, centered on the Mauna Loa Observatory (MLO) of the Big Island. The vertical resolution varies from 25 meters in the boundary layer to 500 meters in the stratosphere. The model top is 20 km. The Mellor-Yamada-Janjic (MYJ) TKE scheme has been modified to diagnose the turbulent Prandtl number as a function of the Richardson number, following observations by Kondo and others. This modification
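
    As a small illustration of how a simulated Cn2 profile translates into the seeing parameters mentioned above, the sketch below integrates a Cn2 profile over height to obtain the Fried coherence length r0 for a plane wave, using the standard relation r0 = [0.423 k^2 sec(zeta) * integral of Cn2(z) dz]^(-3/5) with k = 2*pi/lambda; the profile values are illustrative, not WRF output.

    ```python
    import numpy as np

    def fried_parameter(cn2, heights_m, wavelength_m=0.5e-6, zenith_rad=0.0):
        """Fried coherence length r0 (m) from a Cn2 profile (m^-2/3) on height levels."""
        k = 2.0 * np.pi / wavelength_m
        integral = np.trapz(np.asarray(cn2, float), np.asarray(heights_m, float))
        return (0.423 * k ** 2 * integral / np.cos(zenith_rad)) ** (-3.0 / 5.0)

    # Illustrative profile: strong turbulence near the surface, weak aloft.
    z = np.array([0.0, 100.0, 500.0, 2000.0, 10000.0, 20000.0])
    cn2 = np.array([1e-14, 5e-15, 1e-16, 5e-17, 1e-17, 1e-18])
    print("r0 = %.1f cm" % (100.0 * fried_parameter(cn2, z)))
    ```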

  10. The Vertical Error Characteristics of GOES-derived Winds: Description and Impact on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the errors of the GOES winds range from approximately 3 to 10 m/s and generally increase with height. However, if taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that are dependent on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A

  11. Tuning of Strouhal number for high propulsive efficiency accurately predicts how wingbeat frequency and stroke amplitude relate and scale with size and flight speed in birds.

    PubMed Central

    Nudds, Robert L.; Taylor, Graham K.; Thomas, Adrian L. R.

    2004-01-01

    The wing kinematics of birds vary systematically with body size, but we still, after several decades of research, lack a clear mechanistic understanding of the aerodynamic selection pressures that shape them. Swimming and flying animals have recently been shown to cruise at Strouhal numbers (St) corresponding to a regime of vortex growth and shedding in which the propulsive efficiency of flapping foils peaks (St ≈ fA/U, where f is wingbeat frequency, U is cruising speed, and A ≈ b sin(theta/2) is stroke amplitude, in which b is wingspan and theta is stroke angle). We show that St is a simple and accurate predictor of wingbeat frequency in birds. The Strouhal numbers of cruising birds have converged on the lower end of the range 0.2 < St < 0.4 associated with high propulsive efficiency. Stroke angle scales as theta ≈ 67 b^-0.24, so wingbeat frequency can be predicted as f ≈ St U / (b sin(33.5 b^-0.24)), with St = 0.21 and St = 0.25 for direct and intermittent fliers, respectively. This simple aerodynamic model predicts wingbeat frequency better than any other relationship proposed to date, explaining 90% of the observed variance in a sample of 60 bird species. Avian wing kinematics therefore appear to have been tuned by natural selection for high aerodynamic efficiency: physical and physiological constraints upon wing kinematics must be reconsidered in this light. PMID:15451698
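
    The predictive relationship quoted above can be applied directly. The sketch below is an illustration of the published scaling, with the stroke-angle fit assumed to be in degrees; the example wingspan and cruising speed are made-up values for a direct flier.

    ```python
    import math

    def wingbeat_frequency(wingspan_m, speed_m_s, strouhal=0.21):
        """Predict wingbeat frequency f = St * U / (b * sin(theta/2)),
        with stroke angle theta ~ 67 * b^-0.24 degrees, so theta/2 = 33.5 * b^-0.24."""
        half_angle_deg = 33.5 * wingspan_m ** -0.24
        amplitude = wingspan_m * math.sin(math.radians(half_angle_deg))
        return strouhal * speed_m_s / amplitude

    # Illustrative direct flier: 0.3 m wingspan cruising at 10 m/s.
    print(round(wingbeat_frequency(0.3, 10.0), 1), "Hz")
    ```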

  12. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions.

    PubMed

    Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten

    2016-09-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. PMID:27372244

  13. Application of Suomi-NPP Green Vegetation Fraction and NUCAPS for Improving Regional Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Berndt, Emily B.; Srikishen, Jayanthi; Zavodsky, Bradley T.

    2014-01-01

    The NASA SPoRT Center is working to incorporate Suomi-NPP products into its research and transition activities to improve regional numerical weather prediction (NWP). Specifically, SPoRT seeks to utilize two data products from NOAA/NESDIS: (1) daily global VIIRS green vegetation fraction (GVF), and (2) NOAA Unique CrIS and ATMS Processing System (NUCAPS) temperature and moisture retrieved profiles. The goal of (1) is to improve the representation of vegetation in the Noah land surface model (LSM) over existing climatological GVF datasets in order to improve the land-atmosphere energy exchanges in NWP models and produce better temperature, moisture, and precipitation forecasts. The goal of (2) is to assimilate NUCAPS retrieved profiles into the Gridpoint Statistical Interpolation (GSI) data assimilation system to assess the impact on a summer pre-frontal convection case. Most regional NWP applications make use of a monthly GVF climatology for use in the Noah LSM within the Weather Research and Forecasting (WRF) model. The GVF partitions incoming energy into direct surface heating/evaporation over bare soil versus evapotranspiration processes over vegetated surfaces. Misrepresentations of the fractional coverage of vegetation during anomalous weather/climate regimes (e.g., early/late bloom or freeze; drought) can lead to poor NWP model results when land-atmosphere feedback is important. SPoRT has been producing a daily MODIS GVF product based on the University of Wisconsin Direct Broadcast swaths of Normalized Difference Vegetation Index (NDVI). While positive impacts have been demonstrated in the WRF model for some cases, the reflectances composing these NDVI values are not corrected for atmospheric aerosols or satellite view angle, resulting in temporal noisiness at certain locations (especially heavy vegetation). The method behind the NESDIS VIIRS GVF is expected to alleviate the issues seen in the MODIS GVF real-time product, thereby offering a higher-quality dataset for
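
    Daily GVF products of this kind are commonly derived from NDVI with a linear rescaling between bare-soil and dense-vegetation endpoints (the Gutman-Ignatov approach). The sketch below assumes that style of conversion; the endpoint NDVI values are illustrative and are not the NESDIS or SPoRT settings.

```python
import numpy as np

def green_vegetation_fraction(ndvi, ndvi_soil=0.05, ndvi_veg=0.90):
    """Linear NDVI-to-GVF rescaling (Gutman-Ignatov style), clipped to [0, 1].
    The endpoint values here are illustrative, not the operational settings."""
    gvf = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(gvf, 0.0, 1.0)

ndvi_swath = np.array([0.02, 0.30, 0.55, 0.85])   # made-up swath samples
print(green_vegetation_fraction(ndvi_swath))
```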

  14. An online trajectory module (version 1.0) for the nonhydrostatic numerical weather prediction model COSMO

    NASA Astrophysics Data System (ADS)

    Miltenberger, A. K.; Pfahl, S.; Wernli, H.

    2013-11-01

    A module to calculate online trajectories has been implemented into the nonhydrostatic limited-area weather prediction and climate model COSMO. Whereas offline trajectories are calculated with wind fields from model output, which is typically available every one to six hours, online trajectories use the simulated resolved wind field at every model time step (typically less than a minute) to solve the trajectory equation. As a consequence, online trajectories much better capture the short-term temporal fluctuations of the wind field, which is particularly important for mesoscale flows near topography and convective clouds, and they do not suffer from temporal interpolation errors between model output times. The numerical implementation of online trajectories in the COSMO-model is based upon an established offline trajectory tool and takes full account of the horizontal domain decomposition that is used for parallelization of the COSMO-model. Although a perfect workload balance cannot be achieved for the trajectory module (due to the fact that trajectory positions are not necessarily equally distributed over the model domain), the additional computational costs are found to be fairly small for the high-resolution simulations described in this paper. The computational costs may, however, vary strongly depending on the number of trajectories and trace variables. Various options have been implemented to initialize online trajectories at different locations and times during the model simulation. As a first application of the new COSMO-model module, an Alpine north foehn event in summer 1987 has been simulated with horizontal resolutions of 2.2, 7 and 14 km. It is shown that low-tropospheric trajectories calculated offline with one- to six-hourly wind fields can significantly deviate from trajectories calculated online. Deviations increase with decreasing model grid spacing and are particularly large in regions of deep convection and strong orographic flow distortion. On
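
    The trajectory equation being solved each model time step is simply dx/dt = u(x, t). The sketch below shows one common way to advance a trajectory with the wind sampled at every time step, using an iterative (Petterssen-type) midpoint scheme; it is a generic illustration rather than the actual COSMO module code, and the solid-body-rotation wind field is made up for the example.

```python
import numpy as np

def advance_trajectory(x, t, dt, wind, n_iter=3):
    """One step of dx/dt = u(x, t) with an iterative midpoint (Petterssen-type) scheme.
    `wind(x, t)` stands in for the resolved model wind available at every time step."""
    u0 = wind(x, t)
    x_new = x + dt * u0                                   # first guess: forward Euler
    for _ in range(n_iter):                               # iterate towards the midpoint value
        x_new = x + 0.5 * dt * (u0 + wind(x_new, t + dt))
    return x_new

# Illustrative solid-body-rotation wind field (not a COSMO field)
wind = lambda x, t: np.array([-x[1], x[0]]) * 1e-4
x, t, dt = np.array([1.0e5, 0.0]), 0.0, 30.0              # 30 s "model time step"
for _ in range(10):
    x = advance_trajectory(x, t, dt, wind)
    t += dt
print(x)
```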

  15. A Study of the Influence of Numerical Diffusion on Gas-Solid Flow Predictions in Fluidized Beds

    NASA Astrophysics Data System (ADS)

    Ghandriz, Ronak; Sheikhi, Reza

    2015-11-01

    In this work, an investigation is made of the influence of numerical diffusion on the accuracy of gas-solid flow predictions in fluidized beds. This is an important issue particularly in bubbling fluidized beds since numerical error greatly affects the dynamics of bubbles and their associated mixing process. A bed of coal (classified as Geldart A) is considered which becomes fluidized as the velocity of the nitrogen stream into the reactor is gradually increased. The fluidization process is simulated using various numerical schemes as well as grid resolutions. Simulations involve an Eulerian-Eulerian two-phase flow modeling approach and results are compared with experimental data. It is shown that higher-order schemes equipped with a flux limiter give favorable predictions of bubble and particle dynamics and hence, the mixing process within the reactor. The excessive numerical diffusion associated with lower-order schemes results in unrealistic prediction of bubble shapes and bed height. A comparison is also made of the computational efficiency of the various schemes. It is shown that the Monotonized Central scheme with downwind factor results in the shortest simulation time because of its efficient parallelization on distributed memory platforms.
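
    The Monotonized Central (MC) limiter mentioned above is a standard flux limiter. The sketch below shows its usual textbook form and how it is applied when reconstructing a face value for an advected quantity; it is not the specific downwind-factor variant used in the study's solver.

```python
import numpy as np

def mc_limiter(r):
    """Monotonized-central (MC) flux limiter: phi(r) = max(0, min(2r, (1+r)/2, 2))."""
    return np.maximum(0.0, np.minimum(np.minimum(2.0 * r, 0.5 * (1.0 + r)), 2.0))

def limited_face_value(q_im1, q_i, q_ip1):
    """Second-order face value q_{i+1/2} for positive flow, limited with the MC limiter."""
    eps = 1e-30                                 # avoid division by zero on flat regions
    r = (q_i - q_im1) / (q_ip1 - q_i + eps)     # ratio of consecutive slopes
    return q_i + 0.5 * mc_limiter(r) * (q_ip1 - q_i)

q = np.array([0.0, 0.0, 1.0, 1.0])              # a sharp front in the advected quantity
print(limited_face_value(q[:-2], q[1:-1], q[2:]))   # face values stay bounded by neighbours
```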

  16. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces.

    PubMed

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-12-01

    Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number.

  17. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces.

    PubMed

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-12-01

    Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number. PMID:26183389

  18. The Model for End-stage Liver Disease accurately predicts 90-day liver transplant wait-list mortality in Atlantic Canada

    PubMed Central

    Renfrew, Paul Douglas; Quan, Hude; Doig, Christopher James; Dixon, Elijah; Molinari, Michele

    2011-01-01

    OBJECTIVE: To determine the generalizability of the predictions for 90-day mortality generated by Model for End-stage Liver Disease (MELD) and the serum sodium augmented MELD (MELDNa) to Atlantic Canadian adults with end-stage liver disease awaiting liver transplantation (LT). METHODS: The predictive accuracy of the MELD and the MELDNa was evaluated by measurement of the discrimination and calibration of the respective models’ estimates for the occurrence of 90-day mortality in a consecutive cohort of LT candidates accrued over a five-year period. Accuracy of discrimination was measured by the area under the ROC curves. Calibration accuracy was evaluated by comparing the observed and model-estimated incidences of 90-day wait-list failure for the total cohort and within quantiles of risk. RESULTS: The area under the ROC curve for the MELD was 0.887 (95% CI 0.705 to 0.978) – consistent with very good accuracy of discrimination. The area under the ROC curve for the MELDNa was 0.848 (95% CI 0.681 to 0.965). The observed incidence of 90-day wait-list mortality in the validation cohort was 7.9%, which was not significantly different from the MELD estimate of 6.6% (95% CI 4.9% to 8.4%; P=0.177) or the MELDNa estimate of 5.8% (95% CI 3.5% to 8.0%; P=0.065). Global goodness-of-fit testing found no evidence of significant lack of fit for either model (Hosmer-Lemeshow χ2 [df=3] for MELD 2.941, P=0.401; for MELDNa 2.895, P=0.414). CONCLUSION: Both the MELD and the MELDNa accurately predicted the occurrence of 90-day wait-list mortality in the study cohort and, therefore, are generalizable to Atlantic Canadians with end-stage liver disease awaiting LT. PMID:21876856
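
    For context, the MELD score is computed from serum bilirubin, INR and creatinine, and MELDNa adds a serum-sodium correction. The sketch below implements the classic MELD formula and one widely cited sodium-augmented variant; the MELDNa variant and the lab values in the example are assumptions for illustration and may differ from the exact formulation and cohort used in this study.

```python
import math

def meld(bilirubin, inr, creatinine):
    """Classic MELD; labs in mg/dL, values below 1 floored at 1 and creatinine
    capped at 4 mg/dL per the usual convention; score capped at 40."""
    bili = max(bilirubin, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine, 1.0), 4.0)
    score = 3.78 * math.log(bili) + 11.2 * math.log(inr) + 9.57 * math.log(crea) + 6.43
    return min(round(score), 40)

def meld_na(meld_score, sodium):
    """One widely cited sodium-augmented variant (may differ from the exact MELDNa
    formulation used in the study); sodium bounded to 125-137 mmol/L."""
    na = min(max(sodium, 125.0), 137.0)
    return round(meld_score + 1.32 * (137 - na) - 0.033 * meld_score * (137 - na))

print(meld(bilirubin=3.2, inr=1.8, creatinine=1.1))     # illustrative labs only
print(meld_na(meld(3.2, 1.8, 1.1), sodium=128))
```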

  19. The VACS Index Accurately Predicts Mortality and Treatment Response among Multi-Drug Resistant HIV Infected Patients Participating in the Options in Management with Antiretrovirals (OPTIMA) Study

    PubMed Central

    Brown, Sheldon T.; Tate, Janet P.; Kyriakides, Tassos C.; Kirkwood, Katherine A.; Holodniy, Mark; Goulet, Joseph L.; Angus, Brian J.; Cameron, D. William; Justice, Amy C.

    2014-01-01

    Objectives The VACS Index is highly predictive of all-cause mortality among HIV-infected individuals within the first few years of combination antiretroviral therapy (cART). However, its accuracy among highly treatment-experienced individuals and its responsiveness to treatment interventions have yet to be evaluated. We compared the accuracy and responsiveness of the VACS Index with a Restricted Index of age and traditional HIV biomarkers among patients enrolled in the OPTIMA study. Methods Using data from 324/339 (96%) patients in OPTIMA, we evaluated associations between indices and mortality using Kaplan-Meier estimates, proportional hazards models, Harrell’s C-statistic and net reclassification improvement (NRI). We also determined the association between study interventions and risk scores over time, and change in score and mortality. Results Both the Restricted Index (c = 0.70) and VACS Index (c = 0.74) predicted mortality from baseline, but discrimination was improved with the VACS Index (NRI = 23%). Change in score from baseline to 48 weeks was more strongly associated with survival for the VACS Index than the Restricted Index with respective hazard ratios of 0.26 (95% CI 0.14–0.49) and 0.39 (95% CI 0.22–0.70) among the 25% most improved scores, and 2.08 (95% CI 1.27–3.38) and 1.51 (95% CI 0.90–2.53) for the 25% least improved scores. Conclusions The VACS Index predicts all-cause mortality more accurately among multi-drug resistant, treatment-experienced individuals and is more responsive to changes in risk associated with treatment intervention than an index restricted to age and HIV biomarkers. The VACS Index holds promise as an intermediate outcome for intervention research. PMID:24667813
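
    Harrell's C-statistic summarises discrimination as the proportion of usable patient pairs in which the higher risk score belongs to the patient who fails earlier. The sketch below is a simplified illustration of that calculation; it ignores some censoring subtleties handled by survival-analysis packages, and the data in the example are made up.

```python
from itertools import combinations

def harrells_c(risk_scores, times, events):
    """Simplified Harrell's C: fraction of usable pairs in which the subject who
    fails earlier has the higher risk score (ties counted as 1/2)."""
    concordant, usable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:        # order so subject i has the earlier observed time
            i, j = j, i
        if not events[i]:              # earlier time censored -> pair not usable
            continue
        usable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1.0
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5
    return concordant / usable if usable else float("nan")

# Illustrative data only: risk scores, follow-up times (weeks), event indicators
print(harrells_c([2.1, 0.5, 1.7, 0.9], [12, 60, 20, 45], [1, 0, 1, 1]))
```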

  20. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable
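
    The kind of artifact described in point (1) can be reproduced with a toy model. The sketch below integrates a single linear-reservoir bucket, dS/dt = P - kS, with a fixed-step explicit Euler scheme and then with a much finer step as a stand-in for a robust scheme; the parameters are illustrative and the model is not one of the six hydrological models used in the paper.

```python
import numpy as np

def simulate(storage0, precip, k, dt, substeps=1):
    """Linear-reservoir toy model dS/dt = P - k*S with fixed-step explicit Euler.
    `substeps` refines the step, a crude stand-in for a more robust scheme."""
    S, q_out = storage0, []
    h = dt / substeps
    for P in precip:
        for _ in range(substeps):
            S = S + h * (P - k * S)          # explicit Euler update
        q_out.append(k * S)                  # outflow at the end of the step
    return np.array(q_out)

precip = np.zeros(20)
precip[2] = 50.0                             # one storm pulse (mm/day)
k, dt = 1.8, 1.0                             # fast reservoir: k*dt close to the stability limit
coarse = simulate(10.0, precip, k, dt, substeps=1)    # oscillates and goes negative
fine = simulate(10.0, precip, k, dt, substeps=20)     # close to the smooth exact decay
print(np.round(coarse[:6], 2))
print(np.round(fine[:6], 2))
```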

  1. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
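
    A minimal version of the workflow described above, assuming synthetic predictor distributions rather than the actual ARPS-derived ones: fit a logistic model of contrail occurrence to temperature, relative humidity and vertical velocity, threshold the predicted probabilities, and score the resulting yes/no forecasts with the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD = hit rate - false alarm rate).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic predictors (illustrative distributions, not the ARPS-derived ones):
# upper-tropospheric temperature (K), relative humidity (%), vertical velocity (Pa/s)
n = 5000
T = rng.normal(220.0, 5.0, n)
RH = np.clip(rng.normal(60.0, 25.0, n), 0, 130)
w = rng.normal(0.0, 0.1, n)
logit = -0.25 * (T - 220.0) + 0.08 * (RH - 60.0) - 5.0 * w - 1.0
contrail = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))     # synthetic "observations"

model = LogisticRegression(max_iter=1000).fit(np.column_stack([T, RH, w]), contrail)
prob = model.predict_proba(np.column_stack([T, RH, w]))[:, 1]

def pc_and_hkd(obs, prob, threshold):
    """Percent correct and Hanssen-Kuipers discriminant for a thresholded forecast."""
    yes = prob >= threshold
    hits, misses = np.sum(yes & obs), np.sum(~yes & obs)
    fa, cn = np.sum(yes & ~obs), np.sum(~yes & ~obs)
    pc = (hits + cn) / obs.size
    hkd = hits / (hits + misses) - fa / (fa + cn)
    return pc, hkd

print(pc_and_hkd(contrail, prob, threshold=0.5))                 # PC-favoured threshold
print(pc_and_hkd(contrail, prob, threshold=contrail.mean()))     # climatological frequency
```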

  2. Multiscale Mechano-Biological Finite Element Modelling of Oncoplastic Breast Surgery-Numerical Study towards Surgical Planning and Cosmetic Outcome Prediction.

    PubMed

    Vavourakis, Vasileios; Eiben, Bjoern; Hipwell, John H; Williams, Norman R; Keshtgar, Mo; Hawkes, David J

    2016-01-01

    Surgical treatment for early-stage breast carcinoma primarily necessitates breast conserving therapy (BCT), where the tumour is removed while preserving the breast shape. To date, there have been very few attempts to develop accurate and efficient computational tools that could be used in the clinical environment for pre-operative planning and oncoplastic breast surgery assessment. Moreover, from the breast cancer research perspective, there has been very little effort to model complex mechano-biological processes involved in wound healing. We address this by providing an integrated numerical framework that can simulate the therapeutic effects of BCT over the extended period of treatment and recovery. A validated, three-dimensional, multiscale finite element procedure that simulates breast tissue deformations and physiological wound healing is presented. In the proposed methodology, a partitioned, continuum-based mathematical model for tissue recovery and angiogenesis, and breast tissue deformation is considered. The effectiveness and accuracy of the proposed numerical scheme is illustrated through patient-specific representative examples. Wound repair and contraction numerical analyses of real MRI-derived breast geometries are investigated, and the final predictions of the breast shape are validated against post-operative follow-up optical surface scans from four patients. Mean (standard deviation) breast surface distance errors in millimetres of 3.1 (±3.1), 3.2 (±2.4), 2.8 (±2.7) and 4.1 (±3.3) were obtained, demonstrating the ability of the surgical simulation tool to predict, pre-operatively, the outcome of BCT to clinically useful accuracy. PMID:27466815

  3. Multiscale Mechano-Biological Finite Element Modelling of Oncoplastic Breast Surgery—Numerical Study towards Surgical Planning and Cosmetic Outcome Prediction

    PubMed Central

    Vavourakis, Vasileios; Eiben, Bjoern; Hipwell, John H.; Williams, Norman R.; Keshtgar, Mo; Hawkes, David J.

    2016-01-01

    Surgical treatment for early-stage breast carcinoma primarily necessitates breast conserving therapy (BCT), where the tumour is removed while preserving the breast shape. To date, there have been very few attempts to develop accurate and efficient computational tools that could be used in the clinical environment for pre-operative planning and oncoplastic breast surgery assessment. Moreover, from the breast cancer research perspective, there has been very little effort to model complex mechano-biological processes involved in wound healing. We address this by providing an integrated numerical framework that can simulate the therapeutic effects of BCT over the extended period of treatment and recovery. A validated, three-dimensional, multiscale finite element procedure that simulates breast tissue deformations and physiological wound healing is presented. In the proposed methodology, a partitioned, continuum-based mathematical model for tissue recovery and angiogenesis, and breast tissue deformation is considered. The effectiveness and accuracy of the proposed numerical scheme is illustrated through patient-specific representative examples. Wound repair and contraction numerical analyses of real MRI-derived breast geometries are investigated, and the final predictions of the breast shape are validated against post-operative follow-up optical surface scans from four patients. Mean (standard deviation) breast surface distance errors in millimetres of 3.1 (±3.1), 3.2 (±2.4), 2.8 (±2.7) and 4.1 (±3.3) were obtained, demonstrating the ability of the surgical simulation tool to predict, pre-operatively, the outcome of BCT to clinically useful accuracy. PMID:27466815

  4. Urban Effects in Numerical weather prediction model at Saint-Petersburg Metropolitan Area for winter

    NASA Astrophysics Data System (ADS)

    Gavrilova, Yulia; Mahura, Alexander; Smyshlaev, Sergei; Baklanov, Alexander

    2010-05-01

    In this study, the spatial and temporal variability of meteorological fields due to the thermal and dynamical urban effects of the metropolitan area was estimated for St. Petersburg (Russia). The dependence of these fields on the temporal variability of meteorological variables in the lower surface layer (wind at 10 m and air temperature at 2 m) was estimated as a function of modified parameters - roughness, anthropogenic heat flux, and albedo. The urban modifications were made in the Interaction Soil-Biosphere-Atmosphere (ISBA) land surface scheme of the numerical weather prediction (NWP) model. A research version of the Environment - High Resolution Limited Area Model (Enviro-HIRLAM) was used as the NWP model in the simulations. To select an urban case study for the modelling domain, the meteorological conditions during 2008-2009 were analyzed based on available archived synoptic maps, vertical sounding diagrams, and ground station observations; several specific dates - with low and typical wind conditions - were examined in more detail. The winter period of 29 Jan - 1 Feb 2009 (characterized by dominating low wind conditions and prevailing strong deep inversion and isothermal layers extending up to almost 700 mb) was chosen for evaluation of the thermal and dynamical effects of the St. Petersburg metropolitan area. For the selected dates, several independent runs were performed: (i) no modifications in the scheme (control run) and (ii) a modified run. In the latter, the combined effects of the anthropogenic heat flux (ranging from 50 up to 200 W/m2), an urban roughness parameter of 2 m, and a modified albedo were included. These modifications were applied only to the urban cells, taking into account the urban class fractions in each cell. Due to the presence of snow cover in urban areas, the albedo was also increased up to 0.65 compared with snow-free seasons. The Enviro-HIRLAM runs were performed for a 48 hour forecast length, taking into account a spin up of

  5. A review of numerical models for predicting the energy deposition and resultant thermal response of humans exposed to electromagnetic fields

    SciTech Connect

    Spiegal, R.J.

    1984-08-01

    For humans exposed to electromagnetic (EM) radiation, the resulting thermophysiologic response is not well understood. Because it is unlikely that this information will be determined from quantitative experimentation, it is necessary to develop theoretical models which predict the resultant thermal response after exposure to EM fields. These calculations are difficult and involved because the human thermoregulatory system is very complex. In this paper, the important numerical models are reviewed and possibilities for future development are discussed.

  6. Evaluation of cloud prediction and determination of critical relative humidity for a mesoscale numerical weather prediction model

    SciTech Connect

    Seaman, N.L.; Guo, Z.; Ackerman, T.P.

    1996-04-01

    Predictions of cloud occurrence and vertical location from the Pennsylvania State University/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) were evaluated statistically using cloud observations obtained at Coffeyville, Kansas, as part of the Second International Satellite Cloud Climatology Project Regional Experiment campaign. Seventeen cases were selected for simulation during a November-December 1991 field study. MM5 was used to produce two sets of 36-km simulations, one with and one without four-dimensional data assimilation (FDDA), and a set of 12-km simulations without FDDA, but nested within the 36-km FDDA runs.

  7. Predicting sights from sounds: 6-month old infants’ intermodal numerical abilities

    PubMed Central

    Feigenson, Lisa

    2011-01-01

    Although the psychophysics of infants’ non-symbolic number representations has been well studied, less is known about other characteristics of the Approximate Number System (ANS) in young children. Here, 3 experiments explored the extent to which the ANS yields abstract representations by testing infants’ ability to transfer approximate number representations across sensory modalities. These experiments showed that 6-month old infants matched the approximate number of sounds they heard to the approximate number of sights they saw, looking longer at visual arrays that numerically mis-matched a previously heard auditory sequence. This looking preference was observed when sights and sounds mismatched by 1:3 and 1:2 ratios, but not by a 2:3 ratio. These findings suggest that infants can compare numerical information obtained in different modalities using representations stored in memory. Furthermore, the acuity of 6-month old infants’ comparisons of intermodal numerical sequences appears to parallel that of their comparisons of unimodal sequences. PMID:21616502

  8. Deep vein thrombosis is accurately predicted by comprehensive analysis of the levels of microRNA-96 and plasma D-dimer

    PubMed Central

    Xie, Xuesheng; Liu, Changpeng; Lin, Wei; Zhan, Baoming; Dong, Changjun; Song, Zhen; Wang, Shilei; Qi, Yingguo; Wang, Jiali; Gu, Zengquan

    2016-01-01

    The aim of the present study was to investigate the association between platelet microRNA-96 (miR-96) expression levels and the occurrence of deep vein thrombosis (DVT) in orthopedic patients. A total of 69 consecutive orthopedic patients with DVT and 30 healthy individuals were enrolled. Ultrasonic color Doppler imaging was performed on lower limb veins after orthopedic surgery to determine the occurrence of DVT. An enzyme-linked fluorescent assay was performed to detect the levels of D-dimer in plasma. A quantitative polymerase chain reaction assay was performed to determine the expression levels of miR-96. Expression levels of platelet miR-96 were significantly increased in orthopedic patients after orthopedic surgery. miR-96 expression levels in orthopedic patients with DVT at days 1, 3 and 7 after orthopedic surgery were significantly increased when compared with those in the control group. The increased miR-96 expression levels were correlated with plasma D-dimer levels in orthopedic patients with DVT. However, for the orthopedic patients in the non-DVT group following surgery, miR-96 expression levels were correlated with plasma D-dimer levels. In summary, the present results suggest that the expression levels of miR-96 may be associated with the occurrence of DVT. The occurrence of DVT may be accurately predicted by comprehensive analysis of the levels of miR-96 and plasma D-dimer. PMID:27588107

  9. Numerical Order Processing in Children: From Reversing the Distance-Effect to Predicting Arithmetic

    ERIC Educational Resources Information Center

    Lyons, Ian M.; Ansari, Daniel

    2015-01-01

    Recent work has demonstrated that how we process the relative order--ordinality--of numbers may be key to understanding how we represent numbers symbolically, and has proven to be a robust predictor of more sophisticated math skills in both children and adults. However, it remains unclear whether numerical ordinality is primarily a by-product of…

  10. Inter-Parietal White Matter Development Predicts Numerical Performance in Young Children

    ERIC Educational Resources Information Center

    Cantlon, Jessica F.; Davis, Simon W.; Libertus, Melissa E.; Kahane, Jill; Brannon, Elizabeth M.; Pelphrey, Kevin A.

    2011-01-01

    In an effort to understand the role of interhemispheric transfer in numerical development, we investigated the relationship between children's developing knowledge of numbers and the integrity of their white matter connections between the cerebral hemispheres (the corpus callosum). We used diffusion tensor imaging (DTI) tractography analyses to…

  11. Numerical and experimental predictions of fine-soil erosion, transport and trapping in embankment dam

    NASA Astrophysics Data System (ADS)

    Kanarska, Y.; Lomov, I.; Ezzedine, S. M.; Antoun, T. H.; Glascoe, L. G.

    2011-12-01

    A determination of the safety of dam structures requires the characterization of fine-soil erosion processes and the ability of filter layers to capture fine-soil particles to prevent dam failure. We investigated numerically and experimentally different aspects of this problem at a grain scale. The numerical method was based on Lagrange multiplier technique (Kanarska et al., 2011). The particle-particle interactions were implemented using explicit force-displacement interactions for frictional inelastic particles similar to the distinct element method (DEM) (Cundall and Strack, 1979), with some modifications using the volume of the overlapping region as the input to the contact forces. The first set of numerical tests was performed to describe the response of a granular bed to forcing by a fluid, which flows over the crack surface. We investigated how particle properties, such as size and shape, affect threshold values for critical shear stresses and mean velocities. A good agreement between numerical results and experiments was found. A general constitutive erosion law, critical shear stresses, and erosion velocities were derived and validated against the available experimental range of conditions for different particle sizes, particle shapes, and flow conditions. We confirmed that a linear relationship between particle mass fluxes and shear stresses well describes soil behavior. A second set of numerical and experimental tests to investigate sediment trapping in the filter layers was also performed. The laboratory experiments on soil transport and trapping in granular media were conducted in constant-head flow chamber filled with filter media. We investigated how particle properties and amplitude of the applied hydraulic gradient affect clogging criteria and changes in hydraulic conductivity of the medium. The numerical results were validated against available experimental data. We started with spherical particles. In the future, we are planning to investigate
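
    The linear flux-stress relationship mentioned above is the classic excess-shear erosion law. The sketch below writes it out; the critical shear stress and erodibility coefficient in the example are illustrative values, not the ones fitted in the study.

```python
def erosion_mass_flux(tau, tau_crit, k_d):
    """Linear excess-shear erosion law: E = k_d * (tau - tau_crit) for tau > tau_crit,
    consistent with the linear flux-stress relationship reported in the abstract."""
    return max(0.0, k_d * (tau - tau_crit))

# Illustrative parameters: critical shear stress 2 Pa, erodibility 0.05 kg/(m^2 s Pa)
for tau in (1.0, 2.5, 5.0):
    print(tau, erosion_mass_flux(tau, tau_crit=2.0, k_d=0.05))
```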

  12. Numerical prediction of kinetic model for enzymatic hydrolysis of cellulose using DAE-QMOM approach

    NASA Astrophysics Data System (ADS)

    Jamil, N. M.; Wang, Q.

    2016-06-01

    Bioethanol production from lignocellulosic biomass consists of three fundamental processes: pre-treatment, enzymatic hydrolysis, and fermentation. In the enzymatic hydrolysis phase, the enzymes break the cellulose chains into sugar in the form of cellobiose or glucose. A currently proposed kinetic model for enzymatic hydrolysis of cellulose that uses a population balance equation (PBE) mechanism was studied. The complexity of the model, due to its integrodifferential equations, makes it difficult to find an analytical solution. Therefore, we solved the full PBE model numerically using the DAE-QMOM approach. The computation was carried out using MATLAB software. The numerical results were compared to the asymptotic solution developed in the author's previous paper and to the results of Griggs et al. Besides confirming that the findings were consistent with those references, some significant characteristics were also captured. The PBE model for the enzymatic hydrolysis process can be solved using the DAE-QMOM method. Also, an improved understanding of the physical insights of the model was achieved.

  13. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

    The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and optimal interpolation (OI) filter are examined for their effectiveness as gain matrices, using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
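
    Both filters share the same analysis update; they differ in how the background error covariance is obtained (propagated dynamically in the Kalman filter, prescribed in OI). The sketch below shows that update on a toy three-point state; the covariance structure, observation operator and numbers are made up for illustration.

```python
import numpy as np

def analysis_update(xb, Pb, y, H, R):
    """One Kalman-filter style analysis step: gain K = Pb H^T (H Pb H^T + R)^-1,
    xa = xb + K (y - H xb). Optimal interpolation uses the same update but with a
    prescribed (not dynamically propagated) background error covariance Pb."""
    S = H @ Pb @ H.T + R
    K = Pb @ H.T @ np.linalg.inv(S)
    xa = xb + K @ (y - H @ xb)
    Pa = (np.eye(len(xb)) - K @ H) @ Pb
    return xa, Pa

# Toy 3-point state observed at the first two points (illustrative numbers only)
xb = np.array([1.0, 2.0, 3.0])
Pb = 0.5 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))))   # correlated background errors
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
R = 0.1 * np.eye(2)
y = np.array([1.4, 1.8])
xa, Pa = analysis_update(xb, Pb, y, H, R)
print(xa)
```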

  14. Numerical verification of an analytical model predicting the modal crosscorrelation coefficient

    SciTech Connect

    Roussel, G.; Cuvelliez, C.

    1996-12-01

    In the seismic analysis of linear systems using the Response Spectrum Method, the most probable maximum response is usually given by the double sum equation. The contribution of the crosscorrelations to the response has been investigated by various authors. An analytical method was previously developed by one of the authors aiming at defining an expression of the crosscorrelation coefficient taking into account the limited duration of the seismic excitation. Numerical verification is here performed to test the accuracy of the proposed analytical method.
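
    The double sum referred to above is the complete-quadratic-combination (CQC) rule, R = sqrt(sum_i sum_j rho_ij R_i R_j). The sketch below combines a few modal maxima using the classical Der Kiureghian cross-correlation coefficient for stationary white-noise input; the duration-dependent coefficient studied in the paper is not reproduced here, and the modal values are illustrative.

```python
import numpy as np

def cqc_rho(w_i, w_j, xi_i, xi_j):
    """Classical (stationary white-noise) CQC cross-correlation coefficient of
    Der Kiureghian; the paper's duration-dependent variant is not reproduced."""
    r = w_j / w_i
    num = 8.0 * np.sqrt(xi_i * xi_j) * (xi_i + r * xi_j) * r ** 1.5
    den = (1 - r ** 2) ** 2 + 4 * xi_i * xi_j * r * (1 + r ** 2) + 4 * (xi_i ** 2 + xi_j ** 2) * r ** 2
    return num / den

def cqc_response(modal_max, freqs, damping):
    """Double-sum (CQC) combination: R = sqrt(sum_ij rho_ij * R_i * R_j)."""
    R = np.asarray(modal_max, float)
    total = 0.0
    for i in range(len(R)):
        for j in range(len(R)):
            total += cqc_rho(freqs[i], freqs[j], damping[i], damping[j]) * R[i] * R[j]
    return np.sqrt(total)

# Illustrative modal maxima (same units), natural frequencies (rad/s), damping ratios
print(cqc_response([1.0, 0.6, 0.3], [10.0, 12.0, 30.0], [0.05, 0.05, 0.05]))
```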

  15. Surface pressure profiles, vortex structure and initialization for hurricane prediction. Part II: numerical simulations of track, structure and intensity

    NASA Astrophysics Data System (ADS)

    Davidson, Noel E.; Ma, Yimin

    2012-07-01

    In part 1 of this study, an assessment of commonly used surface pressure profiles to represent TC structures was made. Using the Australian tropical cyclone model, the profiles are tested in case studies of high-resolution prediction of track, structure and intensity. We demonstrate that: (1) track forecasts are mostly insensitive to the imposed structure; (2) in some cases [here Katrina (2005)], specification of vortex structure can have a large impact on prediction of structure and intensity; (3) the forecast model mostly preserves the characteristics of the initial structure and so correct structure at t = 0 is a requirement for improved structure forecasting; and (4) skilful prediction of intensity does not guarantee skilful prediction of structure. It is shown that for Ivan (2004) the initial structure from each profile is preserved during the simulations, and that markedly different structures can have similar intensities. Evidence presented suggests that different initial profiles can sometimes change the timing of intensification. Thus, correct initial vortex structure is an essential ingredient for more accurate intensity and structure prediction.

  16. Verification of precipitation forecasts from two numerical weather prediction models in the Middle Atlantic Region of the USA: A precursory analysis to hydrologic forecasting

    NASA Astrophysics Data System (ADS)

    Siddique, Ridwan; Mejia, Alfonso; Brown, James; Reed, Seann; Ahnert, Peter

    2015-10-01

    Accurate precipitation forecasts are required for accurate flood forecasting. The structures of different precipitation forecasting systems are constantly evolving, with improvements in forecasting techniques, increases in spatial and temporal resolution, improvements in model physics and numerical techniques, and better understanding of, and accounting for, predictive uncertainty. Hence, routine verification is necessary to understand the quality of forecasts as inputs to hydrologic modeling. In this study, we verify precipitation forecasts from the National Centers for Environmental Prediction (NCEP) 11-member Global Ensemble Forecast System Reforecast version 2 (GEFSRv2), as well as the 21-member Short Range Ensemble Forecast (SREF) system. Specifically, basin averaged precipitation forecasts are verified for different basin sizes (spatial scales) in the operating domain of the Middle Atlantic River Forecast Center (MARFC), using multi-sensor precipitation estimates (MPEs) as the observed data. The quality of the ensemble forecasts is evaluated conditionally upon precipitation amounts, forecast lead times, accumulation periods, and seasonality using different verification metrics. Overall, both GEFSRv2 and SREF tend to overforecast light to moderate precipitation and underforecast heavy precipitation. In addition, precipitation forecasts from both systems become increasingly reliable with increasing basin size and decreasing precipitation threshold, and the 24-hourly forecasts show slightly better skill than the 6-hourly forecasts. Both systems show a strong seasonal trend, characterized by better skill during the cool season than the warm season. Ultimately, the verification results lead to guidance on the expected quality of the precipitation forecasts, together with an assessment of their relative quality and unique information content, which is useful and necessary for their application in hydrologic forecasting.

  17. Improving Numerical Weather Predictions of Summertime Precipitation Over the Southeastern U.S. Through a High-Resolution Initialization of the Surface State

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Kumar, Sujay V.; Srikishen, Jayanthi; Jedlovec, Gary J.

    2011-01-01

    It is hypothesized that high-resolution, accurate representations of surface properties such as soil moisture and sea surface temperature are necessary to improve simulations of summertime pulse-type convective precipitation in high resolution models. This paper presents model verification results of a case study period from June-August 2008 over the Southeastern U.S. using the Weather Research and Forecasting numerical weather prediction model. Experimental simulations initialized with high-resolution land surface fields from the NASA Land Information System (LIS) and sea surface temperature (SST) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) are compared to a set of control simulations initialized with interpolated fields from the National Centers for Environmental Prediction 12-km North American Mesoscale model. The LIS land surface and MODIS SSTs provide a more detailed surface initialization at a resolution comparable to the 4-km model grid spacing. Soil moisture from the LIS spin-up run is shown to respond better to the extreme rainfall of Tropical Storm Fay in August 2008 over the Florida peninsula. The LIS has slightly lower errors and higher anomaly correlations in the top soil layer, but exhibits a stronger dry bias in the root zone. The model sensitivity to the alternative surface initial conditions is examined for a sample case, showing that the LIS/MODIS data substantially impact surface and boundary layer properties.

  18. Numerical Predictions of Sonic Boom Signatures for a Straight Line Segmented Leading Edge Model

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa A.; Wilcox, Floyd J.; Cliff, Susan; Thomas, Scott

    2012-01-01

    A sonic boom wind tunnel test was conducted on a straight-line segmented leading edge (SLSLE) model in the NASA Langley 4- by 4- Foot Unitary Plan Wind Tunnel (UPWT). The purpose of the test was to determine whether accurate sonic boom measurements could be obtained while continuously moving the SLSLE model past a conical pressure probe. Sonic boom signatures were also obtained using the conventional move-pause data acquisition method for comparison. The continuous data acquisition approach allows for accurate signatures approximately 15 times faster than a move-pause technique. These successful results provide an incentive for future testing with greatly increased efficiency using the continuous model translation technique with the single probe to measure sonic boom signatures. Two widely used NASA codes, USM3D (Navier-Stokes) and CART3D-AERO (Euler, adjoint-based adaptive mesh), were used to compute off-body sonic boom pressure signatures of the SLSLE model at several different altitudes below the model at Mach 2.0. The computed pressure signatures compared well with wind tunnel data. The effect of the different altitude for signature extraction was evaluated by extrapolating the near field signatures to the ground and comparing pressure signatures and sonic boom loudness levels.

  19. Accurate Prediction of Hyperfine Coupling Constants in Muoniated and Hydrogenated Ethyl Radicals: Ab Initio Path Integral Simulation Study with Density Functional Theory Method.

    PubMed

    Yamada, Kenta; Kawashima, Yukio; Tachikawa, Masanori

    2014-05-13

    We performed ab initio path integral molecular dynamics (PIMD) simulations with a density functional theory (DFT) method to accurately predict hyperfine coupling constants (HFCCs) in the ethyl radical (CβH3-CαH2) and its Mu-substituted (muoniated) compound (CβH2Mu-CαH2). The substitution of a Mu atom, an ultralight isotope of the H atom, with larger nuclear quantum effect is expected to strongly affect the nature of the ethyl radical. The static conventional DFT calculations of CβH3-CαH2 find that the elongation of one Cβ-H bond causes a change in the shape of potential energy curve along the rotational angle via the imbalance of attractive and repulsive interactions between the methyl and methylene groups. Investigation of the methyl-group behavior including the nuclear quantum and thermal effects shows that an unbalanced CβH2Mu group with the elongated Cβ-Mu bond rotates around the Cβ-Cα bond in a muoniated ethyl radical, quite differently from the CβH3 group with the three equivalent Cβ-H bonds in the ethyl radical. These rotations couple with other molecular motions such as the methylene-group rocking motion (inversion), leading to difficulties in reproducing the corresponding barrier heights. Our PIMD simulations successfully predict the barrier heights to be close to the experimental values and provide a significant improvement in muon and proton HFCCs given by the static conventional DFT method. Further investigation reveals that the Cβ-Mu/H stretching motion, methyl-group rotation, methylene-group rocking motion, and HFCC values deeply intertwine with each other. Because these motions are different between the radicals, a proper description of the structural fluctuations reflecting the nuclear quantum and thermal effects is vital to evaluate HFCC values in theory to be comparable to the experimental ones. Accordingly, a fundamental difference in HFCC between the radicals arises from their intrinsic molecular motions at a finite temperature, in

  20. Verification of Precipitation Forecasts from Two Numerical Weather Prediction Models for the Middle- and North-Eastern Region of the USA

    NASA Astrophysics Data System (ADS)

    Siddique, R.; Brown, J.; Reed, S. M.; Mejia, A.

    2014-12-01

    Accurate precipitation and temperature forecasts are the pre-requisites to produce skillful flood forecasts. The structures of different precipitation forecasting systems are constantly evolving, for example, with improvements in the forecasting techniques, increases in spatial and temporal resolution, improvements in model physics and numerical techniques, and better understanding of uncertainty. Hence, routine verification is necessary to understand the quality of forecasts at particular times and locations, and as inputs to hydrologic modeling. Hydrologic forecasters in the National Weather Service are evaluating precipitation and temperature forecasts from a wide range of numerical prediction models to improve the streamflow forecasts. To assist in this effort, our goal here is to verify the operational precipitation forecasts from the National Centers for Environmental Prediction 21-member Short Range Ensemble Forecast (SREF) system, together with precipitation reforecasts from the 11-member Global Ensemble Forecast System (GEFS). The verification is done for the middle- and north-eastern region of the United States and for mean areal precipitation forecasts conditioned on precipitation amounts, lead times, seasonality, and accumulation periods. Multi-sensor precipitation estimates are used as observed data. The effect of different basin sizes on forecast quality is also studied by simply choosing areal extents of varying sizes. Although flood forecasting is the main context of this study, separate analyses are presented for moderate and large precipitation events with a view towards providing additional information to forecasters. The summary of verification statistics indicates similar forecasting performance for both SREF and GEFS reforecasts even though GEFS reforecasts are valid for much longer lead times and coarser grid resolution. Precipitation forecasts from both of these models show greater skill in large basins than in relatively smaller ones

  1. Planning Irreversible Electroporation in the Porcine Kidney: Are Numerical Simulations Reliable for Predicting Empiric Ablation Outcomes?

    SciTech Connect

    Wimmer, Thomas; Srimathveeravalli, Govindarajan; Gutta, Narendra; Ezell, Paula C.; Monette, Sebastien; Maybody, Majid; Erinjery, Joseph P.; Durack, Jeremy C.; Coleman, Jonathan A.; Solomon, Stephen B.

    2015-02-15

    Purpose: Numerical simulations are used for treatment planning in clinical applications of irreversible electroporation (IRE) to determine ablation size and shape. To assess the reliability of simulations for treatment planning, we compared simulation results with empiric outcomes of renal IRE using computed tomography (CT) and histology in an animal model. Methods: The ablation size and shape for six different IRE parameter sets (70–90 pulses, 2,000–2,700 V, 70–100 µs) for monopolar and bipolar electrodes was simulated using a numerical model. Employing these treatment parameters, 35 CT-guided IRE ablations were created in both kidneys of six pigs and followed up with CT immediately and after 24 h. Histopathology was analyzed from postablation day 1. Results: Ablation zones on CT measured 81 ± 18 % (day 0, p ≤ 0.05) and 115 ± 18 % (day 1, p ≤ 0.09) of the simulated size for monopolar electrodes, and 190 ± 33 % (day 0, p ≤ 0.001) and 234 ± 12 % (day 1, p ≤ 0.0001) for bipolar electrodes. Histopathology indicated smaller ablation zones than simulated (71 ± 41 %, p ≤ 0.047) and measured on CT (47 ± 16 %, p ≤ 0.005) with complete ablation of kidney parenchyma within the central zone and incomplete ablation in the periphery. Conclusion: Both numerical simulations for planning renal IRE and CT measurements may overestimate the size of ablation compared to histology, and ablation effects may be incomplete in the periphery.

  2. Operational Numerical Weather Prediction at the Met Office and potential ways forward for operational space weather prediction systems

    NASA Astrophysics Data System (ADS)

    Jackson, David

    solar wind, magnetosphere and ionosphere. The three simulations are directly or indirectly connected to each other based on real-time observation data to reproduce a virtual geo-space region on the supercomputer. Informatics is a new methodology for making precise forecasts of space weather. Based on new information and communication technologies (ICT), it provides more information in both quality and quantity. At NICT, we have been developing a cloud-computing system named "space weather cloud" based on a high-speed network system (JGN2+). Huge-scale distributed storage (1 PB), cluster computers, visualization systems and other resources are expected to yield new findings and services for space weather forecasting. The final goal of the NICT space weather service is to predict near-future space weather conditions and disturbances that can cause satellite malfunctions, telecommunication problems, and GPS navigation errors. In the present talk, we introduce our recent activities on space weather services and discuss how we plan to develop the services from the viewpoints of space science and practical use.

  3. Experimental Observations and Numerical Prediction of Induction Heating in a Graphite Test Article

    SciTech Connect

    Jankowski, Todd A; Johnson, Debra P; Jurney, James D; Freer, Jerry E; Dougherty, Lisa M; Stout, Stephen A

    2009-01-01

    The induction heating coils used in the plutonium casting furnaces at the Los Alamos National Laboratory are studied here. A cylindrical graphite test article has been built, instrumented with thermocouples, and heated in the induction coil that is normally used to preheat the molds during casting operations. Preliminary results of experiments aimed at understanding the induction heating process in the mold portion of the furnaces are reported. The experiments have been modeled in COMSOL Multiphysics and the numerical and experimental results are compared to one another. These comparisons provide insight into the heating process and provide a benchmark for COMSOL calculations of induction heating in the mold portion of the plutonium casting furnaces.

  4. Predictive Modeling of Chemical Hazard by Integrating Numerical Descriptors of Chemical Structures and Short-term Toxicity Assay Data

    PubMed Central

    Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander

    2012-01-01

    Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746
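
    A minimal sketch of the hybrid-modeling idea described above: concatenate chemical descriptors with short-term assay readouts and compare cross-validated performance against descriptors alone. Everything here is an assumption made for illustration (synthetic data, a random forest as the learner); it is not the authors' specific modeling pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical feature blocks for 200 chemicals (values are synthetic):
# 10 structural descriptors plus 5 short-term in vitro assay readouts
descriptors = rng.normal(size=(200, 10))
assays = rng.normal(size=(200, 5))
toxic = (descriptors[:, 0] + 0.8 * assays[:, 0] + rng.normal(0, 0.5, 200)) > 0

chem_only = cross_val_score(RandomForestClassifier(random_state=0),
                            descriptors, toxic, cv=5, scoring="roc_auc")
hybrid = cross_val_score(RandomForestClassifier(random_state=0),
                         np.hstack([descriptors, assays]), toxic, cv=5, scoring="roc_auc")
print("descriptors only AUC:", chem_only.mean().round(3))
print("hybrid (descriptors + assays) AUC:", hybrid.mean().round(3))
```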

  5. Numerical prediction of turbulent flame stability in premixed/prevaporized (HSCT) combustors

    NASA Technical Reports Server (NTRS)

    Winowich, Nicholas S.

    1990-01-01

    A numerical analysis of combustion instabilities that induce flashback in a lean, premixed, prevaporized dump combustor is performed. KIVA-II, a finite volume CFD code for the modeling of transient, multidimensional, chemically reactive flows, serves as the principal analytical tool. The experiment of Proctor and T'ien is used as a reference for developing the computational model. An experimentally derived combustion instability mechanism is presented on the basis of the observations of Proctor and T'ien and other investigators of instabilities in low speed (M less than 0.1) dump combustors. The analysis comprises two independent procedures that begin from a calculated stable flame: The first is a linear increase of the equivalence ratio and the second is the linear decrease of the inflow velocity. The objective is to observe changes in the aerothermochemical features of the flow field prior to flashback. It was found that only the linear increase of the equivalence ratio elicits a calculated flashback result. Though this result did not exhibit large scale coherent vortices in the turbulent shear layer coincident with a flame flickering mode as was observed experimentally, there were interesting acoustic effects which were resolved quite well in the calculation. A discussion of the k-e turbulence model used by KIVA-II is prompted by the absence of combustion instabilities in the model as the inflow velocity is linearly decreased. Finally, recommendations are made for further numerical analysis that may improve correlation with experimentally observed combustion instabilities.

  6. Numerical Predictions on the Final Properties of Metal Injection Moulded Components after Sintering Process

    SciTech Connect

    Song, J.; Barriere, T.; Gelin, J. C.

    2007-04-07

    A macroscopic model based on a viscoplastic constitutive law is presented to describe the sintering process of metallic powder components obtained by injection moulding. The model parameters are identified by the gravitational beam-bending tests in sintering and the sintering experiments in dilatometer. The finite element simulations are carried out to predict the shrinkage, density and strength after sintering. The simulation results have been compared to the experimental ones, and a good agreement has been obtained.

  7. Seamless Meteorology-Chemistry Modelling: Status and Relevance for Numerical Weather Prediction, Air Quality and Climate Research

    NASA Astrophysics Data System (ADS)

    Baklanov, Alexander; EuMetChem Team

    2015-04-01

    Online coupled meteorology atmospheric chemistry models have undergone a rapid evolution in recent years. Although mainly developed by the air quality modelling community, these models are also of interest for numerical weather prediction and climate modelling as they can consider not only the effects of meteorology on air quality, but also the potentially important effects of atmospheric composition on weather. Two ways of online coupling can be distinguished: online integrated and online access coupling. Online integrated models simulate meteorology and chemistry over the same grid in one model using one main timestep for integration. Online access models use independent meteorology and chemistry modules that might even have different grids, but exchange meteorology and chemistry data on a regular and frequent basis. This paper is an overall outcome of the European COST Action ES1004: European Framework for Online Integrated Air Quality and Meteorology Modelling (EuMetChem) and conclusions from the recently organized Symposium on Coupled Chemistry-Meteorology/Climate Modelling: Status and Relevance for Numerical Weather Prediction, Air Quality and Climate Research. It offers a review of the current research status of online coupled meteorology and atmospheric chemistry modelling, a survey of processes relevant to the interactions between atmospheric physics, dynamics and composition; and highlights selected scientific issues and emerging challenges that require proper consideration to improve the reliability and usability of these models for the three scientific communities: air quality, numerical meteorology modelling (including weather prediction) and climate modelling. It presents a synthesis of scientific progress and provides recommendations for future research directions and priorities in the development, application and evaluation of online coupled models.

  8. Numerical prediction of the Mid-Atlantic states cyclone of 18-19 February 1979

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Rosenberg, R.

    1982-01-01

    A series of forecast experiments was conducted to assess the accuracy of the GLAS model and to determine the importance of large scale dynamical processes and diabatic heating to the cyclogenesis. The GLAS model correctly predicted intense coastal cyclogenesis and heavy precipitation. When the forecast was repeated without surface heat and moisture fluxes, the model failed to predict any cyclone development. An extended range forecast, a forecast from the NMC analysis interpolated to the GLAS grid, and a forecast from the GLAS analysis with the surface moisture flux excluded each predicted only weak coastal low development. Diabatic heating resulting from oceanic fluxes contributed significantly to the generation of low level cyclonic vorticity and to the intensification and slow movement of an upper level ridge over the western Atlantic. As an upper level short wave trough approached this ridge, diabatic heating associated with the release of latent heat intensified, and the vorticity gradient, vorticity advection and upper level divergence in advance of the trough were greatly increased, providing strong large scale forcing for the surface cyclogenesis.

  9. Numerical prediction of micro-channel LD heat sink operated with antifreeze based on CFD method

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Liu, Yang; Wang, Chao; Wang, Wentao; Wang, Gang; Tang, Xiaojun

    2014-12-01

    To study theoretically the feasibility of antifreeze coolants as cooling fluids for high power LD heat sinks, a detailed computational fluid dynamics (CFD) analysis of liquid cooled micro-channel heat sinks is presented. The performance with an antifreeze coolant (ethylene glycol aqueous solution) is calculated numerically and compared with that of pure water for heat sinks with the same micro-channel structure. The maximum thermal resistance, total pressure loss (flow resistance), and the thermal resistance and pressure loss as functions of flow rate are computed. The results indicate that the type and temperature of the coolant play an important role in heat sink performance. The overall thermal resistance and pressure loss increase significantly with antifreeze coolants compared with pure water, mainly because of their lower thermal conductivity and higher viscosity. The thermal resistance and pressure loss are functions of the flow rate and operating temperature. Increasing the coolant flow rate reduces the thermal resistance but increases the pressure loss significantly; the thermal resistance tends to a limit with increasing flow rate, while the pressure loss tends to increase exponentially. A low operating temperature chiefly increases the pressure loss rather than the thermal resistance, owing to the marked increase in fluid viscosity. The actual working point of the cooling circulation system can be determined from the pressure drop vs. flow rate curves of the micro-channel heat sink and of the circulation system. In the same system, changing the type and/or temperature of the coolant shifts the working point, that is, the working flow rate and pressure change simultaneously, which in turn affects heat sink performance. According to the numerical simulation results, if ethylene glycol aqueous
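    As a hedged illustration of the working-point argument above, the sketch below intersects an assumed quadratic heat-sink pressure-drop curve with an assumed pump (circulation system) curve; the curve coefficients are placeholders and are not taken from the reported simulations.

    from scipy.optimize import brentq

    # Assumed, illustrative curve fits (pressure in Pa, flow rate q in L/min);
    # not values from the reported CFD results.
    def heatsink_dp(q, a=2.0e3, b=4.0e2):
        """Pressure loss of the micro-channel heat sink: rises with flow rate."""
        return a * q**2 + b * q

    def pump_dp(q, dp_max=6.0e4, k=1.5e3):
        """Pressure head the circulation system can supply: falls with flow rate."""
        return dp_max - k * q**2

    # The working point is where the supplied head equals the heat-sink loss.
    q_work = brentq(lambda q: pump_dp(q) - heatsink_dp(q), 1e-6, 10.0)
    print(f"working point: {q_work:.2f} L/min at {heatsink_dp(q_work)/1e3:.1f} kPa")

    Switching to a more viscous coolant or lowering its temperature would raise heatsink_dp (and typically lower the deliverable head), so the intersection, and hence the working flow rate and pressure, shifts; this is the mechanism the abstract describes.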

  10. Numerical predictions for laminar source-sink flow in a rotating cylindrical cavity

    NASA Astrophysics Data System (ADS)

    Chew, J. W.; Owen, J. M.; Pincombe, J. R.

    1984-06-01

    Numerical solutions are presented for steady, axisymmetric, laminar, isothermal, source-sink flow in a rotating cylindrical cavity. These results, which are in good agreement with previously published experimental work, have been used to give a fresh insight into the nature of the flow and to investigate the validity of other theoretical solutions. When the fluid enters the cavity through a central uniform radial source and leaves through an outer sink, it is shown that the flow near the disks can be approximated by two known analytical solutions. If the radial source is replaced by an axial inlet, the flow becomes more complex, with a wall jet forming on the downstream disk at sufficiently high flow rates.

  11. Numerical Modeling Tools for the Prediction of Solution Migration Applicable to Mining Site

    SciTech Connect

    Martell, M.; Vaughn, P.

    1999-01-06

    Mining has always had an important influence on the cultures and traditions of communities around the globe and throughout history. Today, because mining legislation places heavy emphasis on environmental protection, there is great interest in developing a comprehensive understanding of ancient mining and mining sites. Multi-disciplinary approaches (e.g., Pb isotopes as tracers) are being used to explore the distribution of metals in natural environments. Another successful approach is to model solution migration numerically. A proven method for simulating solution migration in natural rock salt has been applied to project, over 10,000 years, the system performance and solution concentrations surrounding a proposed nuclear waste repository. This capability is readily adaptable to simulating solution migration around mining sites.

  12. Numerical prediction of algae cell mixing feature in raceway ponds using particle tracing methods.

    PubMed

    Ali, Haider; Cheema, Taqi A; Yoon, Ho-Sung; Do, Younghae; Park, Cheol W

    2015-02-01

    In the present study, a novel technique involving numerical computation of the mixing length of algae particles in raceway ponds was used to evaluate the mixing process. A mixing length greater than the maximum streamwise distance (MSD) of the algae cells indicates that the cells experienced adequate turbulent mixing in the pond. A coupling methodology was adapted to map the pulsating effects of a 2D paddle wheel onto a 3D raceway pond. Turbulent mixing was examined based on computations of the mixing length, residence time, and algae cell distribution in the pond. The results revealed that the particle tracing methodology is an improved approach for characterizing the mixing phenomenon. Moreover, the algae cell distribution aided in identifying the degree of mixing in terms of mixing length and residence time. PMID:25163842
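    A minimal sketch of the mixing-length criterion described above, assuming particle trajectories are already available as arrays of positions (for example, exported from a CFD particle-tracing run); the function names are hypothetical, and the synthetic random-walk trajectories stand in for real CFD output.

    import numpy as np

    def mixing_length(traj):
        """Integrated path length of one particle trajectory of shape (n_steps, 3)."""
        return np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))

    def max_streamwise_distance(traj, streamwise_axis=0):
        """Maximum excursion of the particle along the streamwise direction."""
        s = traj[:, streamwise_axis]
        return s.max() - s.min()

    def adequately_mixed(traj, streamwise_axis=0):
        """Criterion from the abstract: a mixing length exceeding the maximum
        streamwise distance (MSD) indicates adequate turbulent mixing."""
        return mixing_length(traj) > max_streamwise_distance(traj, streamwise_axis)

    # Hypothetical usage with synthetic trajectories of shape (n_particles, n_steps, 3)
    trajs = np.cumsum(np.random.normal(scale=0.01, size=(100, 500, 3)), axis=1)
    fraction_mixed = np.mean([adequately_mixed(t) for t in trajs])
    print(f"fraction of cells adequately mixed: {fraction_mixed:.2f}")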

  13. Analyses of Global Monthly Precipitation Using Gauge Observations, Satellite Estimates, and Numerical Model Predictions.

    NASA Astrophysics Data System (ADS)

    Xie, Pingping; Arkin, Phillip A.

    1996-04-01

    An algorithm is developed to construct global gridded fields of monthly precipitation by merging estimates from five sources of information with different characteristics, including gauge-based monthly analyses from the Global Precipitation Climatology Centre, three types of satellite estimates [the infrared-based GOES Precipitation Index, the microwave (MW) scattering-based Grody, and the MW emission-based Chang estimates], and predictions produced by the operational forecast model of the European Centre for Medium-Range Weather Forecasts. A two-step strategy is used to 1) reduce the random error found in the individual sources and 2) reduce the bias of the combined analysis. First, the three satellite-based estimates and the model predictions are combined linearly using a maximum likelihood estimate, in which the weighting coefficients are inversely proportional to the squares of the individual random errors determined by comparison with gauge observations and subjective assumptions. This combined analysis is then blended with an analysis based on gauge observations using a method that presumes that the bias of the gauge-based field is small where sufficient gauges are available.
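    The first merging step, a maximum likelihood linear combination with weights inversely proportional to the squared random errors of the individual inputs, is standard inverse-variance weighting. A minimal sketch is given below; the grid size and error values are placeholder assumptions, not the errors derived from the gauge comparisons in the paper.

    import numpy as np

    def ml_combine(fields, errors):
        """Maximum likelihood (inverse-error-squared) linear combination of fields.

        fields : list of 2D arrays (monthly precipitation estimates on a common grid)
        errors : list of random-error estimates for the corresponding sources
        """
        weights = np.array([1.0 / e**2 for e in errors])
        weights /= weights.sum()                       # normalize so weights sum to one
        return sum(w * f for w, f in zip(weights, fields))

    # Hypothetical example: GPI, MW scattering, MW emission, model forecast
    shape = (72, 144)                                  # e.g. a 2.5-degree global grid
    estimates = [np.random.gamma(2.0, 40.0, shape) for _ in range(4)]
    random_errors = [1.2, 1.0, 1.4, 1.8]               # assumed relative random errors
    merged = ml_combine(estimates, random_errors)
    print(merged.shape, float(merged.mean()))

    The second step described in the abstract, blending the combined field with the gauge-based analysis, would then adjust this result toward the gauge field where gauge coverage is dense, reducing the residual bias.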