Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?
NASA Astrophysics Data System (ADS)
Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim
2014-11-01
Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical and experimental results is often hampered for several reasons. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g. determining the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our ongoing, extensive study of hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations, low-dimensional modelling as well as experiments. The major focus is on wave regimes, wave height and wave celerity as a function of Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences between numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
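The reduced-order idea described above can be illustrated with a toy surrogate: build a basis from training "waveforms" by SVD and interpolate the projection coefficients across the parameter. This is only a sketch of the concept, not the paper's pipeline (which uses greedy reduced bases, empirical interpolation in time, and fits of NR data); the waveform family, parameter grid, and rank below are invented for illustration.

```python
import numpy as np

def build_surrogate(params, waveforms, rank):
    # Toy reduced-order surrogate: an SVD basis of training waveforms plus
    # per-coefficient interpolation across a 1-D parameter (e.g. mass ratio).
    # waveforms has one column per training parameter value.
    U, _, _ = np.linalg.svd(waveforms, full_matrices=False)
    basis = U[:, :rank]                      # orthonormal reduced basis
    coeffs = basis.T @ waveforms             # projection coefficients per training point
    def evaluate(q):
        # Interpolate each basis coefficient to the requested parameter value,
        # then reconstruct the waveform from the reduced basis.
        c = np.array([np.interp(q, params, coeffs[k]) for k in range(rank)])
        return basis @ c
    return evaluate
```

Evaluating the surrogate costs only a few interpolations and one small matrix-vector product, which is the source of the millisecond-scale evaluation times quoted in the abstract.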
Fast and accurate numerical method for predicting gas chromatography retention time.
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-08-07
Predictive modeling of gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate the parameters and to optimize the temperature programming, in gas chromatography, for the separation of compounds. Different authors have proposed the use of numerical methods for solving these models, but these methods demand greater computational time. Hence, a new method for solving the predictive modeling of analyte retention time is presented. This algorithm is an alternative to traditional methods because it transforms the problem into one of root determination within defined intervals. The proposed approach allows for retention time (tr) calculation, with accuracy determined by the user of the method, and significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
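The root-finding reformulation can be sketched as follows, under simplifying assumptions: a linear temperature ramp, a constant hold-up time, and a hypothetical two-parameter thermodynamic model ln k = A + B/T. All constants are illustrative, not taken from the paper; the analyte elutes when the elution integral reaches 1, and tr is found by bracketed bisection.

```python
import math

def retention_factor(T, A=-12.0, B=6000.0):
    # Hypothetical two-parameter thermodynamic model: ln k = A + B/T.
    return math.exp(A + B / T)

def elution_fraction(tr, T0=320.0, rate=0.1, t_hold=60.0, n=2000):
    # Trapezoidal approximation of  int_0^tr dt / (t_M * (1 + k(T(t))))
    # for a linear ramp T(t) = T0 + rate*t with constant hold-up time t_M;
    # the analyte elutes when this integral reaches 1.
    h = tr / n
    total = 0.0
    for i in range(n + 1):
        T = T0 + rate * i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w / (t_hold * (1.0 + retention_factor(T)))
    return total * h

def solve_retention_time(lo=1.0, hi=1e5, tol=1e-6):
    # The elution integral is monotone in tr, so tr is the bracketed root of
    # elution_fraction(tr) - 1 = 0; plain bisection converges unconditionally,
    # with accuracy controlled by tol, as in the paper's user-set accuracy.
    flo = elution_fraction(lo) - 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        fmid = elution_fraction(mid) - 1.0
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

A production implementation would use an adaptive quadrature and a faster bracketed solver (e.g. Brent's method), but the structure is the same: tr is a root, not the endpoint of a stepwise integration.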
On numerically accurate finite element solutions in the fully plastic range
NASA Technical Reports Server (NTRS)
Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.
1974-01-01
A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double-edge-cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are also discussed.
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulations, as well as new trends in gasoline specifications, has driven rapid changes in the process, including reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and the revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1975-01-01
The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
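The grid-refinement check used to verify a scheme's order of accuracy can be sketched generically: halving the step should divide a p-th order scheme's truncation error by 2^p. The schemes below are standard central differences on a model problem, standing in for the paper's partial-implicitization scheme, which is not reproduced here.

```python
import math

def derivative_error(scheme, h, x=1.0):
    # Error of a finite-difference derivative of sin at x against cos(x).
    u, exact = math.sin, math.cos(x)
    if scheme == "second":
        approx = (u(x + h) - u(x - h)) / (2 * h)
    else:  # standard fourth-order central difference
        approx = (-u(x + 2*h) + 8*u(x + h) - 8*u(x - h) + u(x - 2*h)) / (12 * h)
    return abs(approx - exact)

def observed_order(scheme, h=0.1):
    # Halving h should divide the truncation error by 2^p for a p-th order
    # scheme; log2 of the error ratio is the observed order of accuracy.
    return math.log(derivative_error(scheme, h) / derivative_error(scheme, h / 2), 2)
```

The same two-resolution ratio test applies to full PDE solvers such as the Burgers' equation runs in the abstract, as long as the step sizes stay above the roundoff floor.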
Time-Accurate Numerical Simulations of a Synthetic Jet in Quiescent Air
NASA Technical Reports Server (NTRS)
Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.
2007-01-01
The unsteady evolution of a three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order-accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of the synthetic jet are carried out at a Reynolds number (based on the average velocity during the discharge phase of the cycle, V(sub j), and the jet width, d) of 750 and a Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.
A gene expression biomarker accurately predicts estrogen ...
The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1 screening tests. The ToxCast program currently includes 18 HTS in vitro assays that evaluate the ability of chemicals to modulate estrogen receptor α (ERα), an important endocrine target. We propose microarray-based gene expression profiling as a complementary approach to predict ERα modulation and have developed computational methods to identify ERα modulators in an existing database of whole-genome microarray data. The ERα biomarker consisted of 46 ERα-regulated genes with consistent expression patterns across 7 known ER agonists and 3 known ER antagonists. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression data sets from experiments in MCF-7 cells. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% or 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) OECD ER reference chemicals including “very weak” agonists and replicated predictions based on 18 in vitro ER-associated HTS assays. For 114 chemicals present in both the HTS data and the MCF-7 c
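The balanced-accuracy figures quoted above (94% and 93%) are the mean of sensitivity and specificity, a metric that is robust when active and inactive chemicals are unevenly represented. A minimal sketch of the computation:

```python
def balanced_accuracy(y_true, y_pred):
    # Mean of sensitivity and specificity; unlike raw accuracy, this is not
    # inflated when one class (e.g. inactive chemicals) dominates the set.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 0.5 * (sensitivity + specificity)
```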
You Can Accurately Predict Land Acquisition Costs.
ERIC Educational Resources Information Center
Garrigan, Richard
1967-01-01
Land acquisition costs were tested for predictability based upon the 1962 assessed valuations of privately held land acquired for campus expansion by the University of Wisconsin from 1963-1965. By correlating the land acquisition costs of 108 properties acquired during the 3-year period with (1) the assessed value of the land, (2) the assessed…
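The correlation approach described above amounts to fitting acquisition cost against assessed value by ordinary least squares and reading predictions off the fitted line. The sketch below uses invented numbers, not the study's 108 properties.

```python
def fit_cost_model(assessed, cost):
    # Ordinary least squares for cost ~ a + b * assessed_value.
    n = len(assessed)
    mx = sum(assessed) / n
    my = sum(cost) / n
    sxx = sum((x - mx) ** 2 for x in assessed)
    sxy = sum((x - mx) * (y - my) for x, y in zip(assessed, cost))
    b = sxy / sxx          # cost per unit of assessed value
    a = my - b * mx        # fixed component of acquisition cost
    return a, b

def predict_cost(a, b, assessed_value):
    return a + b * assessed_value
```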
Towards more accurate vegetation mortality predictions
Sevanto, Sanna Annika; Xu, Chonggang
2016-09-26
Predicting the fate of vegetation under a changing climate is one of the major challenges of the climate modeling community. Terrestrial vegetation dominates the carbon and water cycles over land areas, and dramatic changes in vegetation cover resulting from stressful environmental conditions such as drought feed directly back to local and regional climate, potentially leading to a vicious cycle in which vegetation recovery after a disturbance is delayed or impossible.
Numerical prediction of airplane trailing vortices
NASA Astrophysics Data System (ADS)
Czech, M. J.; Crouch, J. D.; Miller, G. D.; Strelets, M.
2004-11-01
The accurate prediction of airplane trailing vortices is of great interest both for cruise conditions, in conjunction with the formation of contrails, and for approach conditions, for reasons of flight safety and active vortex control. A numerical approach is introduced based on a quasi-3D Reynolds-Averaged Navier-Stokes formulation with a one-equation turbulence model. The numerical results show good agreement with wind-tunnel data out to ten spans for a range of wing and tail loadings typical of commercial airplanes in a landing configuration. The results show a one-, two- and three-pair vortex system in the near field with only minor changes to the initial lift distribution. The CFD correctly predicts the strength, demise and position of the individual vortex pairs over a range of test cases. The approach is further extended by considering thrust effects. For cruise conditions, far-field predictions show the entrainment of the jet plume into the wake and provide the potential for coupling with a micro-physics model to predict the formation and early evolution of contrails. Potential influences of configuration details on the plume entrainment are considered. This numerical method also offers an attractive approach for assessing active schemes designed to accelerate the break-up of airplane trailing vortices.
A predictable and accurate technique with elastomeric impression materials.
Barghi, N; Ontiveros, J C
1999-08-01
A method for obtaining more predictable and accurate final impressions with polyvinylsiloxane impression materials in conjunction with stock trays is proposed and tested. Heavy impression material is used in advance for construction of a modified custom tray, while extra-light material is used for obtaining a more accurate final impression.
Accurate torque-speed performance prediction for brushless dc motors
NASA Astrophysics Data System (ADS)
Gipper, Patrick D.
Desirable characteristics of the brushless dc motor (BLDCM) have resulted in its application in electrohydrostatic (EH) and electromechanical (EM) actuation systems. Effective application of the BLDCM, however, requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current, and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral horsepower motor sizes, and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
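For orientation, the torque-speed curve of a dc machine under the idealized linear model (torque proportional to current, back-EMF proportional to speed) can be sketched as below. The parameter values are invented, and the commutation and nonlinear effects captured by the simulation software in the abstract are deliberately ignored.

```python
def torque_speed_curve(V=28.0, R=0.5, Kt=0.05, Ke=0.05, n=5):
    # Idealized steady-state dc motor model (SI units: V, ohm, N*m/A, V*s/rad).
    # Torque falls linearly from the stall torque Kt*V/R at zero speed to
    # zero at the no-load speed V/Ke, where back-EMF cancels the supply.
    w_noload = V / Ke
    points = []
    for i in range(n + 1):
        w = w_noload * i / n
        current = (V - Ke * w) / R      # back-EMF limits the phase current
        points.append((w, Kt * current))
    return points
```

Real BLDCM curves deviate from this line due to commutation, inductance, and drive electronics, which is precisely why the abstract's nonlinear simulation is needed for accurate prediction.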
Fast and accurate predictions of covalent bonds in chemical space
NASA Astrophysics Data System (ADS)
Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (˜1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
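The first-order "vertical" estimate at the heart of this study can be illustrated on a toy model potential: predict the target bonding energy at fixed geometry from the reference energy plus the derivative with respect to the interpolation parameter λ. The Morse potential and its λ-dependence below are invented stand-ins for the quantum-chemical potentials in the paper.

```python
import math

def morse(r, lam):
    # Toy potential "alchemically" interpolating between two diatomics:
    # well depth and stiffness both vary linearly in lambda (an assumption).
    D = 4.0 + 1.5 * lam
    a = 1.0 + 0.3 * lam
    return D * (1.0 - math.exp(-a * (r - 1.0))) ** 2

def first_order_estimate(r, h=1e-4):
    # E(1) ~ E(0) + dE/dlambda at lambda=0 (central finite difference):
    # a vertical interpolation, since the geometry r is held fixed.
    e0 = morse(r, 0.0)
    dedl = (morse(r, h) - morse(r, -h)) / (2 * h)
    return e0 + dedl
```

Because the toy potential is nonlinear in λ, the first-order estimate carries a residual error; in the paper, the near-linearity of the bonding potential under conditions (i)-(iii) is what makes the analogous estimate chemically accurate.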
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
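The parameterized model discussed above has the form of the widely used Stockdon et al. (2006) runup formula (setup plus incident and infragravity swash from offshore wave height, period, and beach slope), and the assimilation step can be sketched as an inverse-variance weighted average, one simple way to realize the weighted combination mentioned in the abstract. Coefficients follow the published parameterization; the weighting scheme is an assumption.

```python
import math

G = 9.81  # m/s^2

def runup_2pct(H0, T, beta):
    # 2% exceedance runup in the form of Stockdon et al. (2006):
    # H0 = deep-water wave height (m), T = peak period (s), beta = beach slope.
    L0 = G * T**2 / (2 * math.pi)                          # deep-water wavelength
    setup = 0.35 * beta * math.sqrt(H0 * L0)
    swash = math.sqrt(H0 * L0 * (0.563 * beta**2 + 0.004)) / 2
    return 1.1 * (setup + swash)

def assimilate(pred_a, var_a, pred_b, var_b):
    # Inverse-variance weighted average of two runup predictions; the
    # combined error variance is never larger than either input variance,
    # consistent with the reduction in prediction error variance reported.
    wa, wb = 1.0 / var_a, 1.0 / var_b
    return (wa * pred_a + wb * pred_b) / (wa + wb), 1.0 / (wa + wb)
```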
Fast and Accurate Learning When Making Discrete Numerical Estimates
Sanborn, Adam N.; Beierholm, Ulrik R.
2016-01-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
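The two decision functions the participants fell between can be made concrete with a discrete Bayesian toy model: compute a posterior over counts, then either take its maximum or draw a single sample from it. The prior, likelihood, and numbers below are invented for illustration, not the experimental distributions.

```python
import random

def posterior(prior, likelihood):
    # Discrete Bayes rule: weight a learned prior over counts by the
    # likelihood of the noisy observation, then normalize.
    post = {k: prior[k] * likelihood(k) for k in prior}
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

def map_estimate(post):
    # Decision function 1: take the maximum of the posterior.
    return max(post, key=post.get)

def sample_estimate(post, rng):
    # Decision function 2: draw one sample from the posterior (inverse CDF).
    u = rng.random()
    acc = 0.0
    for k in sorted(post):
        acc += post[k]
        if u <= acc:
            return k
    return max(post)
```

With a bimodal prior like the ones learned in the experiments, the two rules behave differently: the MAP response always picks the dominant mode, while posterior sampling scatters responses in proportion to posterior mass.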
On the Accurate Prediction of CME Arrival At the Earth
NASA Astrophysics Data System (ADS)
Zhang, Jie; Hess, Phillip
2016-07-01
We will discuss relevant issues regarding the accurate prediction of CME arrival at the Earth, from both observational and theoretical points of view. In particular, we clarify the importance of separating the study of CME ejecta from the ejecta-driven shock in interplanetary CMEs (ICMEs). For a number of CME-ICME events well observed by SOHO/LASCO, STEREO-A and STEREO-B, we carry out 3-D measurements by superimposing geometries onto both the ejecta and the sheath separately. These measurements are then used to constrain a Drag-Based Model, which is improved through a modification that includes the height dependence of the drag coefficient in the model. Combining all these factors allows us to create predictions for both fronts at 1 AU and compare them with actual in-situ observations. We show an ability to predict the sheath arrival with an average error of under 4 hours, with an RMS error of about 1.5 hours. For the CME ejecta, the error is less than two hours with an RMS error within an hour. Through using the best observations of CMEs, we show the power of our method in accurately predicting CME arrival times. The limitations and implications of our accurate prediction method will be discussed.
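The Drag-Based Model referenced above evolves the front with dv/dt = -γ(v - w)|v - w|, where w is the ambient solar-wind speed. A minimal integration is sketched below; the height-dependent form γ(r) = γ0(r0/r) is a hypothetical stand-in for the modification described in the abstract, and all parameter values are illustrative.

```python
def cme_arrival(r0, v0, w=400.0, gamma0=0.2e-7, dt=300.0):
    # Drag-based model for an ICME front: dv/dt = -gamma*(v - w)*|v - w|.
    # gamma(r) = gamma0*(r0/r) is a hypothetical height-dependent drag
    # coefficient standing in for the paper's modification.
    # Units: km for distance, km/s for speed, s for time.
    AU = 1.496e8
    r, v, t = r0, v0, 0.0
    while r < AU:                      # explicit Euler march to 1 AU
        gamma = gamma0 * (r0 / r)
        v += -gamma * (v - w) * abs(v - w) * dt
        r += v * dt
        t += dt
    return t / 3600.0, v               # transit time (hours), arrival speed (km/s)
```

Fast CMEs launched above the wind speed decelerate toward w and never cross below it, which is why drag-based transit times are so sensitive to the assumed γ and w.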
Numerical ability predicts mortgage default.
Gerardi, Kristopher; Goette, Lorenz; Meier, Stephan
2013-07-09
Unprecedented levels of US subprime mortgage defaults precipitated a severe global financial crisis in late 2008, plunging much of the industrialized world into a deep recession. However, the fundamental reasons for why US mortgages defaulted at such spectacular rates remain largely unknown. This paper presents empirical evidence showing that the ability to perform basic mathematical calculations is negatively associated with the propensity to default on one's mortgage. We measure several aspects of financial literacy and cognitive ability in a survey of subprime mortgage borrowers who took out loans in 2006 and 2007, and match them to objective, detailed administrative data on mortgage characteristics and payment histories. The relationship between numerical ability and mortgage default is robust to controlling for a broad set of sociodemographic variables, and is not driven by other aspects of cognitive ability. We find no support for the hypothesis that numerical ability impacts mortgage outcomes through the choice of the mortgage contract. Rather, our results suggest that individuals with limited numerical ability default on their mortgage due to behavior unrelated to the initial choice of their mortgage.
Towards numerical prediction of cavitation erosion
Fivel, Marc; Franc, Jean-Pierre; Chandra Roy, Samir
2015-01-01
This paper is intended to provide a potential basis for a numerical prediction of cavitation erosion damage. The proposed method can be divided into two steps. The first step consists in determining the loading conditions due to cavitation bubble collapses. It is shown that individual pits observed on highly polished metallic samples exposed to cavitation for a relatively short time can be considered as the signature of bubble collapse. By combining pitting tests with an inverse finite-element modelling (FEM) of the material response to a representative impact load, loading conditions can be derived for each individual bubble collapse in terms of stress amplitude (in gigapascals) and radial extent (in micrometres). This step requires characterizing as accurately as possible the properties of the material exposed to cavitation. This characterization should include the effect of strain rate, which is known to be high in cavitation erosion (typically of the order of several thousand s−1). Nanoindentation techniques as well as compressive tests at high strain rate using, for example, a split Hopkinson pressure bar test system may be used. The second step consists in developing an FEM approach to simulate the material response to the repetitive impact loads determined in step 1. This includes a detailed analysis of the hardening process (isotropic versus kinematic) in order to properly account for fatigue, as well as the development of a suitable model of material damage and failure to account for mass loss. Although the whole method is not yet fully operational, promising results are presented that show that such a numerical method might be, in the long term, an alternative to the correlative techniques used so far for cavitation erosion prediction. PMID:26442139
Accurate numerical simulation of short fiber optical parametric amplifiers.
Marhic, M E; Rieznik, A A; Kalogerakis, G; Braimiotis, C; Fragnito, H L; Kazovsky, L G
2008-03-17
We improve the accuracy of numerical simulations for short fiber optical parametric amplifiers (OPAs). Instead of using the usual coarse-step method, we adopt a model for birefringence and dispersion which uses fine-step variations of the parameters. We also improve the split-step Fourier method by exactly treating the nonlinear ellipse rotation terms. We find that results obtained this way for two-pump OPAs can be significantly different from those obtained by using the usual coarse-step fiber model, and/or neglecting ellipse rotation terms.
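The split-step Fourier method at the core of such simulations can be sketched for the scalar nonlinear Schrödinger equation. The two-pump, birefringent fine-step model of the paper is not reproduced; the skeleton below shows why fine steps are natural in this method, since the fiber parameters can simply be re-sampled at every step along z. All parameter values in the usage are illustrative.

```python
import numpy as np

def split_step_fourier(A0, dz, steps, beta2, gamma, dt):
    # Symmetric (Strang) split-step Fourier method for the scalar NLSE
    #   i dA/dz = (beta2/2) d2A/dt2 - gamma |A|^2 A.
    # Dispersion is applied in the Fourier domain (half step before and after
    # each full nonlinear step); beta2 and gamma could be re-sampled every
    # fine step along z, in the spirit of the fine-step fiber model.
    n = len(A0)
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)          # angular frequency grid
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)    # half-step dispersion operator
    A = A0.astype(complex)
    for _ in range(steps):
        A = np.fft.ifft(half_disp * np.fft.fft(A))       # half dispersion step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)   # full nonlinear step
        A = np.fft.ifft(half_disp * np.fft.fft(A))       # half dispersion step
    return A
```

Both substeps are unitary, so the scheme conserves power to roundoff; with beta2 = -1 and gamma = 1 the fundamental soliton sech(t) propagates with an unchanged envelope, a standard correctness check.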
Accurate spectral numerical schemes for kinetic equations with energy diffusion
NASA Astrophysics Data System (ADS)
Wilkening, Jon; Cerfon, Antoine J.; Landreman, Matt
2015-08-01
We examine the merits of using a family of polynomials that are orthogonal with respect to a non-classical weight function to discretize the speed variable in continuum kinetic calculations. We consider a model one-dimensional partial differential equation describing energy diffusion in velocity space due to Fokker-Planck collisions. This relatively simple case allows us to compare the results of the projected dynamics with an expensive but highly accurate spectral transform approach. It also allows us to integrate in time exactly, and to focus entirely on the effectiveness of the discretization of the speed variable. We show that for a fixed number of modes or grid points, the non-classical polynomials can be many orders of magnitude more accurate than classical Hermite polynomials or finite-difference solvers for kinetic equations in plasma physics. We provide a detailed analysis of the difference in behavior and accuracy of the two families of polynomials. For the non-classical polynomials, if the initial condition is not smooth at the origin when interpreted as a three-dimensional radial function, the exact solution leaves the polynomial subspace for a time, but returns (up to roundoff accuracy) to the same point evolved to by the projected dynamics in that time. By contrast, using classical polynomials, the exact solution differs significantly from the projected dynamics solution when it returns to the subspace. We also explore the connection between eigenfunctions of the projected evolution operator and (non-normalizable) eigenfunctions of the full evolution operator, as well as the effect of truncating the computational domain.
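A family of polynomials orthogonal with respect to a non-classical weight can be built numerically by Gram-Schmidt under a weighted quadrature inner product. The sketch below uses the speed-space weight v^2 exp(-v^2) as an example consistent with the radial interpretation in the abstract; the quadrature grid and degrees are illustrative, and a production code would use a stable three-term recurrence (Stieltjes procedure) instead of plain Gram-Schmidt.

```python
import numpy as np

def orthonormal_polys(nmax, weight_fn, x, quad_w):
    # Gram-Schmidt on the monomials 1, x, ..., x^nmax under the inner product
    #   <f, g> = sum_i quad_w[i] * weight_fn(x[i]) * f(x[i]) * g(x[i]),
    # a quadrature stand-in for the integral of f*g*w over the speed domain.
    wgt = weight_fn(x) * quad_w
    basis = []
    for n in range(nmax + 1):
        p = x.astype(float) ** n
        for q in basis:
            p = p - np.sum(wgt * p * q) * q      # remove projections onto earlier polys
        p = p / np.sqrt(np.sum(wgt * p * p))     # normalize
        basis.append(p)
    return basis
```

Expanding a distribution in such a basis tailors the discretization to the natural measure of the problem, which is the source of the accuracy advantage over classical Hermite polynomials reported in the abstract.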
Accurate numerical solutions for elastic-plastic models
Schreyer, H. L.; Kulak, R. F.; Kramer, J. M.
1980-03-01
The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated.
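The radial-return idea shared by both algorithms can be sketched for the simpler 3-D small-strain case (the paper studies plane stress, which requires an extra iteration on the out-of-plane strain): an elastic trial stress is computed, and if it violates the von Mises yield condition it is scaled radially back onto the updated yield surface. Material constants below are invented, in MPa.

```python
import numpy as np

def radial_return(eps, eps_p_old, alpha_old, E=200e3, nu=0.3, H=2e3, sy=250.0):
    # Elastic-predictor / radial-return update for von Mises plasticity with
    # linear isotropic hardening (small strain, 3x3 tensors, units MPa).
    G = E / (2 * (1 + nu))                       # shear modulus
    K = E / (3 * (1 - 2 * nu))                   # bulk modulus
    eps_e = eps - eps_p_old                      # trial elastic strain
    vol = np.trace(eps_e)
    dev = eps_e - vol / 3.0 * np.eye(3)
    s_trial = 2 * G * dev                        # trial deviatoric stress
    q_trial = np.sqrt(1.5 * np.sum(s_trial * s_trial))   # trial von Mises stress
    f = q_trial - (sy + H * alpha_old)           # yield function
    if f <= 0:                                   # elastic step: trial state is final
        return s_trial + K * vol * np.eye(3), eps_p_old, alpha_old
    dgamma = f / (3 * G + H)                     # consistency condition
    s = (1 - 3 * G * dgamma / q_trial) * s_trial  # scale radially onto yield surface
    eps_p = eps_p_old + 1.5 * dgamma * s_trial / q_trial  # associated flow rule
    return s + K * vol * np.eye(3), eps_p, alpha_old + dgamma
```

The "radial" name is literal: the deviatoric return direction coincides with the trial direction, so the correction only rescales the deviatoric stress, leaving the hydrostatic part untouched.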
Numerical assessment of accurate measurements of laminar flame speed
NASA Astrophysics Data System (ADS)
Goulier, Joules; Bizon, Katarzyna; Chaumeix, Nabiha; Meynet, Nicolas; Continillo, Gaetano
2016-12-01
In combustion, the laminar flame speed constitutes an important parameter that reflects the chemistry of oxidation for a given fuel, along with its transport and thermal properties. Laminar flame speeds are used (i) in turbulent models used in CFD codes, and (ii) to validate detailed or reduced mechanisms, often derived from studies using ideal reactors and in diluted conditions, as in jet stirred reactors and in shock tubes. End-users of such mechanisms need an assessment of their capability to predict the correct heat released by combustion in realistic conditions. In this view, the laminar flame speed constitutes a very convenient parameter, and it is then very important to have a good knowledge of the experimental errors involved in its determination. Stationary configurations (Bunsen burners, counter-flow flames, heat flux burners) or moving flames (tubes, spherical vessels, soap bubbles) can be used. The spherical expanding flame configuration has recently become popular, since it can be used at high pressures and temperatures. With this method, the flame speed is not measured directly, but derived through the recording of the flame radius. The method used to process the radius history will have an impact on the estimated flame speed. The aim of this work is to propose a way to derive the laminar flame speed from experimental recordings of expanding flames, and to assess the error magnitude.
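One common way to process a spherical-flame radius history, sketched below, is to take the flame speed as Sb = dR/dt, the stretch rate as K = (2/R) dR/dt, and extrapolate linearly to zero stretch, Sb = Sb0 - Lb*K. This linear extrapolation is only one of the processing choices whose influence the abstract sets out to assess; nonlinear extrapolations give different answers.

```python
import numpy as np

def flame_speed_from_radius(t, R):
    # Burned-gas flame speed Sb = dR/dt and stretch rate K = (2/R) dR/dt
    # from a radius history, followed by a linear extrapolation
    # Sb = Sb0 - Lb*K to zero stretch (one common processing choice).
    Sb = np.gradient(R, t)
    K = 2.0 / R * Sb
    slope, Sb0 = np.polyfit(K, Sb, 1)
    return Sb0, -slope       # unstretched flame speed Sb0 and Markstein length Lb
```

Applied to a synthetic radius history generated with known Sb0 and Lb, the fit should recover both values, which is a useful self-test before processing real Schlieren recordings.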
Passive samplers accurately predict PAH levels in resident crayfish.
Paulik, L Blair; Smith, Brian W; Bergmann, Alan J; Sower, Greg J; Forsberg, Norman D; Teeguarden, Justin G; Anderson, Kim A
2016-02-15
Contamination of resident aquatic organisms is a major concern for environmental risk assessors. However, collecting organisms to estimate risk is often prohibitively time- and resource-intensive. Passive sampling accurately estimates resident organism contamination, and it saves time and resources. This study used low-density polyethylene (LDPE) passive water samplers to predict polycyclic aromatic hydrocarbon (PAH) levels in signal crayfish, Pacifastacus leniusculus. Resident crayfish were collected at 5 sites within and outside of the Portland Harbor Superfund Megasite (PHSM) in the Willamette River in Portland, Oregon. LDPE deployment was spatially and temporally paired with crayfish collection. Crayfish visceral and tail tissue, as well as water-deployed LDPE, were extracted and analyzed for 62 PAHs using GC-MS/MS. Freely-dissolved concentrations (Cfree) of PAHs in water were calculated from concentrations in LDPE. Carcinogenic risks were estimated for all crayfish tissues using benzo[a]pyrene equivalent concentrations (BaPeq). ∑PAH were 5-20 times higher in viscera than in tails, and ∑BaPeq were 6-70 times higher in viscera than in tails. Eating only tail tissue of crayfish would therefore significantly reduce carcinogenic risk compared to also eating viscera. Additionally, PAH levels in crayfish were compared to levels in crayfish collected 10 years earlier. PAH levels in crayfish were higher upriver of the PHSM and unchanged within the PHSM after the 10-year period. Finally, a linear regression model predicted levels of 34 PAHs in crayfish viscera with an associated R-squared value of 0.52 (and a correlation coefficient of 0.72), using only the Cfree PAHs in water. On average, the model predicted PAH concentrations in crayfish tissue within a factor of 2.4 ± 1.8 of measured concentrations. This affirms that passive water sampling accurately estimates PAH contamination in crayfish. Furthermore, the strong predictive ability of this simple model suggests
Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges
2014-04-01
Considerable progress has been made in understanding implant wear and in developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. Under in-vivo conditions, however, the contact area is a time-varying quantity and therefore depends on the dynamic deformation response of the material. From this observation one can conclude that creep deformation of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformation has a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE led to compressive deformations of the insert much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This again shows the importance of including creep behaviour in a constitutive model in order to predict the right level of surface deformation
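For reference, the Archard law mentioned above is simple to state in incremental form; a sketch that accumulates wear over discrete increments (so that time-varying pressures and contact areas, e.g. due to creep, can be fed in) might look like the following. The wear factor and units are illustrative, not taken from the study:

```python
def archard_wear(k, pressures, sliding, areas):
    """Incremental Archard wear: dV = k * P * ds, accumulated over a sequence
    of increments so that time-varying contact conditions can be represented.
    k: wear factor [mm^3/(N*mm)] (hypothetical value in the test below),
    pressures: contact pressure per increment [MPa],
    sliding: sliding distance per increment [mm],
    areas: contact area per increment [mm^2]."""
    volume = 0.0
    for p, s, a in zip(pressures, sliding, areas):
        volume += k * (p * a) * s   # normal load P = p * a for this increment
    return volume
```

The study's point is that a creep-capturing material model changes the `pressures` and `areas` sequences substantially, and hence the predicted wear volume, relative to a time-invariant elastic assumption.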
The quiet revolution of numerical weather prediction
NASA Astrophysics Data System (ADS)
Bauer, Peter; Thorpe, Alan; Brunet, Gilbert
2015-09-01
Advances in numerical weather prediction represent a quiet revolution because they have resulted from a steady accumulation of scientific knowledge and technological advances over many years that, with only a few exceptions, have not been associated with the aura of fundamental physics breakthroughs. Nonetheless, the impact of numerical weather prediction is among the greatest of any area of physical science. As a computational problem, global weather prediction is comparable to the simulation of the human brain and of the evolution of the early Universe, and it is performed every day at major operational centres across the world.
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time-accurate, general-purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid-point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction-correction method that is simple to implement and ensures the time accuracy of the grid. Time-accurate solutions of the 2-D Euler equations for an unsteady shock-vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
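The core idea of moving grid points toward severe solution gradients can be illustrated with one-dimensional equidistribution of an arc-length monitor function. This is a generic building block of adaptive grid methods, not the paper's multidimensional prediction-correction scheme:

```python
import numpy as np

def equidistribute(x, u):
    """Redistribute 1-D grid nodes so that the arc-length monitor
    w = sqrt(1 + (du/dx)^2) is equidistributed, concentrating nodes
    where the solution gradient is severe. Illustrative only."""
    dudx = np.gradient(u, x)
    w = np.sqrt(1.0 + dudx**2)                  # monitor function
    # cumulative monitor integral (trapezoid rule)
    s = np.concatenate([[0.0],
                        np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, s[-1], len(x))   # equal monitor increments
    return np.interp(targets, s, x)             # new node locations
```

Applied to a steep front such as a shock profile, the redistributed grid clusters nodes around the front while keeping the mapping smooth, which is the property the abstract emphasizes.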
Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations
NASA Astrophysics Data System (ADS)
Bowman, J.; Jensen, S.; McDonald, Mark
2010-10-01
High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates
Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.
2013-03-07
In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng
Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models are studied: the one-equation eddy-viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model and the Menter SST model. For the k-ω and SST models, compressibility correction, pressure dilatation and low-Reynolds-number correction are considered, and the influence of these corrections on flow properties is discussed by comparison with results obtained without them. The emphasis is on the assessment and evaluation of the turbulence models in the prediction of heat transfer across a range of hypersonic flows, with comparison to experimental data. This will enable establishing a factor of safety for the design of thermal protection systems of hypersonic vehicles.
Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics
Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.
2015-01-01
Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results
IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, whose need is increased by the rapid evolution of space programs and by the necessity of adapting them to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance in the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, a consolidation can be acquired through a specific analysis activity involving several techniques and implying additional effort and time. The present empirical approach thus yields only approximate values (i.e. not necessarily accurate or consistent), inducing inaccuracy in the results and, consequently, difficulties in ranking the performance of multiple options, as well as an increase in processing time. This is a classical harsh fact of preliminary design system studies, insufficiently discussed to date. It therefore appears highly desirable to have, for all evaluation activities, a reliable, fast and easy-to-use weight or mass-fraction prediction method. Additionally, the latter should allow a pre-selection of alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, expressed from a limited number of parameters available in the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator
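The kind of parametric statistical model the abstract describes, a polynomial formulation of the mass fraction in a few design parameters, can be sketched as an ordinary least-squares fit. The IRIS polynomial generator itself is not described in detail here, so the basis choice below (per-parameter powers plus an intercept) is an assumption:

```python
import numpy as np

def fit_mass_fraction(params, smf, degree=2):
    """Least-squares fit of stage structural mass fraction as a polynomial
    in design parameters. params: (n, p) array of parameter values,
    smf: (n,) observed mass fractions. Illustrative sketch only; the
    actual IRIS basis and parameter set are assumptions here."""
    n, p = params.shape
    cols = [np.ones(n)]                      # intercept
    for j in range(p):
        for d in range(1, degree + 1):
            cols.append(params[:, j] ** d)   # per-parameter powers
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, smf, rcond=None)
    return coef, A @ coef                    # coefficients and fitted values
```

With the observed 0.05-0.15 scatter of the mass fraction, such a fit gives the fast, repeatable pre-selection estimate the abstract argues for, at the cost of whatever variance the chosen parameters cannot explain.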
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1988-01-01
This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating is demonstrated for the well-known Danilovskaya problem. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special-purpose transfinite elements in conjunction with classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and the superior capability to capture the thermal stress waves induced by boundary heating.
A numerical method for predicting hypersonic flowfields
NASA Technical Reports Server (NTRS)
Maccormack, Robert W.; Candler, Graham V.
1989-01-01
The flow about a body traveling at hypersonic speed is energetic enough to cause the atmospheric gases to chemically react and reach states of thermal nonequilibrium. The prediction of hypersonic flowfields requires a numerical method capable of solving the conservation equations of fluid flow, the chemical rate equations for species formation and dissociation, and the relations for energy transfer between translational and vibrational temperature states. Because the number of equations to be solved is large, the numerical method should also be as efficient as possible. The paper presents a fully implicit method that fully couples the solution of the fluid flow equations with the gas physics and chemistry relations. The method flux-splits the inviscid flow terms, central-differences the viscous terms, preserves element conservation in the strong chemistry source terms, and solves the resulting block matrix equation by Gauss-Seidel line relaxation.
Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin
2015-01-01
Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale. PMID:26198229
Prediction of Preoperative Anxiety in Children: Who is Most Accurate?
MacLaren, Jill E.; Thompson, Caitlin; Weinberg, Megan; Fortier, Michelle A.; Morrison, Debra E.; Perret, Danielle; Kain, Zeev N.
2009-01-01
Background In this investigation, we sought to assess the ability of pediatric attending anesthesiologists, resident anesthesiologists and mothers to predict anxiety during induction of anesthesia in 2 to 16-year-old children (n=125). Methods Anesthesiologists and mothers provided predictions using a visual analog scale, and children's anxiety was assessed using a validated behavioral observation tool, the Modified Yale Preoperative Anxiety Scale (mYPAS). All mothers were present during anesthetic induction and no child received sedative premedication. Correlational analyses were conducted. Results A total of 125 children aged 2 to 16 years, their mothers, and their attending pediatric anesthesiologists and resident anesthesiologists were studied. Correlational analyses revealed significant associations between attending predictions and child anxiety at induction (rs= 0.38, p<0.001). Resident anesthesiologist and mother predictions were not significantly related to children's anxiety during induction (rs = 0.01 and 0.001, respectively). In terms of accuracy of prediction, 47.2% of predictions made by attending anesthesiologists were within one standard deviation of the observed anxiety exhibited by the child, and 70.4% of predictions were within 2 standard deviations. Conclusions We conclude that attending anesthesiologists who practice in pediatric settings are better than mothers at predicting the anxiety of children during induction of anesthesia. While this finding has significant clinical implications, it is unclear if it can be extended to attending anesthesiologists whose practice is not mostly pediatric anesthesia. PMID:19448201
Is Three-Dimensional Soft Tissue Prediction by Software Accurate?
Nam, Ki-Uk; Hong, Jongrak
2015-11-01
The authors assessed whether virtual surgery, performed with a soft tissue prediction program, could correctly simulate the actual surgical outcome, focusing on soft tissue movement. Preoperative and postoperative computed tomography (CT) data for 29 patients who had undergone orthognathic surgery were obtained and analyzed using the Simplant Pro software. The program made a predicted soft tissue image (A) based on presurgical CT data. After the operation, we obtained actual postoperative CT data, from which an actual soft tissue image (B) was generated. Finally, the 2 images (A and B) were superimposed and the differences between A and B were analyzed. Results were grouped in 2 classes: absolute values and vector values. In the absolute values, the left mouth corner was the most significant error point (2.36 mm). The right mouth corner (2.28 mm), labrale inferius (2.08 mm), and the pogonion (2.03 mm) also had significant errors. In the vector values, right-left predictions had a leftward tendency, superior-inferior predictions a superior tendency, and anterior-posterior predictions an anterior tendency. As a result, with this program, the predicted positions of points tended to be located more to the left, more anterior, and more superior than in the actual outcome. There is a need to improve the prediction accuracy for soft tissue images. Such software is particularly valuable in predicting craniofacial soft tissue landmarks, such as the pronasale. With this software, landmark positions were most inaccurate in terms of anterior-posterior predictions.
Fast and accurate automatic structure prediction with HHpred.
Hildebrand, Andrea; Remmert, Michael; Biegert, Andreas; Söding, Johannes
2009-01-01
Automated protein structure prediction is becoming a mainstream tool for biological research. This has been fueled by steady improvements of publicly available automated servers over the last decade, in particular their ability to build good homology models for an increasing number of targets by reliably detecting and aligning more and more remotely homologous templates. Here, we describe the three fully automated versions of the HHpred server that participated in the community-wide blind protein structure prediction competition CASP8. What makes HHpred unique is the combination of usability, short response times (typically under 15 min) and a model accuracy that is competitive with those of the best servers in CASP8.
NASA Astrophysics Data System (ADS)
Campforts, Benjamin; Schwanghart, Wolfgang; Govers, Gerard
2017-01-01
Landscape evolution models (LEMs) allow the study of earth surface responses to changing climatic and tectonic forcings. While much effort has been devoted to the development of LEMs that simulate a wide range of processes, the numerical accuracy of these models has received less attention. Most LEMs use first-order accurate numerical methods that suffer from substantial numerical diffusion. Numerical diffusion particularly affects the solution of the advection equation and thus the simulation of retreating landforms such as cliffs and river knickpoints. This has potential consequences for the integrated response of the simulated landscape. Here we test a higher-order flux-limiting finite volume method that is total variation diminishing (TVD-FVM) to solve the partial differential equations of river incision and tectonic displacement. We show that using the TVD-FVM to simulate river incision significantly influences the evolution of simulated landscapes and the spatial and temporal variability of catchment-wide erosion rates. Furthermore, a two-dimensional TVD-FVM accurately simulates the evolution of landscapes affected by lateral tectonic displacement, a process whose simulation was hitherto largely limited to LEMs with flexible spatial discretization. We implement the scheme in TTLEM (TopoToolbox Landscape Evolution Model), a spatially explicit, raster-based LEM for the study of fluvially eroding landscapes in TopoToolbox 2.
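The advection problem that motivates the TVD-FVM, propagating a sharp front (such as a retreating knickpoint) without smearing it by numerical diffusion, can be illustrated with a one-dimensional flux-limited scheme. This is a generic minmod-limited MUSCL sketch, not the TTLEM implementation:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_tvd(u, c, dx, dt, nsteps):
    """Advance du/dt + c du/dx = 0 (c > 0) with a second-order TVD
    finite-volume scheme: limited MUSCL reconstruction, upwind flux,
    periodic boundaries. Requires CFL number c*dt/dx <= 1."""
    u = u.copy()
    nu = c * dt / dx                                     # CFL number
    for _ in range(nsteps):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        u_face = u + 0.5 * (1 - nu) * slope              # value at right face
        flux = c * u_face                                # upwind flux at i+1/2
        u = u - dt / dx * (flux - np.roll(flux, 1))      # conservative update
    return u
```

A first-order upwind scheme would diffuse a step profile noticeably over the same distance; the limited scheme keeps the front sharp while remaining free of the spurious oscillations an unlimited second-order scheme would produce, which is the total-variation-diminishing property the abstract exploits.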
Accurate perception of negative emotions predicts functional capacity in schizophrenia.
Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J
2014-04-30
Several studies suggest that facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions was not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration.
Towards Accurate Ab Initio Predictions of the Spectrum of Methane
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Kwak, Dochan (Technical Monitor)
2001-01-01
We have carried out extensive ab initio calculations of the electronic structure of methane, and these results are used to compute vibrational energy levels. We include basis set extrapolations, core-valence correlation, relativistic effects, and Born-Oppenheimer breakdown terms in our calculations. Our ab initio predictions of the lowest lying levels are superb.
Accurate Theoretical Prediction of the Properties of Energetic Materials
2007-11-02
calculations (e.g. Cheetah). 8. Sensitivity. The structure prediction and lattice potential work will serve as a platform to examine impact/shock... nitromethane molecules. (In an extension of the present work, we will freeze the internal coordinates of the molecules and assess the extent to which the
Learning regulatory programs that accurately predict differential expression with MEDUSA.
Kundaje, Anshul; Lianoglou, Steve; Li, Xuejing; Quigley, David; Arias, Marta; Wiggins, Chris H; Zhang, Li; Leslie, Christina
2007-12-01
Inferring gene regulatory networks from high-throughput genomic data is one of the central problems in computational biology. In this paper, we describe a predictive modeling approach for studying regulatory networks, based on a machine learning algorithm called MEDUSA. MEDUSA integrates promoter sequence, mRNA expression, and transcription factor occupancy data to learn gene regulatory programs that predict the differential expression of target genes. Instead of using clustering or correlation of expression profiles to infer regulatory relationships, MEDUSA determines condition-specific regulators and discovers regulatory motifs that mediate the regulation of target genes. In this way, MEDUSA meaningfully models biological mechanisms of transcriptional regulation. MEDUSA solves the problem of predicting the differential (up/down) expression of target genes by using boosting, a technique from statistical learning, which helps to avoid overfitting as the algorithm searches through the high-dimensional space of potential regulators and sequence motifs. Experimental results demonstrate that MEDUSA achieves high prediction accuracy on held-out experiments (test data), that is, data not seen in training. We also present context-specific analysis of MEDUSA regulatory programs for DNA damage and hypoxia, demonstrating that MEDUSA identifies key regulators and motifs in these processes. A central challenge in the field is the difficulty of validating reverse-engineered networks in the absence of a gold standard. Our approach of learning regulatory programs provides at least a partial solution for the problem: MEDUSA's prediction accuracy on held-out data gives a concrete and statistically sound way to validate how well the algorithm performs. With MEDUSA, statistical validation becomes a prerequisite for hypothesis generation and network building rather than a secondary consideration.
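The boosting strategy MEDUSA builds on can be illustrated with a minimal AdaBoost over decision stumps. The two-feature data and up/down labels below are toy stand-ins, not MEDUSA's actual promoter-sequence or expression inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def stump(X, feat, thresh, sign):
    """A one-rule weak learner: predict +/-1 from a single feature threshold."""
    return sign * np.where(X[:, feat] > thresh, 1, -1)

def adaboost(X, y, rounds=30):
    n = len(y)
    w = np.full(n, 1.0 / n)                  # example weights, updated each round
    ensemble = []
    for _ in range(rounds):
        best = None
        for feat in range(X.shape[1]):       # exhaustive search for the best stump
            for thresh in np.unique(X[:, feat]):
                for sign in (1, -1):
                    err = w[stump(X, feat, thresh, sign) != y].sum()
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, sign)
        err, feat, thresh, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * stump(X, feat, thresh, sign))
        w /= w.sum()                         # hard examples gain weight
        ensemble.append((alpha, feat, thresh, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * stump(X, f, t, s) for a, f, t, s in ensemble)
    return np.where(score >= 0, 1, -1)

# toy stand-in for up/down expression labels driven by two "regulator" features
X = rng.uniform(0, 1, size=(120, 2))
y = np.where((X[:, 0] > 0.5) | (X[:, 1] > 0.5), 1, -1)
model = adaboost(X, y)
acc = np.mean(predict(model, X) == y)
```

No single stump can represent this OR-style rule, but the weighted vote of many stumps can, which is the sense in which boosting searches a high-dimensional space of weak rules while controlling overfitting.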
Standardized EEG interpretation accurately predicts prognosis after cardiac arrest
Rossetti, Andrea O.; van Rootselaar, Anne-Fleur; Wesenberg Kjaer, Troels; Horn, Janneke; Ullén, Susann; Friberg, Hans; Nielsen, Niklas; Rosén, Ingmar; Åneman, Anders; Erlinge, David; Gasche, Yvan; Hassager, Christian; Hovdenes, Jan; Kjaergaard, Jesper; Kuiper, Michael; Pellis, Tommaso; Stammet, Pascal; Wanscher, Michael; Wetterslev, Jørn; Wise, Matt P.; Cronberg, Tobias
2016-01-01
Objective: To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. Methods: In this cohort study, 4 EEG specialists, blinded to outcome, evaluated prospectively recorded EEGs in the Target Temperature Management trial (TTM trial) that randomized patients to 33°C vs 36°C. Routine EEG was performed in patients still comatose after rewarming. EEGs were classified into highly malignant (suppression, suppression with periodic discharges, burst-suppression), malignant (periodic or rhythmic patterns, pathological or nonreactive background), and benign EEG (absence of malignant features). Poor outcome was defined as best Cerebral Performance Category score 3–5 until 180 days. Results: Eight TTM sites randomized 202 patients. EEGs were recorded in 103 patients at a median 77 hours after cardiac arrest; 37% had a highly malignant EEG and all had a poor outcome (specificity 100%, sensitivity 50%). Any malignant EEG feature had a low specificity to predict poor prognosis (48%) but if 2 malignant EEG features were present specificity increased to 96% (p < 0.001). Specificity and sensitivity were not significantly affected by targeted temperature or sedation. A benign EEG was found in 1% of the patients with a poor outcome. Conclusions: Highly malignant EEG after rewarming reliably predicted poor outcome in half of patients without false predictions. An isolated finding of a single malignant feature did not predict poor outcome whereas a benign EEG was highly predictive of a good outcome. PMID:26865516
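The reported sensitivity and specificity follow from the standard confusion-matrix definitions. The counts below are reconstructed from the abstract's figures (38 of 103 patients with a highly malignant EEG, all with poor outcome; 50% sensitivity implies 76 poor-outcome patients, leaving 27 with good outcome) and are illustrative:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# highly malignant EEG as predictor of poor outcome:
# 38 of 76 poor-outcome patients positive, 0 of 27 good-outcome patients positive
sens, spec = sensitivity_specificity(tp=38, fn=38, tn=27, fp=0)
```

With zero false positives the specificity is exactly 1.0 (no false predictions of poor outcome), while the sensitivity is 0.5, matching the reported values.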
How Accurately Can We Predict Eclipses for Algol? (Poster abstract)
NASA Astrophysics Data System (ADS)
Turner, D.
2016-06-01
(Abstract only) beta Persei, or Algol, is a very well known eclipsing binary system consisting of a late B-type dwarf that is regularly eclipsed by a GK subgiant every 2.867 days. Eclipses, which last about 8 hours, are regular enough that predictions for times of minima are published in various places, Sky & Telescope magazine and The Observer's Handbook, for example. But eclipse minimum lasts for less than a half hour, whereas subtle mistakes in the current ephemeris for the star can result in predictions that are off by a few hours or more. The Algol system is fairly complex, with the Algol A and Algol B eclipsing system also orbited by Algol C with an orbital period of nearly 2 years. Added to that are complex long-term O-C variations with a periodicity of almost two centuries that, although suggested by Hoffmeister to be spurious, fit the type of light travel time variations expected for a fourth star also belonging to the system. The AB sub-system also undergoes mass transfer events that add complexities to its O-C behavior. Is it actually possible to predict precise times of eclipse minima for Algol months in advance given such complications, or is it better to encourage ongoing observations of the star so that O-C variations can be tracked in real time?
Predictive rendering for accurate material perception: modeling and rendering fabrics
NASA Astrophysics Data System (ADS)
Bala, Kavita
2012-03-01
In computer graphics, rendering algorithms are used to simulate the appearance of objects and materials in a wide range of applications. Designers and manufacturers rely entirely on these rendered images to previsualize scenes and products before manufacturing them. They need to differentiate between different types of fabrics, paint finishes, plastics, and metals, often with subtle differences, for example, between silk and nylon, or Formica and wood. Thus, these applications need predictive algorithms that can produce high-fidelity images that enable such subtle material discrimination.
NASA Technical Reports Server (NTRS)
VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.
2000-01-01
The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because these kinds of measurements are rare in the detail necessary to be useful in high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today (a common example is the actuator disk concept) are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal-axis, devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
Objective criteria accurately predict amputation following lower extremity trauma.
Johansen, K; Daines, M; Howey, T; Helfet, D; Hansen, S T
1990-05-01
MESS (Mangled Extremity Severity Score) is a simple rating scale for lower extremity trauma, based on skeletal/soft-tissue damage, limb ischemia, shock, and age. Retrospective analysis of severe lower extremity injuries in 25 trauma victims demonstrated a significant difference between MESS values for 17 limbs ultimately salvaged (mean, 4.88 +/- 0.27) and nine requiring amputation (mean, 9.11 +/- 0.51) (p less than 0.01). A prospective trial of MESS in lower extremity injuries managed at two trauma centers again demonstrated a significant difference between MESS values of 14 salvaged (mean, 4.00 +/- 0.28) and 12 doomed (mean, 8.83 +/- 0.53) limbs (p less than 0.01). In both the retrospective survey and the prospective trial, a MESS value greater than or equal to 7 predicted amputation with 100% accuracy. MESS may be useful in selecting trauma victims whose irretrievably injured lower extremities warrant primary amputation.
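The decision rule reported above is a simple threshold on the total score; a minimal sketch (the scores below are hypothetical, not patient data from the study):

```python
def mess_predicts_amputation(mess_score, threshold=7):
    """MESS >= 7 predicted amputation with 100% accuracy in both cohorts."""
    return mess_score >= threshold

# hypothetical limb scores illustrating the decision rule
scores = [4, 5, 6, 7, 8, 9, 11]
flagged = [mess_predicts_amputation(s) for s in scores]
```

Scores of 4 to 6 fall in the range of the salvaged limbs reported above, while 7 and higher fall on the amputation side of the threshold.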
Seth A Veitzer
2008-10-21
Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in a HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.
Towards numerically accurate many-body perturbation theory: Short-range correlation effects
Gulans, Andris
2014-10-28
The example of the uniform electron gas is used for showing that the short-range electron correlation is difficult to handle numerically, while it noticeably contributes to the self-energy. Nonetheless, in condensed-matter applications studied with advanced methods, such as the GW and random-phase approximations, it is common to neglect contributions due to high-momentum (large q) transfers. Then, the short-range correlation is poorly described, which leads to inaccurate correlation energies and quasiparticle spectra. To circumvent this problem, an accurate extrapolation scheme is proposed. It is based on an analytical derivation for the uniform electron gas presented in this paper, and it provides an explanation why accurate GW quasiparticle spectra are easy to obtain for some compounds and very difficult for others.
Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations
2011-09-30
Curtis D. Mobley, Sequoia Scientific, Inc., 2700 Richards Road, Suite 107, Bellevue, WA 98005. The aim is to incorporate extremely fast but accurate light calculations into coupled physical-biological-optical ocean ecosystem models as used for operational three-dimensional ecosystem predictions. Improvements in light calculations lead to improvements in predictions of chlorophyll concentrations and other
Generating highly accurate prediction hypotheses through collaborative ensemble learning
Arsov, Nino; Pavlovski, Martin; Basnarkov, Lasko; Kocarev, Ljupco
2017-01-01
Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their predictive capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Since the former improves stability through variance reduction, while the latter ameliorates overfitting, the outcome of a multi-model that combines both strives toward a comprehensive net-balancing of the bias-variance trade-off. To further improve this, we alter the bagged-boosting scheme by introducing collaboration between the multi-model’s constituent learners at various levels. This novel stability-guided classification scheme is delivered in two flavours: during or after the boosting process. Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets. In both cases, our models obtained a 40% generalization error decrease. But their true ability to capture details in data was revealed through their application for protein detection in texture analysis of gel electrophoresis images. They achieve improved performance of approximately 0.9773 AUROC when compared to the AUROC of 0.9574 obtained by an SVM based on recursive feature elimination. PMID:28304378
Accurate predictions for the production of vaporized water
Morin, E.; Montel, F.
1995-12-31
The production of water vaporized in the gas phase is controlled by the local conditions around the wellbore. The pressure gradient applied to the formation creates a sharp increase of the molar water content in the hydrocarbon phase approaching the well; this leads to a drop in the pore water saturation around the wellbore. The extent of the dehydrated zone which is formed is the key controlling the bottom-hole content of vaporized water. The maximum water content in the hydrocarbon phase at a given pressure, temperature and salinity is corrected by capillarity or adsorption phenomena depending on the actual water saturation. Describing the mass transfer of the water between the hydrocarbon phases and the aqueous phase into the tubing gives a clear idea of vaporization effects on the formation of scales. Field examples are presented for gas fields with temperatures ranging between 140°C and 180°C, where water vaporization effects are significant. Conditions for salt plugging in the tubing are predicted.
Accurate prediction of wall shear stress in a stented artery: newtonian versus non-newtonian models.
Mejia, Juan; Mongrain, Rosaire; Bertrand, Olivier F
2011-07-01
A significant amount of evidence linking wall shear stress to neointimal hyperplasia has been reported in the literature. As a result, numerical and experimental models have been created to study the influence of stent design on wall shear stress. Traditionally, blood has been assumed to behave as a Newtonian fluid, but recently that assumption has been challenged. The use of a linear model, however, can reduce computational cost and allow the use of Newtonian fluids (e.g., glycerine and water) instead of a blood analog fluid in an experimental setup. Therefore, it is of interest whether a linear model can be used to accurately predict the wall shear stress caused by a non-Newtonian fluid such as blood within a stented arterial segment. The present work compares the resulting wall shear stress obtained using two linear and one nonlinear model under the same flow waveform. All numerical models are fully three-dimensional, transient, and incorporate a realistic stent geometry. It is shown that traditional linear models (based on blood's lowest viscosity limit, 3.5 mPa·s) underestimate the wall shear stress within a stented arterial segment, which can lead to an overestimation of the risk of restenosis. The second linear model, which uses a characteristic viscosity (based on an average strain rate, 4.7 mPa·s), results in higher wall shear stress levels, but these are still substantially below those of the nonlinear model. It is therefore shown that nonlinear models result in more accurate predictions of wall shear stress within a stented arterial segment.
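The gap between a low-viscosity-limit linear model and a nonlinear model can be illustrated with a Carreau viscosity law, a common choice for blood. The parameters below are typical literature values assumed for illustration; the abstract does not specify the paper's exact nonlinear model:

```python
# Carreau parameters often used for blood (assumed here for illustration)
mu0, mu_inf = 0.056, 0.00345        # Pa*s: low-shear and high-shear viscosity limits
lam, n_idx = 3.313, 0.3568          # relaxation time (s) and power-law index

def mu_carreau(gamma):
    """Shear-rate-dependent viscosity, mu(gamma) in Pa*s."""
    return mu_inf + (mu0 - mu_inf) * (1 + (lam * gamma) ** 2) ** ((n_idx - 1) / 2)

def wss(gamma, mu):
    """Wall shear stress tau = mu * gamma for a given shear rate (1/s)."""
    return mu * gamma

gamma_low, gamma_high = 10.0, 1000.0
tau_newt_low = wss(gamma_low, mu_inf)                    # Newtonian, low-viscosity limit
tau_carr_low = wss(gamma_low, mu_carreau(gamma_low))     # non-Newtonian
```

At a low shear rate of 10 1/s the Newtonian low-limit model underestimates the non-Newtonian wall shear stress by more than a factor of two, while at 1000 1/s the two nearly coincide, which is why the discrepancy matters most in the slow, recirculating flow near stent struts.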
An accurate solution of elastodynamic problems by numerical local Green's functions
NASA Astrophysics Data System (ADS)
Loureiro, F. S.; Silva, J. E. A.; Mansur, W. J.
2015-09-01
Green's function based methodologies for elastodynamics in both time and frequency domains, which can be either numerical or analytical, appear in many branches of physics and engineering. Thus, the development of exact expressions for Green's functions is of great importance. Unfortunately, such expressions are known only for relatively few kinds of geometry, medium and boundary conditions. Due to the difficulty in finding exact Green's functions, especially in the time domain, the present paper presents a solution of the transient elastodynamic equations by a time-stepping technique based on the Explicit Green's Approach method, written in terms of the Green's and step response functions, both computed numerically by the finite element method. The major feature is the computation of these functions separately, by the central difference time integration scheme, and locally, owing to the principle of causality. More precisely, Green's functions are computed only at t = Δt adopting two time substeps, while step response functions are computed directly without substeps. The proposed time-stepping method is shown to be quite accurate, with distinct numerical properties not present in the standard central difference scheme, as addressed in the numerical example.
Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations
Bao, Weizhu (E-mail: bao@math.nus.edu.sg); Yang, Li (E-mail: yangli@nus.edu.sg)
2007-08-10
In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are based on: (i) the application of a time-splitting spectral discretization for a Schroedinger-type equation in KGS; (ii) the utilization of Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; (iii) the adoption of solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for linear/nonlinear terms for time derivatives. The numerical methods are either explicit or implicit but can be solved explicitly, are unconditionally stable, and are of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as that in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as dynamics of a 2D problem in KGS.
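The time-splitting spectral idea in (i) can be sketched for a single linear Schroedinger equation with an assumed harmonic potential (not the full KGS coupling). Each Strang substep is unitary, so the wave-function norm is conserved to rounding error, mirroring the conservation properties claimed for the KGS schemes:

```python
import numpy as np

n, L = 256, 40.0
dt, steps = 0.01, 200
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # spectral wavenumbers
V = 0.5 * x ** 2                              # assumed harmonic potential
u = np.exp(-x ** 2).astype(complex)           # initial wave packet

expV = np.exp(-0.5j * dt * V)                 # half-step potential propagator
expK = np.exp(-0.5j * dt * k ** 2)            # full-step kinetic propagator (Fourier)

dx = L / n
norm0 = np.sqrt(np.sum(np.abs(u) ** 2) * dx)
for _ in range(steps):                        # Strang splitting: V/2, K, V/2
    u = expV * u
    u = np.fft.ifft(expK * np.fft.fft(u))
    u = expV * u
norm1 = np.sqrt(np.sum(np.abs(u) ** 2) * dx)
```

Because every factor has unit modulus and the FFT/IFFT pair is unitary, the scheme is unconditionally stable and norm-conserving regardless of the time step, with spectral accuracy in space.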
Change in BMI Accurately Predicted by Social Exposure to Acquaintances
Oloritun, Rahman O.; Ouarda, Taha B. M. J.; Moturu, Sai; Madan, Anmol; Pentland, Alex (Sandy); Khayal, Inas
2013-01-01
Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, combining social interaction data automatically captured via mobile phones with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R2. This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, and personal health-related information to explain change in BMI. This is the first study taking into account both interactions across different levels of social closeness and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also the significance of social interactions with people we are exposed to, even people we may not consider as close friends. PMID
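The LASSO selection step can be sketched with a small coordinate-descent implementation on synthetic data; the predictors here are generic stand-ins, not the study's social-exposure variables:

```python
import numpy as np

rng = np.random.default_rng(0)

def lasso_cd(X, y, alpha, sweeps=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + alpha * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]     # residual excluding feature j
            rho = X[:, j] @ r
            # soft-thresholding: small correlations are set exactly to zero
            beta[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
    return beta

# synthetic stand-in: 5 candidate predictors, only the first two matter
n = 200
X = rng.standard_normal((n, 5))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.standard_normal(n)
beta = lasso_cd(X, y, alpha=0.1)
```

The soft-threshold update drives the coefficients of the three irrelevant predictors exactly to zero while retaining (slightly shrunken) coefficients for the two informative ones, which is the variable-selection behavior the study relies on.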
Objective calibration of numerical weather prediction models
NASA Astrophysics Data System (ADS)
Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.
2017-07-01
Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multi-variate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implement the methodology in an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted only to the use of higher resolution and different time scales. The sensitivity of the NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the amount of computing resources required for the calibration of an NWP model. Three free model parameters affecting mainly turbulence parameterization schemes were originally selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the calibration is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or when the same model implementation is customized for different climatological areas.
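The quadratic meta-model idea can be sketched as: fit a second-order surface to the forecast error at a few sampled parameter settings, then minimize the cheap surrogate instead of running the model again. The error function below is a hypothetical stand-in for expensive NWP runs:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical "true" forecast error as a function of two tuning parameters
def forecast_error(p):
    return 1.0 + 2.0 * (p[0] - 0.3) ** 2 + 1.5 * (p[1] - 0.7) ** 2 \
               + 0.5 * (p[0] - 0.3) * (p[1] - 0.7)

# design: a handful of (expensive) model runs at sampled parameter settings
P = rng.uniform(0, 1, size=(20, 2))
e = np.array([forecast_error(p) for p in P])

# quadratic meta-model: e ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2
def features(p):
    x1, x2 = p
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

F = np.array([features(p) for p in P])
coef, *_ = np.linalg.lstsq(F, e, rcond=None)

# minimize the fitted surface on a grid (cheap compared with more model runs)
g = np.linspace(0, 1, 101)
grid = np.array([[a, b] for a in g for b in g])
best = grid[np.argmin(np.array([features(p) for p in grid]) @ coef)]
```

Twenty sampled runs suffice to determine the six coefficients, after which the surrogate locates the optimal parameter pair without further model integrations; in practice noise and non-quadratic behavior make more careful sampling necessary.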
NASA Astrophysics Data System (ADS)
Garrison, Stephen L.
2005-07-01
The combination of molecular simulations and potentials obtained from quantum chemistry is shown to be able to provide reasonably accurate thermodynamic property predictions. Gibbs ensemble Monte Carlo simulations are used to understand the effects of small perturbations to various regions of the model Lennard-Jones 12-6 potential. However, when the phase behavior and second virial coefficient are scaled by the critical properties calculated for each potential, the results obey a corresponding states relation, suggesting a non-uniqueness problem for interaction potentials fit to experimental phase behavior. Several variations of a procedure collectively referred to as quantum mechanical Hybrid Methods for Interaction Energies (HM-IE) are developed and used to accurately estimate interaction energies from CCSD(T) calculations with a large basis set in a computationally efficient manner for the neon-neon, acetylene-acetylene, and nitrogen-benzene systems. Using these results and methods, an ab initio, pairwise-additive, site-site potential for acetylene is determined and then improved using results from molecular simulations using this initial potential. The initial simulation results also indicate that a limited range of energies is important for accurate phase behavior predictions. Second virial coefficients calculated from the improved potential indicate that one set of experimental data in the literature is likely erroneous. This prescription is then applied to methanethiol. Difficulties in modeling the effects of the lone pair electrons suggest that charges on the lone pair sites negatively impact the ability of the intermolecular potential to describe certain orientations, but that the lone pair sites may be necessary to reasonably duplicate the interaction energies for several orientations. Two possible methods for incorporating the effects of three-body interactions into simulations within the pairwise-additivity formulation are also developed. A low density
Sub-kilometer Numerical Weather Prediction in complex urban areas
NASA Astrophysics Data System (ADS)
Leroyer, S.; Bélair, S.; Husain, S.; Vionnet, V.
2013-12-01
A sub-kilometer atmospheric modeling system with grid spacings of 2.5 km, 1 km and 250 m and including urban processes is currently being developed at the Meteorological Service of Canada (MSC) in order to provide more accurate weather forecasts at the city scale. Atmospheric lateral boundary conditions are provided by the 15-km Canadian Regional Deterministic Prediction System (RDPS). Surface physical processes are represented with the Town Energy Balance (TEB) model for built-up covers and with the Interactions between the Surface, Biosphere, and Atmosphere (ISBA) land surface model for natural covers. In this study, several research experiments over large metropolitan areas and using observational networks at the urban scale are presented, with a special emphasis on the representation of local atmospheric circulations and their impact on extreme weather forecasting. First, numerical simulations are performed over the Vancouver metropolitan area during a summertime Intense Observing Period (IOP of 14-15 August 2008) of the Environmental Prediction in Canadian Cities (EPiCC) observational network. The influence of the horizontal resolution on the fine-scale representation of the sea-breeze development over the city is highlighted (Leroyer et al., 2013). Then, severe storm cases occurring in summertime within the Greater Toronto Area (GTA) are simulated. In view of supporting the 2015 Pan American and Parapan American Games to be held in the GTA, a dense observational network has recently been deployed over this region to support model evaluations at the urban and meso scales. In particular, simulations are conducted for the case of 8 July 2013, when exceptional rainfalls were recorded. Leroyer, S., S. Bélair, J. Mailhot, S.Z. Husain, 2013: Sub-kilometer Numerical Weather Prediction in an Urban Coastal Area: A case study over the Vancouver Metropolitan Area, submitted to Journal of Applied Meteorology and Climatology.
A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation
NASA Astrophysics Data System (ADS)
Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin
2016-07-01
In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model, a nonlinear singular ordinary differential equation on a semi-infinite interval. Using the quasilinearization method (QLM), the problem is converted into a sequence of linear ordinary differential equations. For the first time, the rational Euler (RE) and FRE functions have been constructed from the Euler polynomials. In addition, the equation is solved on the semi-infinite domain without truncating it to a finite one, by taking the FRE as basis functions for the collocation method. This reduces the solution of the problem to the solution of a system of algebraic equations. We demonstrate that the proposed algorithm is efficient for obtaining the values of y'(0), y(x) and y'(x). Comparison with other numerical and analytical solutions shows that the present solution is highly accurate.
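As a point of reference for the Thomas-Fermi problem described above, the classical benchmark quantity is the initial slope y'(0) ≈ -1.58807. The sketch below is not the paper's FRE collocation scheme; it is a plain shooting/bisection baseline for y'' = y^(3/2)/√x with y(0) = 1, y(∞) = 0, using a short series expansion to step off the singular origin (domain truncation at x = 20 and all step sizes are illustrative choices).

```python
import math

def rhs(x, y):
    # Thomas-Fermi: y'' = y^(3/2) / sqrt(x); clamp tiny negatives from roundoff
    return max(y, 0.0) ** 1.5 / math.sqrt(x)

def shoot(s, x_max=20.0, h=2e-3):
    """Integrate with y(0)=1, y'(0)=s. Returns +1 if the solution stays
    positive or diverges (slope too large), -1 if it crosses zero
    (slope too small)."""
    # step off the singular origin with the known series expansion
    x = 0.01
    y = 1.0 + s*x + (4.0/3.0)*x**1.5 + (2.0/5.0)*s*x**2.5 + (1.0/3.0)*x**3
    yp = s + 2.0*math.sqrt(x) + s*x**1.5 + x**2
    while x < x_max:
        # classical RK4 on the first-order system (y, y')
        k1y, k1p = yp, rhs(x, y)
        k2y, k2p = yp + 0.5*h*k1p, rhs(x + 0.5*h, y + 0.5*h*k1y)
        k3y, k3p = yp + 0.5*h*k2p, rhs(x + 0.5*h, y + 0.5*h*k2y)
        k4y, k4p = yp + h*k3p, rhs(x + h, y + h*k3y)
        y += h*(k1y + 2*k2y + 2*k3y + k4y)/6.0
        yp += h*(k1p + 2*k2p + 2*k3p + k4p)/6.0
        x += h
        if y < 0.0:
            return -1
        if y > 2.0:        # convex solution climbing back above 1: diverging
            return +1
    return +1

lo, hi = -2.0, -1.0        # bracket for the physical initial slope
for _ in range(30):
    mid = 0.5*(lo + hi)
    if shoot(mid) > 0:
        hi = mid           # stayed positive: slope must be more negative
    else:
        lo = mid           # crossed zero: slope too negative
y0_slope = 0.5*(lo + hi)
print(round(y0_slope, 3))  # known value: y'(0) = -1.58807
```

This reproduces the target value that spectral methods such as the FRE collocation aim to match to many more digits, at far lower cost.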
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for such a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining the radiation source and the exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through an interactive dialogue on an ordinary personal computer. The tools prepare human-body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure.
Numerical prediction of turbulent oscillating flow and associated heat transfer
NASA Technical Reports Server (NTRS)
Koehler, W. J.; Patankar, S. V.; Ibele, W. E.
1991-01-01
A crucial point for the further development of Stirling engines is the optimization of their heat exchangers, which operate under oscillatory flow conditions. It has been found that the most important thermodynamic uncertainties in Stirling engine designs for space power lie in the heat transfer between gas and metal in all engine components and in the pressure drop across the heat exchanger components. So far, performance codes cannot predict the power output of a Stirling engine accurately enough when applied across a wide variety of engines. Thus, there is a strong need for better performance codes. However, a performance code is not concerned with the details of the flow; this information must be provided externally. While analytical relationships exist for laminar oscillating flow, there has been hardly any information on transitional and turbulent oscillating flow that could be introduced into the performance codes. In 1986, a survey by Seume and Simon revealed that most Stirling engine heat exchangers operate in the transitional and turbulent regime. Consequently, research has since focused on the unresolved issue of transitional and turbulent oscillating flow and heat transfer. Since 1988, the University of Minnesota oscillating flow facility has provided experimental data on transitional and turbulent oscillating flow. However, since experiments in this field are extremely difficult, lengthy, and expensive, it is advantageous to simulate the flow and heat transfer accurately from first principles. Work done at the University of Minnesota on the development of such a numerical simulation is summarized.
PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release
NASA Astrophysics Data System (ADS)
Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.
2016-09-01
The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
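The analytic modal solution that PolyPole-1 builds on reduces, for constant conditions and an initially uniform gas concentration, to the classical Booth (1957) eigenfunction series for fractional release from a spherical grain. The sketch below illustrates that constant-conditions baseline (it is not the PolyPole-1 algorithm itself, which adds polynomial corrections for time-varying conditions):

```python
import math

def booth_release(tau, n_terms=400):
    """Fractional gas release from a sphere of radius a after time t,
    with tau = D*t/a**2 (Booth eigenfunction series, constant conditions):
    F = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 tau) / n^2."""
    s = sum(math.exp(-n*n*math.pi**2 * tau) / (n*n) for n in range(1, n_terms + 1))
    return 1.0 - (6.0 / math.pi**2) * s

def booth_short_time(tau):
    """Short-time asymptote commonly used in fuel performance codes."""
    return 6.0 * math.sqrt(tau / math.pi) - 3.0 * tau

tau = 1e-3
print(booth_release(tau), booth_short_time(tau))  # agree closely at small tau
```

Time-varying diffusivity and production break this closed form, which is why fuel performance codes need algorithms such as PolyPole-1 rather than the series alone.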
Keeping the edge: an accurate numerical method to solve the stream power law
NASA Astrophysics Data System (ADS)
Campforts, B.; Govers, G.
2015-12-01
Bedrock rivers set the base level of surrounding hillslopes and mediate the dynamic interplay between mountain building and denudation. The propensity of rivers to preserve pulses of increased tectonic uplift also allows long-term uplift histories to be reconstructed from longitudinal river profiles. An accurate reconstruction of river profile development at different timescales is therefore essential. Long-term river development is typically modeled by means of the stream power law. Under specific conditions this equation can be solved analytically, but numerical Finite Difference Methods (FDMs) are most frequently used. Nonetheless, FDMs suffer from numerical smearing, especially at knickpoint zones, which are key to understanding transient landscapes. Here, we solve the stream power law by means of a Finite Volume Method (FVM) which is Total Variation Diminishing (TVD). TVD schemes are designed to capture sharp discontinuities, making them very suitable for modeling river incision. In contrast to FDMs, the TVD_FVM is well capable of preserving knickpoints, as illustrated for the fast-propagating Niagara Falls. Moreover, we show that the TVD_FVM performs much better when reconstructing uplift at timescales exceeding 100 Myr, using Eastern Australia as an example. Finally, uncertainty associated with parameter calibration is dramatically reduced when the TVD_FVM is applied. The TVD_FVM is therefore an important addition to the toolbox at the disposal of geomorphologists for understanding long-term landscape evolution.
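The smearing-versus-sharpness contrast described above can be shown on the simplest special case of the stream power law: with slope exponent n = 1 and uniform drainage area, knickpoints advect upstream as a linear wave. The sketch below (a generic flux-limiter demonstration, not the authors' implementation) advects a step "knickpoint" with first-order upwind differencing and with a minmod-limited second-order TVD flux:

```python
def advect(z, nu, steps, limited):
    """1-D advection z_t + c z_x = 0 (c > 0), nu = c*dt/dx <= 1.
    limited=False: first-order upwind; limited=True: minmod-limited TVD."""
    z = list(z)
    n = len(z)
    for _ in range(steps):
        flux = [0.0] * (n - 1)              # flux[i] sits at interface i+1/2
        for i in range(1, n - 1):
            dz = z[i + 1] - z[i]
            phi = 0.0
            if limited and abs(dz) > 1e-14:
                r = (z[i] - z[i - 1]) / dz
                phi = max(0.0, min(1.0, r))  # minmod limiter
            flux[i] = z[i] + 0.5 * (1.0 - nu) * phi * dz
        flux[0] = z[0]
        znew = z[:]                          # boundary cells held fixed
        for i in range(1, n - 1):
            znew[i] = z[i] - nu * (flux[i] - flux[i - 1])
        z = znew
    return z

n, nu, steps = 200, 0.5, 160                 # advects the front by 80 cells
z0 = [1.0 if i < 50 else 0.0 for i in range(n)]   # knickpoint at cell 50
up  = advect(z0, nu, steps, limited=False)
tvd = advect(z0, nu, steps, limited=True)

width = lambda z: sum(1 for v in z if 0.05 < v < 0.95)
print(width(up), width(tvd))                 # TVD front is much sharper
```

Both schemes place the front at the right position (near cell 130), but the upwind FDM spreads it over many more cells, which is exactly the knickpoint smearing the abstract refers to.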
Numerical Weather Prediction and Satellite Observations.
1985-08-01
the report of an approaching tornado made by storm spotters stationed around a city or town. Approach of a flash flood on a stream or small river is...uses will include the prediction of hurricanes, heavy precipitation, flash floods , squall lines, and clusters of thunderstorms. An interval of 5 km
Numerical simulation for fan broadband noise prediction
NASA Astrophysics Data System (ADS)
Hase, Takaaki; Yamasaki, Nobuhiko; Ooishi, Tsutomu
2011-03-01
In order to elucidate the broadband noise of a fan, numerical simulations of a fan operating at two different rotational speeds are carried out using the three-dimensional unsteady Reynolds-averaged Navier-Stokes (URANS) equations. The computed results are compared with experiment to assess their accuracy and are found to show good agreement. A method is proposed to evaluate the turbulent kinetic energy within the framework of the Spalart-Allmaras one-equation turbulence model. From the calculation results, the turbulent kinetic energy is visualized as the turbulence of the flow that generates the broadband noise, and the noise sources are identified.
Numerical prediction of flow in slender vortices
NASA Technical Reports Server (NTRS)
Reyna, Luis G.; Menne, Stefan
1988-01-01
The slender vortex approximation was investigated using the Navier-Stokes equations written in cylindrical coordinates. It is shown that, for free vortices without external pressure gradient, the breakdown length is proportional to the Reynolds number. For free vortices with adverse pressure gradients, the breakdown length is inversely proportional to the value of its gradient. For low Reynolds numbers, the predictions of the simplified system agreed well with the ones obtained from solutions of the full Navier-Stokes equations, whereas for high Reynolds numbers, the flow became quite sensitive to pressure fluctuations; it was found that the failure of the slender vortex equations corresponded to the critical condition as identified by Benjamin (1962) for inviscid flows. The predictions obtained from the approximating system were compared with available experimental results. For low swirl, a good agreement was obtained; for high swirl, on the other hand, upstream effects on the pressure gradient produced by the breakdown bubble caused poor agreement.
Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows
Johnson, B. M.; Guan, X.; Gammie, C. F.
2008-04-11
In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second-order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
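The core of the orbital-advection idea can be sketched in one dimension for a passive scalar (a hedged illustration only; the paper's scheme additionally handles the staggered magnetic field and the constrained transport update). The mean-flow shift per step is split into an exact integer circular shift, which carries no Courant restriction, plus a fractional-cell interpolation:

```python
def orbital_shift(q, shift):
    """Advect periodic cell data q downstream by `shift` cells, which may be
    large and non-integer: exact integer circular shift plus linear
    interpolation for the fractional remainder (FARGO-style substep)."""
    n = len(q)
    k = int(shift // 1)            # integer part (floor)
    f = shift - k                  # fractional part in [0, 1)
    rolled = [q[(i - k) % n] for i in range(n)]   # exact shift, no CFL limit
    return [(1.0 - f) * rolled[i] + f * rolled[(i - 1) % n] for i in range(n)]

n = 100
q = [1.0 if 48 <= i <= 52 else 0.0 for i in range(n)]   # narrow pulse
for _ in range(10):
    q = orbital_shift(q, 7.25)     # mean flow crosses 7.25 cells per step
peak = max(range(n), key=lambda i: q[i])
print(peak)  # pulse centre moved (50 + 72.5) mod 100, i.e. near cell 22-23
```

Only the fractional part is handled by interpolation, so the effective advection error per step is that of a sub-cell shift regardless of how supersonic the mean orbital motion is; the interpolation is also exactly conservative.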
Advanced numerical techniques for accurate unsteady simulations of a wingtip vortex
NASA Astrophysics Data System (ADS)
Ahmad, Shakeel
A numerical technique is developed to simulate the vortices associated with stationary and flapping wings. The Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations are solved over an unstructured grid. The present work assesses the locations of the origins of vortex generation, models those locations, and develops a systematic mesh refinement strategy to simulate vortices more accurately with the URANS model. The vortex center plays a key role in the analysis of the simulation data, and a novel approach to locating it, referred to as the Max-Max criterion, is also developed. Experimental validation of the simulated vortex from a stationary NACA0012 wing is achieved: the tangential velocity along the core of the vortex falls within five percent of the experimental data, and the wing surface pressure coefficient also matches the experimental data. The refinement techniques are then applied to unsteady simulations of pitching and dual-mode wing flapping. Tip vortex strength, location, and wing surface pressure are analyzed, and links between vortex behavior and wing motion are inferred. Key words: vortex, tangential velocity, Cp, vortical flow, unsteady vortices, URANS, Max-Max, vortex center
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy and convergence rate with discretization refinement are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic equations and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selectively linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of the truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at non-modest Reynolds numbers. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated, yielding a considerable reduction in both computer storage and CPU requirements while retaining solution accuracy.
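The Richardson extrapolation step mentioned above is generic: given two discrete solutions whose leading error scales as h^p, a weighted combination cancels that leading term, and the residual measures the truncation error. A minimal demonstration on the composite trapezoid rule (p = 2; the paper's FEM context is not reproduced here):

```python
import math

def trap(f, a, b, n):
    """Composite trapezoid rule with n panels (leading error O(h^2))."""
    h = (b - a) / n
    return h * (0.5*f(a) + 0.5*f(b) + sum(f(a + i*h) for i in range(1, n)))

f, a, b = math.sin, 0.0, math.pi     # exact integral = 2
coarse = trap(f, a, b, 16)
fine = trap(f, a, b, 32)
# Richardson: eliminate the leading O(h^2) term (order p = 2, refinement 2x)
extrap = fine + (fine - coarse) / (2**2 - 1)
print(abs(coarse - 2.0), abs(extrap - 2.0))  # extrapolated error is far smaller
```

The difference between the extrapolated and unextrapolated answers isolates the leading truncation error, which is exactly how the extrapolated solution serves as a reference in the convergence study.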
Energy expenditure during level human walking: seeking a simple and accurate predictive solution.
Ludlow, Lindsay W; Weyand, Peter G
2016-03-01
Accurate prediction of the metabolic energy that walking requires can inform numerous health, bodily status, and fitness outcomes. We adopted a two-step approach to identifying a concise, generalized equation for predicting level human walking metabolism. Using literature-aggregated values we compared 1) the predictive accuracy of three literature equations: American College of Sports Medicine (ACSM), Pandolf et al., and Height-Weight-Speed (HWS); and 2) the goodness of fit possible from one- vs. two-component descriptions of walking metabolism. Literature metabolic rate values (n = 127; speed range = 0.4 to 1.9 m/s) were aggregated from 25 subject populations (n = 5-42) whose means spanned a 1.8-fold range of heights and a 4.2-fold range of weights. Population-specific resting metabolic rates (V̇o2 rest) were determined using standardized equations. Our first finding was that the ACSM and Pandolf et al. equations underpredicted nearly all 127 literature-aggregated values. Consequently, their standard errors of estimate (SEE) were nearly four times greater than those of the HWS equation (4.51 and 4.39 vs. 1.13 ml O2·kg^-1·min^-1, respectively). For our second comparison, empirical best-fit relationships for walking metabolism were derived from the data set in one- and two-component forms for three V̇o2-speed model types: linear (∝ V), exponential (∝ V^2), and exponential/height (∝ V^2/Ht). We found that the proportion of variance (R^2) accounted for, when averaged across the three model types, was substantially lower for one- vs. two-component versions (0.63 ± 0.1 vs. 0.90 ± 0.03) and the predictive errors were nearly twice as great (SEE = 2.22 vs. 1.21 ml O2·kg^-1·min^-1). Our final analysis identified the following concise, generalized equation for predicting level human walking metabolism: V̇o2 total = V̇o2 rest + 3.85 + 5.97·V^2/Ht (where V is measured in m/s, Ht in meters, and V̇o2 in ml O2·kg^-1·min^-1).
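The final equation above is simple enough to implement directly. In the sketch below the resting rate is taken as an input (the study derives it from standardized equations), and the example numbers are illustrative placeholders, not values from the paper:

```python
def walking_vo2(speed, height, vo2_rest):
    """Gross level-walking metabolic rate in ml O2 per kg per min, from the
    generalized equation VO2_total = VO2_rest + 3.85 + 5.97 * V^2 / Ht,
    with speed V in m/s and height Ht in m; vo2_rest supplied separately."""
    return vo2_rest + 3.85 + 5.97 * speed**2 / height

# illustrative example (the resting rate of 5.0 is an assumed placeholder)
print(round(walking_vo2(1.4, 1.70, 5.0), 2))   # -> 15.73
```

Because the speed-dependent term is normalized by height, the same coefficients apply across the 1.8-fold height range of the aggregated populations.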
Numerical prediction of Pelton turbine efficiency
NASA Astrophysics Data System (ADS)
Jošt, D.; Mežnar, P.; Lipej, A.
2010-08-01
This paper presents a numerical analysis of flow in a 2-jet Pelton turbine with horizontal axis. The analysis was done for the model at several operating points in different operating regimes, and the results were compared to model test results. The analysis was performed using the ANSYS CFX-12.1 computer code with a k-ω SST turbulence model; free surface flow was modelled with a two-phase homogeneous model. First, a steady-state analysis of flow in the distributor with two injectors was performed for several needle strokes. This provided data on flow energy losses in the distributor and on the shape and velocity of the jets. The second step was an unsteady analysis of the runner with jets. Torque on the shaft was then calculated from the pressure distribution data. Averaged torque values are smaller than the measured ones; consequently, the calculated turbine efficiency is also smaller than the measured values, with a difference of about 4%. The shape of the efficiency diagram conforms well to the measurements.
NASA Astrophysics Data System (ADS)
Rey, M.; Nikitin, A. V.; Tyuterev, V.
2014-06-01
Knowledge of near-infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs and other astrophysical applications [1-3]. For example, atmospheric models have been developed to analyze exoplanets, creating a need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on having adequate and reliable molecular data under extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved, and line-by-line extraction from laboratory measurements is clearly not feasible. This large amount of data can thus be interpreted only through reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first is based on empirically-fitted effective spectroscopic models. The other computes energies, line positions and intensities through global variational calculations using ab initio surfaces. These do not yet reach spectroscopic accuracy stricto sensu, but they implicitly account for all intramolecular interactions, including resonance couplings, over a wide spectral range. The final aim of this work is to provide reliable predictions which are quantitatively accurate with respect to the precision of available observations and as complete as possible. All this requires extensive first-principles quantum mechanical calculations, essentially based on three necessary ingredients: (i) accurate intramolecular potential energy surface and dipole moment surface components, well-defined over a large range of vibrational displacements, and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical
Behavior Laws And Their Influences On Numerical Prediction
Lemoine, Xavier
2007-04-07
Many studies show that improvement of numerical forming predictions for rolled sheets comes through increasingly complex behavior laws, in particular the combination of isotropic and kinematic hardening (mixed hardening) to take account of the Bauschinger effect. The present work classifies steel grades with respect to the Bauschinger effect. For some forming cases, it also shows the influence of a mixed hardening law on the numerical prediction, in terms of deformation, thinning, residual stresses, and punch force.
Spray combustion experiments and numerical predictions
NASA Technical Reports Server (NTRS)
Mularz, Edward J.; Bulzan, Daniel L.; Chen, Kuo-Huey
1993-01-01
The next generation of commercial aircraft will include turbofan engines with performance significantly better than those in the current fleet. Control of particulate and gaseous emissions will also be an integral part of the engine design criteria. These performance and emission requirements present a technical challenge for the combustor: control of the fuel and air mixing and control of the local stoichiometry will have to be maintained much more rigorously than with combustors in current production. A better understanding of the flow physics of liquid fuel spray combustion is necessary. This paper describes recent experiments on spray combustion where detailed measurements of the spray characteristics were made, including local drop-size distributions and velocities. Also, an advanced combustor CFD code has been under development and predictions from this code are compared with experimental results. Studies such as these will provide information to the advanced combustor designer on fuel spray quality and mixing effectiveness. Validation of new fast, robust, and efficient CFD codes will also enable the combustor designer to use them as additional design tools for optimization of combustor concepts for the next generation of aircraft engines.
Numerical Prediction of Dust. Chapter 10
NASA Technical Reports Server (NTRS)
Benedetti, Angela; Baldasano, J. M.; Basart, S.; Benincasa, F.; Boucher, O.; Brooks, M.; Chen, J. P.; Colarco, P. R.; Gong, S.; Huneeus, N.; Jones, L; Lu, S.; Menut, L.; Mulcahy, J.; Nickovic, S.; Morcrette, J.-J.; Perez, C.; Reid, J. S.; Sekiyama, T. T.; Tanaka, T.; Terradellas, E.; Westphal, D. L.; Zhang, X.-Y.; Zhou, C.-H.
2013-01-01
Scientific observations and results are presented, along with numerous illustrations. This work has an interdisciplinary appeal and will engage scholars in geology, geography, chemistry, meteorology and physics, amongst others with an interest in the Earth system and environmental change.
Accurate Prediction of One-Dimensional Protein Structure Features Using SPINE-X.
Faraggi, Eshel; Kloczkowski, Andrzej
2017-01-01
Accurate prediction of protein secondary structure and other one-dimensional structure features is essential for accurate sequence alignment, three-dimensional structure modeling, and function prediction. SPINE-X is a software package to predict secondary structure as well as accessible surface area and the dihedral angles ϕ and ψ. For secondary structure SPINE-X achieves an accuracy of between 81 and 84%, depending on the dataset and choice of tests. The Pearson correlation coefficient for accessible surface area prediction is 0.75, and the mean absolute errors for the ϕ and ψ dihedral angles are 20° and 33°, respectively. The source code and Linux executables for SPINE-X are available from Research and Information Systems at http://mamiris.com.
Accurate numerical forward model for optimal retracking of SIRAL2 SAR echoes over open ocean
NASA Astrophysics Data System (ADS)
Phalippou, L.; Demeestere, F.
2011-12-01
The SAR mode of SIRAL-2 on board CryoSat-2 has been designed primarily to measure sea ice and continental ice (Wingham et al. 2005). In 2005, K. Raney (KR, 2005) pointed out the improvements brought by SAR altimetry for the open ocean. KR's results were mostly based on 'rule of thumb' considerations of speckle noise reduction due to the higher PRF and to speckle decorrelation after SAR processing. In 2007, Phalippou and Enjolras (PE, 2007) provided the theoretical background for optimal retracking of SAR echoes over the ocean, with a focus on forward modelling of the power waveforms. The accuracies of geophysical parameters (range, significant wave height, and backscattering coefficient) retrieved from SAR altimeter data were derived accounting for accurate modelling of the SAR echo shape and speckle noise. The step forward to optimal retracking using a numerical forward model (NFM) was also pointed out. An NFM of the power waveform avoids analytical approximation, a guarantee of minimising geophysically dependent biases in the retrieval. NFMs have been used for many years, in operational meteorology in particular, for retrieving temperature and humidity profiles from IR and microwave radiometers, where the radiative transfer function is complex (Eyre, 1989). So far this technique has not been used in conventional ocean altimetry, as analytical models (e.g. Brown's model) were found to give sufficient accuracy. However, although an NFM seems desirable even for conventional nadir altimetry, it becomes inevitable if one wishes to process SAR altimeter data, as the transfer function is too complex to be approximated by a simple analytical function. This was clearly demonstrated in PE 2007. The paper describes the background to SAR data retracking over the open ocean. Since PE 2007, improvements have been brought to the forward model, and it is shown that the altimeter on-ground and in-flight characterisation (e.g. antenna pattern, range impulse response, azimuth impulse response
Vincent, Mark A; Hillier, Ian H
2014-08-25
The accurate prediction of the adsorption energies of unsaturated molecules on graphene in the presence of water is essential for the design of molecules that can modify its properties and aid its processability. We here show that a semiempirical MO method corrected for dispersive interactions (PM6-DH2) can predict the adsorption energies of unsaturated hydrocarbons, and the effect of substitution on these values, to an accuracy comparable to DFT values and in good agreement with experiment. The adsorption energies of TCNE, TCNQ, and a number of sulfonated pyrenes are also predicted, along with the effect of hydration using the COSMO model.
Accurately predicting copper interconnect topographies in foundry design for manufacturability flows
NASA Astrophysics Data System (ADS)
Lu, Daniel; Fan, Zhong; Tak, Ki Duk; Chang, Li-Fu; Zou, Elain; Jiang, Jenny; Yang, Josh; Zhuang, Linda; Chen, Kuang Han; Hurat, Philippe; Ding, Hua
2011-04-01
This paper presents a model-based Chemical Mechanical Polishing (CMP) Design for Manufacturability (DFM) methodology that includes accurate prediction of post-CMP copper interconnect topographies at advanced process technology nodes. Through extensive model calibration and validation, the CMP process model accurately predicts post-CMP dimensions such as erosion, dishing, and copper thickness, with excellent correlation to silicon measurements. This methodology provides an efficient DFM flow to detect and fix physical manufacturing hotspots related to copper pooling and Depth of Focus (DOF) failures in both block-level and full-chip designs. Moreover, the predicted thickness output is used in CMP-aware RC extraction and timing analysis flows for a better understanding of performance yield and timing impact. In addition, the CMP model can be applied to the verification of model-based dummy fill flows.
Cas9-chromatin binding information enables more accurate CRISPR off-target prediction
Singh, Ritambhara; Kuscu, Cem; Quinlan, Aaron; Qi, Yanjun; Adli, Mazhar
2015-01-01
The CRISPR system has become a powerful biological tool with a wide range of applications. However, improving targeting specificity and accurately predicting potential off-targets remains a significant goal. Here, we introduce a web-based CRISPR/Cas9 Off-target Prediction and Identification Tool (CROP-IT) that performs improved off-target binding and cleavage site predictions. Unlike existing prediction programs that solely use DNA sequence information, CROP-IT integrates whole-genome-level biological information from existing Cas9 binding and cleavage data sets. Utilizing whole-genome chromatin state information from 125 human cell types further enhances its computational prediction power. Comparative analyses on experimentally validated datasets show that CROP-IT outperforms existing computational algorithms in predicting both Cas9 binding and cleavage sites. With a user-friendly web interface, CROP-IT outputs a scored and ranked list of potential off-targets, enabling improved guide RNA design and more accurate prediction of Cas9 binding or cleavage sites. PMID:26032770
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases makes it possible to act before the symptoms occur, for instance by taking drugs to avert the crisis or by activating medical alarms. The prediction horizon is in this case an important parameter, constrained by the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits for a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data were acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the prediction horizon in the development of prediction algorithms for diseases with symptomatic crises.
An effective method for accurate prediction of the first hyperpolarizability of alkalides.
Wang, Jia-Nan; Xu, Hong-Liang; Sun, Shi-Ling; Gao, Ting; Li, Hong-Zhi; Li, Hui; Su, Zhong-Min
2012-01-15
A proper theoretical calculation method for nonlinear optical (NLO) properties is a key factor in designing excellent NLO materials, yet it is difficult to obtain accurate NLO properties for large-scale molecules. In the present work, an effective intelligent computing method, called the extreme learning machine-neural network (ELM-NN), is proposed to accurately predict the first hyperpolarizability (β0) of alkalides from low-accuracy first hyperpolarizability values. Compared with a neural network (NN) and a genetic algorithm neural network (GANN), the root-mean-square deviations of the values predicted by ELM-NN, GANN, and NN from their MP2 counterparts are 0.02, 0.08, and 0.17 a.u., respectively. This suggests that the values predicted by ELM-NN are more accurate than those calculated by the NN and GANN methods. Another strength of ELM-NN is its ability to reach high-accuracy values at lower computing cost: experimental results show that the computing time of MP2 is 2.4-4 times that of ELM-NN. Thus, the proposed method is a potentially powerful tool in computational chemistry, and it may predict β0 of large-scale molecules, which is difficult to obtain with high-accuracy theoretical methods due to the dramatically increasing computational cost.
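The extreme learning machine underlying the ELM-NN hybrid is itself a very simple construction: the input-to-hidden weights are random and fixed, so training reduces to one linear least-squares solve for the output weights. The sketch below shows that generic ELM on a toy 1-D regression target (the paper's hybrid architecture and its alkalide/hyperpolarizability features are not reproduced; the target function here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=60):
    """Extreme learning machine: random fixed input weights, tanh hidden
    layer, output weights solved in one shot by linear least squares."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy stand-in target (NOT a hyperpolarizability model): y = sin(3x) + 0.5x
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 0]
model = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(model, X) - y) ** 2))
print(rmse)  # small training RMSE from a single least-squares solve
```

Because there is no iterative backpropagation, the fit cost is dominated by one decomposition of the hidden-layer matrix, which is the source of the speed advantage the abstract reports over conventional NN training.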
Towards Bridging the Gaps in Holistic Transition Prediction via Numerical Simulations
NASA Technical Reports Server (NTRS)
Choudhari, Meelan M.; Li, Fei; Duan, Lian; Chang, Chau-Lyan; Carpenter, Mark H.; Streett, Craig L.; Malik, Mujeeb R.
2013-01-01
The economic and environmental benefits of laminar flow technology via reduced fuel burn of subsonic and supersonic aircraft cannot be realized without minimizing the uncertainty in drag prediction in general and transition prediction in particular. Transition research under NASA's Aeronautical Sciences Project seeks to develop a validated set of variable fidelity prediction tools with known strengths and limitations, so as to enable "sufficiently" accurate transition prediction and practical transition control for future vehicle concepts. This paper provides a summary of selected research activities targeting the current gaps in high-fidelity transition prediction, specifically those related to the receptivity and laminar breakdown phases of crossflow induced transition in a subsonic swept-wing boundary layer. The results of direct numerical simulations are used to obtain an enhanced understanding of the laminar breakdown region as well as to validate reduced order prediction methods.
Hash: a Program to Accurately Predict Protein Hα Shifts from Neighboring Backbone Shifts
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
2012-01-01
Chemical shifts provide not only peak identities for analyzing NMR data, but also an important source of conformational information for studying protein structures. Current structural studies requiring Hα chemical shifts suffer from the following limitations. (1) For large proteins, the Hα chemical shifts can be difficult to assign using conventional NMR triple-resonance experiments, mainly due to the fast transverse relaxation rate of Cα that restricts the signal sensitivity. (2) Previous chemical shift prediction approaches either require homologous models with high sequence similarity or rely heavily on accurate backbone and side-chain structural coordinates. When neither sequence homologues nor structural coordinates are available, we must resort to other information to predict Hα chemical shifts. Predicting accurate Hα chemical shifts using other obtainable information, such as the chemical shifts of nearby backbone atoms (i.e., adjacent atoms in the sequence), can remedy the above dilemmas, and hence advance NMR-based structural studies of proteins. By specifically exploiting the dependencies on chemical shifts of nearby backbone atoms, we propose a novel machine learning algorithm, called Hash, to predict Hα chemical shifts. Hash combines a new fragment-based chemical shift search approach with a non-parametric regression model, called the generalized additive model, to effectively solve the prediction problem. We demonstrate that the chemical shifts of nearby backbone atoms provide a reliable source of information for predicting accurate Hα chemical shifts. Our testing results on different possible combinations of input data indicate that Hash has a wide range of potential NMR applications in structural and biological studies of proteins. PMID:23242797
Accurate Prediction of Ligand Affinities for a Proton-Dependent Oligopeptide Transporter
Samsudin, Firdaus; Parker, Joanne L.; Sansom, Mark S.P.; Newstead, Simon; Fowler, Philip W.
2016-01-01
Summary Membrane transporters are critical modulators of drug pharmacokinetics, efficacy, and safety. One example is the proton-dependent oligopeptide transporter PepT1, also known as SLC15A1, which is responsible for the uptake of the β-lactam antibiotics and various peptide-based prodrugs. In this study, we modeled the binding of various peptides to a bacterial homolog, PepTSt, and evaluated a range of computational methods for predicting the free energy of binding. Our results show that a hybrid approach (endpoint methods to classify peptides into good and poor binders and a theoretically exact method for refinement) is able to accurately predict affinities, which we validated using proteoliposome transport assays. Applying the method to a homology model of PepT1 suggests that the approach requires a high-quality structure to be accurate. Our study provides a blueprint for extending these computational methodologies to other pharmaceutically important transporter families. PMID:27028887
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
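To make the object of study concrete, here is one member of the third-order Runge-Kutta family the report analyzes: the strong-stability-preserving scheme in Shu-Osher form. The scheme and the convergence check are standard textbook material; the report's magneto-fluid application and its stiff-stability analysis are not reproduced here.

```python
import math

def ssprk3_step(f, t, y, h):
    # Strong-stability-preserving third-order Runge-Kutta (Shu-Osher form)
    k1 = y + h * f(t, y)
    k2 = 0.75 * y + 0.25 * (k1 + h * f(t + h, k1))
    return y / 3.0 + (2.0 / 3.0) * (k2 + h * f(t + 0.5 * h, k2))

def integrate(f, y0, t0, t1, n):
    # March from t0 to t1 in n equal steps
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = ssprk3_step(f, t, y, h)
        t += h
    return y

# Model problem y' = -y with exact solution exp(-t); halving the step size
# should reduce the global error by about 2**3 = 8 for a third-order scheme
f = lambda t, y: -y
err_n = abs(integrate(f, 1.0, 0.0, 1.0, 40) - math.exp(-1.0))
err_2n = abs(integrate(f, 1.0, 0.0, 1.0, 80) - math.exp(-1.0))
```

The eightfold error reduction under step halving is the linear accuracy behavior the report's numerical analysis examines; stability and stiff-stability properties depend on the particular coefficient choice within the RK3 family.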
Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle
NASA Astrophysics Data System (ADS)
Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.
2017-04-01
Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the end of tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using design of experiments technique. Regression method is used for training the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C), when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as tundish and caster.
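The surrogate-modeling recipe described above (sample a simulator at design-of-experiments points, then train a regressor on the results) can be sketched generically. The quadratic basis and the toy response below are illustrative stand-ins; the paper's CFD data, process parameters, and actual regression model are not available here.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(X):
    # Quadratic polynomial basis in two inputs
    # (e.g. stand-ins for holding time and slag thickness)
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

def toy_response(X):
    # Hypothetical stand-in for a CFD-computed stratified temperature drop
    return 5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]

# "DOE" sample points play the role of the pre-determined CFD runs
X_train = rng.uniform(0.0, 1.0, size=(30, 2))
coef, *_ = np.linalg.lstsq(features(X_train), toy_response(X_train), rcond=None)

# Once trained, the surrogate evaluates instantaneously for new parameters
X_test = rng.uniform(0.0, 1.0, size=(100, 2))
pred = features(X_test) @ coef
max_err = np.max(np.abs(pred - toy_response(X_test)))
```

The design choice is the usual surrogate trade-off: the expensive simulator is queried only offline, and online predictions cost a single matrix-vector product.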
Can phenological models predict tree phenology accurately under climate change conditions?
NASA Astrophysics Data System (ADS)
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2014-05-01
The onset of the growing season of trees has advanced globally by 2.3 days per decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. An accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)
A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...
Testani, Jeffrey M.; Hanberg, Jennifer S.; Cheng, Susan; Rao, Veena; Onyebeke, Chukwuma; Laur, Olga; Kula, Alexander; Chen, Michael; Wilson, F. Perry; Darlington, Andrew; Bellumkonda, Lavanya; Jacoby, Daniel; Tang, W. H. Wilson; Parikh, Chirag R.
2015-01-01
Background Removal of excess sodium and fluid is a primary therapeutic objective in acute decompensated heart failure (ADHF) and commonly monitored with fluid balance and weight loss. However, these parameters are frequently inaccurate or not collected and require a delay of several hours after diuretic administration before they are available. Accessible tools for rapid and accurate prediction of diuretic response are needed. Methods and Results Based on well-established renal physiologic principles an equation was derived to predict net sodium output using a spot urine sample obtained one or two hours following loop diuretic administration. This equation was then prospectively validated in 50 ADHF patients using meticulously obtained timed 6-hour urine collections to quantitate loop diuretic induced cumulative sodium output. Poor natriuretic response was defined as a cumulative sodium output of <50 mmol, a threshold that would result in a positive sodium balance with twice-daily diuretic dosing. Following a median dose of 3 mg (2–4 mg) of intravenous bumetanide, 40% of the population had a poor natriuretic response. The correlation between measured and predicted sodium output was excellent (r=0.91, p<0.0001). Poor natriuretic response could be accurately predicted with the sodium prediction equation (AUC=0.95, 95% CI 0.89–1.0, p<0.0001). Clinically recorded net fluid output had a weaker correlation (r=0.66, p<0.001) and lesser ability to predict poor natriuretic response (AUC=0.76, 95% CI 0.63–0.89, p=0.002). Conclusions In patients being treated for ADHF, poor natriuretic response can be predicted soon after diuretic administration with excellent accuracy using a spot urine sample. PMID:26721915
Ida, Masato; Taniguchi, Nobuyuki
2003-09-01
This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also only the applied filtering process can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.
2015-01-01
Background Biclustering is a popular method for identifying under which experimental conditions biological signatures are co-expressed. However, the general biclustering problem is NP-hard, offering room to focus algorithms on specific biological tasks. We hypothesize that conditional co-regulation of genes is a key factor in determining cell phenotype and that accurately segregating conditions in biclusters will improve such predictions. Thus, we developed a bicluster sampled coherence metric (BSCM) for determining which conditions and signals should be included in a bicluster. Results Our BSCM calculates condition and cluster size specific p-values, and we incorporated these into the popular integrated biclustering algorithm cMonkey. We demonstrate that incorporation of our new algorithm significantly improves bicluster co-regulation scores (p-value = 0.009) and GO annotation scores (p-value = 0.004). Additionally, we used a bicluster based signal to predict whether a given experimental condition will result in yeast peroxisome induction. Using the new algorithm, the classifier accuracy improves from 41.9% to 76.1% correct. Conclusions We demonstrate that the proposed BSCM helps determine which signals ought to be co-clustered, resulting in more accurately assigned bicluster membership. Furthermore, we show that BSCM can be extended to more accurately detect under which experimental conditions the genes are co-clustered. Features derived from this more accurate analysis of conditional regulation result in a dramatic improvement in the ability to predict a cellular phenotype in yeast. The latest cMonkey is available for download at https://github.com/baliga-lab/cmonkey2. The experimental data and source code featured in this paper are available at http://AitchisonLab.com/BSCM. BSCM has been incorporated in the official cMonkey release. PMID:25881257
Kieslich, Chris A; Tamamis, Phanourios; Guzman, Yannis A; Onel, Melis; Floudas, Christodoulos A
2016-01-01
HIV-1 entry into host cells is mediated by interactions between the V3-loop of viral glycoprotein gp120 and chemokine receptor CCR5 or CXCR4, collectively known as HIV-1 coreceptors. Accurate genotypic prediction of coreceptor usage is of significant clinical interest and determination of the factors driving tropism has been the focus of extensive study. We have developed a method based on nonlinear support vector machines to elucidate the interacting residue pairs driving coreceptor usage and provide highly accurate coreceptor usage predictions. Our models utilize centroid-centroid interaction energies from computationally derived structures of the V3-loop:coreceptor complexes as primary features, while additional features based on established rules regarding V3-loop sequences are also investigated. We tested our method on 2455 V3-loop sequences of various lengths and subtypes, and produce a median area under the receiver operator curve of 0.977 based on 500 runs of 10-fold cross validation. Our study is the first to elucidate a small set of specific interacting residue pairs between the V3-loop and coreceptors capable of predicting coreceptor usage with high accuracy across major HIV-1 subtypes. The developed method has been implemented as a web tool named CRUSH, CoReceptor USage prediction for HIV-1, which is available at http://ares.tamu.edu/CRUSH/.
Accurate similarity index based on activity and connectivity of node for link prediction
NASA Astrophysics Data System (ADS)
Li, Longjie; Qian, Lvjian; Wang, Xiaoping; Luo, Shishun; Chen, Xiaoyun
2015-05-01
Recent years have witnessed the increasing of available network data; however, much of those data is incomplete. Link prediction, which can find the missing links of a network, plays an important role in the research and analysis of complex networks. Based on the assumption that two unconnected nodes which are highly similar are very likely to have an interaction, most of the existing algorithms solve the link prediction problem by computing nodes' similarities. The fundamental requirement of those algorithms is accurate and effective similarity indices. In this paper, we propose a new similarity index, namely similarity based on activity and connectivity (SAC), which performs link prediction more accurately. To compute the similarity between two nodes, this index employs the average activity of these two nodes in their common neighborhood and the connectivities between them and their common neighbors. The higher the average activity is and the stronger the connectivities are, the more similar the two nodes are. The proposed index not only commendably distinguishes the contributions of paths but also incorporates the influence of endpoints. Therefore, it can achieve a better predicting result. To verify the performance of SAC, we conduct experiments on 10 real-world networks. Experimental results demonstrate that SAC outperforms the compared baselines.
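The general recipe behind similarity-based link prediction (score unconnected node pairs by their shared neighborhood, then rank) can be sketched as follows. The exact SAC formula is not reproduced here; a resource-allocation-style index, which weights each common neighbor by the inverse of its degree, stands in as one well-known member of this family.

```python
from itertools import combinations

def similarity(adj, u, v):
    # Sum 1/degree over the common neighbors of u and v
    common = adj[u] & adj[v]
    return sum(1.0 / len(adj[w]) for w in common)

def rank_missing_links(adj):
    # Score every unconnected pair and sort by decreasing similarity
    pairs = [(u, v) for u, v in combinations(sorted(adj), 2) if v not in adj[u]]
    return sorted(pairs, key=lambda p: similarity(adj, *p), reverse=True)

# Toy undirected graph as adjacency sets:
# edges 0-1, 0-2, 0-3, 0-4, 1-2, 1-3
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1}, 3: {0, 1}, 4: {0}}
ranking = rank_missing_links(adj)
# Nodes 2 and 3 share two common neighbors and rank first
```

Indices such as SAC refine this basic scheme by also incorporating node activity and the strength of the connections to the common neighbors, which is what the abstract credits for the improved accuracy.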
NASA Astrophysics Data System (ADS)
Kuo, K. A.; Verbraken, H.; Degrande, G.; Lombaert, G.
2016-07-01
Along with the rapid expansion of urban rail networks comes the need for accurate predictions of railway induced vibration levels at grade and in buildings. Current computational methods for making predictions of railway induced ground vibration rely on simplifying modelling assumptions and require detailed parameter inputs, which lead to high levels of uncertainty. It is possible to mitigate against these issues using a combination of field measurements and state-of-the-art numerical methods, known as a hybrid model. In this paper, two hybrid models are developed, based on the use of separate source and propagation terms that are quantified using in situ measurements or modelling results. These models are implemented using term definitions proposed by the Federal Railroad Administration and assessed using the specific illustration of a surface railway. It is shown that the limitations of numerical and empirical methods can be addressed in a hybrid procedure without compromising prediction accuracy.
Towards more accurate wind and solar power prediction by improving NWP model physics
NASA Astrophysics Data System (ADS)
Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo
2014-05-01
The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts; consequently, well-timed energy trading on the stock market can take place and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during
Stephanou, Pavlos S; Mavrantzas, Vlasis G
2014-06-07
We present a hierarchical computational methodology which permits the accurate prediction of the linear viscoelastic properties of entangled polymer melts directly from the chemical structure, chemical composition, and molecular architecture of the constituent chains. The method entails three steps: execution of long molecular dynamics simulations with moderately entangled polymer melts, self-consistent mapping of the accumulated trajectories onto a tube model and parameterization or fine-tuning of the model on the basis of detailed simulation data, and use of the modified tube model to predict the linear viscoelastic properties of significantly higher molecular weight (MW) melts of the same polymer. Predictions are reported for the zero-shear-rate viscosity η0 and the spectra of storage G'(ω) and loss G″(ω) moduli for several mono and bidisperse cis- and trans-1,4 polybutadiene melts as well as for their MW dependence, and are found to be in remarkable agreement with experimentally measured rheological data.
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
Sengupta, Arkajyoti; Raghavachari, Krishnan
2014-10-14
Accurate modeling of the chemical reactions in many diverse areas such as combustion, photochemistry, or atmospheric chemistry strongly depends on the availability of thermochemical information for the radicals involved. However, accurate thermochemical investigations of radical systems using state-of-the-art composite methods have mostly been restricted to the study of hydrocarbon radicals of modest size. In an alternative approach, a systematic error-canceling thermochemical hierarchy of reaction schemes can be applied to yield accurate results for such systems. In this work, we have extended our connectivity-based hierarchy (CBH) method to the investigation of radical systems. We have calibrated our method using a test set of 30 medium-sized radicals to evaluate their heats of formation. The CBH-rad30 test set contains radicals with diverse functional groups as well as cyclic systems. We demonstrate that the sophisticated error-canceling isoatomic scheme (CBH-2) with modest levels of theory is adequate to provide heats of formation accurate to ∼1.5 kcal/mol. Finally, we predict the heats of formation of 19 other large and medium-sized radicals for which the accuracy of available heats of formation is less well known.
NASA Astrophysics Data System (ADS)
Bozinoski, Radoslav
Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady Navier-Stokes solvers play in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, a laminar flat plate, a laminar cylinder, and a turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes with no fewer than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, validation data for separated, non-reattaching flows over a NACA 0012 airfoil, a wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions
Planar Near-Field Phase Retrieval Using GPUs for Accurate THz Far-Field Prediction
NASA Astrophysics Data System (ADS)
Junkin, Gary
2013-04-01
With a view to using Phase Retrieval to accurately predict Terahertz antenna far-field from near-field intensity measurements, this paper reports on three fundamental advances that achieve very low algorithmic error penalties. The first is a new Gaussian beam analysis that provides accurate initial complex aperture estimates including defocus and astigmatic phase errors, based only on first and second moment calculations. The second is a powerful noise tolerant near-field Phase Retrieval algorithm that combines Anderson's Plane-to-Plane (PTP) with Fienup's Hybrid-Input-Output (HIO) and Successive Over-Relaxation (SOR) to achieve increased accuracy at reduced scan separations. The third advance employs teraflop Graphical Processing Units (GPUs) to achieve practically real time near-field phase retrieval and to obtain the optimum aperture constraint without any a priori information.
NASA Astrophysics Data System (ADS)
Huerta, Eliu; Agarwal, Bhanu; Chua, Alvin; George, Daniel; Haas, Roland; Hinder, Ian; Kumar, Prayush; Moore, Christopher; Pfeiffer, Harald
2017-01-01
We recently constructed an inspiral-merger-ringdown (IMR) waveform model to describe the dynamical evolution of compact binaries on eccentric orbits, and used this model to constrain the eccentricity with which the gravitational wave transients currently detected by LIGO could be effectively recovered with banks of quasi-circular templates. We now present the second generation of this model, which is calibrated using a large catalog of eccentric numerical relativity simulations. We discuss the new features of this model, and show that its enhanced accuracy makes it a powerful tool to detect eccentric signals with LIGO.
Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O Anatole; Müller, Klaus-Robert; Tkatchenko, Alexandre
2015-06-18
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...
2015-06-04
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstratemore » prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.« less
Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O. Anatole; Müller, Klaus -Robert; Tkatchenko, Alexandre
2015-06-04
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
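The Bag of Bonds representation described above can be sketched compactly: a molecule becomes bags of pairwise Coulomb terms grouped by element pair, and two molecules are compared with an L1 distance (which, inside a Laplacian kernel, drives kernel ridge regression in this family of models). A minimal illustration assuming simple (atomic number, position) tuples, not the authors' implementation:

```python
import math

def bag_of_bonds(atoms):
    """Featurize a molecule as a 'Bag of Bonds': for every atom pair,
    compute the Coulomb term Z_i*Z_j/r_ij and group the terms by the
    (unordered) pair of element types, sorted descending within each bag."""
    bags = {}
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            (zi, pi), (zj, pj) = atoms[i], atoms[j]
            r = math.dist(pi, pj)
            key = tuple(sorted((zi, zj)))
            bags.setdefault(key, []).append(zi * zj / r)
    return {k: sorted(v, reverse=True) for k, v in bags.items()}

def bob_distance(a, b):
    """Manhattan (L1) distance between two bags-of-bonds, padding the
    shorter bag with zeros so molecules of different size compare."""
    d = 0.0
    for key in set(a) | set(b):
        va, vb = a.get(key, []), b.get(key, [])
        n = max(len(va), len(vb))
        va = va + [0.0] * (n - len(va))
        vb = vb + [0.0] * (n - len(vb))
        d += sum(abs(x - y) for x, y in zip(va, vb))
    return d

# Toy molecules: (atomic number, (x, y, z)) -- water and a stretched water
water  = [(8, (0.0, 0.0, 0.0)), (1, (0.96, 0.0, 0.0)), (1, (-0.24, 0.93, 0.0))]
water2 = [(8, (0.0, 0.0, 0.0)), (1, (1.10, 0.0, 0.0)), (1, (-0.24, 0.93, 0.0))]
```

The L1 metric is what makes a Laplacian kernel exp(-d/sigma) a natural choice on top of this representation.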
A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes
Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.
2004-12-01
We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
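The combination of intergenic distance with comparative evidence can be caricatured as a naive-Bayes-style log-odds sum: short spacing between same-strand genes raises the odds of a shared operon, and conservation of the adjacency in other genomes adds an independent bonus. The weights below are invented for illustration and are not the genome-specific parameters the method learns from sequence alone.

```python
import math

def operon_log_odds(distance_bp, conserved_adjacency,
                    dist_scale=50.0, prior=0.5):
    """Toy log-odds that two adjacent same-strand genes share an operon.
    Short intergenic distances raise the odds (operon pairs are typically
    tightly spaced); a conserved adjacency adds an independent bonus, as
    in naive-Bayes feature combination. All weights are illustrative."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += 1.0 - distance_bp / dist_scale   # distance evidence
    if conserved_adjacency:
        log_odds += 1.5                          # comparative evidence
    return log_odds

def p_operon(distance_bp, conserved_adjacency):
    """Convert the log-odds to a probability with the logistic function."""
    lo = operon_log_odds(distance_bp, conserved_adjacency)
    return 1.0 / (1.0 + math.exp(-lo))
```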
Ballester, Pedro J; Schreyer, Adrian; Blundell, Tom L
2014-03-24
Predicting the binding affinities of large sets of diverse molecules against a range of macromolecular targets is an extremely challenging task. The scoring functions that attempt such computational prediction are essential for exploiting and analyzing the outputs of docking, which is in turn an important tool in problems such as structure-based drug design. Classical scoring functions assume a predetermined, theory-inspired functional form for the relationship between the variables that describe an experimentally determined or modeled structure of a protein-ligand complex and its binding affinity. The inherent problem of this approach is the difficulty of explicitly modeling the various contributions of intermolecular interactions to binding affinity. New scoring functions based on machine-learning regression models, which are able to exploit much larger amounts of experimental data effectively and circumvent the need for a predetermined functional form, have already been shown to outperform a broad range of state-of-the-art scoring functions in a widely used benchmark. Here, we investigate the impact of the chemical description of the complex on the predictive power of the resulting scoring function, using a systematic battery of numerical experiments. These experiments resulted in the most accurate scoring function to date on the benchmark. Strikingly, we also found that a more precise chemical description of the protein-ligand complex does not generally lead to a more accurate prediction of binding affinity. We discuss four factors that may contribute to this result: modeling assumptions, codependence of representation and regression, data restricted to the bound state, and conformational heterogeneity in data.
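Scoring functions of this machine-learning family typically start from a simple chemical description such as counts of protein-ligand element-pair contacts within a distance cutoff, which then feed a regressor such as a random forest. A hypothetical minimal featurizer in that spirit (the element list and cutoff are illustrative, not the study's exact descriptors):

```python
import math
from itertools import product

# Illustrative element set; real featurizers use a longer list.
ELEMENTS = ("C", "N", "O", "S")

def contact_features(protein_atoms, ligand_atoms, cutoff=12.0):
    """Count protein-ligand atom pairs of each element combination that
    lie within `cutoff` of each other. Inputs are lists of
    (element, (x, y, z)) tuples; the returned dict maps
    (protein_element, ligand_element) -> contact count."""
    feats = {pair: 0 for pair in product(ELEMENTS, ELEMENTS)}
    for pe, pp in protein_atoms:
        for le, lp in ligand_atoms:
            if (pe, le) in feats and math.dist(pp, lp) <= cutoff:
                feats[(pe, le)] += 1
    return feats
```

A regression model trained on such count vectors against measured affinities replaces the predetermined functional form of classical scoring functions.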
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.
2006-01-01
Detailed information on the flow fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. The present work is focused on the development of a simulation methodology for coupled, time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the main-path flow is solved using TURBO, a density-based code capable of resolving rotor-stator interaction in multi-stage machines. An interface that links the two codes at the rim seal is being tested, allowing data exchange for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.
NASA Technical Reports Server (NTRS)
Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris
2011-01-01
A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines, including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high-fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low-speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variety of experimental data sets, such as UH-60A data, DNW test data and HART II test data.
Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias
2015-01-01
Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions.
Ihm, Yungok; Cooper, Valentino R; Gallego, Nidia C; Contescu, Cristian I; Morris, James R
2014-01-01
We demonstrate a successful, efficient framework for predicting gas adsorption properties in real materials based on first-principles calculations, with a specific comparison of experiment and theory for methane adsorption in activated carbons. These carbon materials have different pore size distributions, leading to a variety of uptake characteristics. Utilizing these distributions, we accurately predict experimental uptakes and heats of adsorption without empirical potentials or lengthy simulations. We demonstrate that materials with smaller pores have higher heats of adsorption, leading to a higher gas density in these pores. This pore-size dependence must be accounted for, in order to predict and understand the adsorption behavior. The theoretical approach combines: (1) ab initio calculations with a van der Waals density functional to determine adsorbent-adsorbate interactions, and (2) a thermodynamic method that predicts equilibrium adsorption densities by directly incorporating the calculated potential energy surface in a slit pore model. The predicted uptake at P=20 bar and T=298 K is in excellent agreement for all five activated carbon materials used. This approach uses only the pore-size distribution as an input, with no fitting parameters or empirical adsorbent-adsorbate interactions, and thus can be easily applied to other adsorbent-adsorbate combinations.
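The thermodynamic step described here, predicting equilibrium adsorption densities directly from a calculated potential energy surface in a slit pore, can be caricatured with a Boltzmann weight: where the adsorbate potential is more attractive (smaller pores, deeper wells), the local gas density, and hence the uptake, is exponentially enhanced over the bulk. A toy sketch with illustrative units, not the paper's vdW-DF-based workflow:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def slit_pore_uptake(potential_ev, dz_nm, rho_bulk, T=298.0):
    """Toy equilibrium-uptake estimate: given the adsorbate potential
    energy U(z) (eV) sampled on a grid of spacing dz_nm across a slit
    pore, the local density follows a Boltzmann weight relative to the
    bulk gas, and the excess uptake is the sum (discrete integral) of
    the density enhancement over the pore width."""
    excess = 0.0
    for u in potential_ev:
        rho_local = rho_bulk * math.exp(-u / (K_B * T))
        excess += (rho_local - rho_bulk) * dz_nm
    return excess
```

Deeper (more negative) potential wells, as in narrower pores, directly raise both the predicted heat of adsorption and the gas density, which is the pore-size dependence the abstract emphasizes.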
NASA Astrophysics Data System (ADS)
Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat
2015-05-01
Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance, aimed at improving reliability and decreasing machine breakdowns and maintenance costs. Bearings are among the most important industrial components to monitor, and their RUL should be predicted. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL by Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of a Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. The WD is used only in the training phase, to fit the measurements and to avoid areas of fluctuation in the time domain. The SFAM training process takes the fitted measurements at the present and previous inspection time points as input, whereas the SFAM testing process uses the real measurements at the present and previous inspections. Thanks to its fuzzy learning process, SFAM performs well at learning nonlinear time series. As output, seven classes are defined: a healthy bearing and six bearing degradation states. In order to find the optimal RUL prediction, a smoothing phase is proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can be applied to the prognostics of various other mechanical assets.
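The Weibull-fitting step used during training, smoothing a fluctuating health indicator before it is fed to the network, can be sketched with median-rank regression, a lightweight least-squares alternative to maximum likelihood on the linearized Weibull CDF. This two-parameter fit is illustrative and not the authors' exact procedure:

```python
import math

def weibull_fit(samples):
    """Fit a two-parameter Weibull distribution to positive samples by
    median-rank regression: on Weibull paper, ln(-ln(1-F)) is linear in
    ln(x) with slope equal to the shape parameter. Bernard's formula
    approximates the median rank of the i-th order statistic."""
    xs = sorted(samples)
    n = len(xs)
    pts = []
    for i, x in enumerate(xs, start=1):
        f = (i - 0.3) / (n + 0.4)          # Bernard's median-rank estimate
        pts.append((math.log(x), math.log(-math.log(1.0 - f))))
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    shape = sxy / sxx                      # regression slope = shape k
    scale = math.exp(mx - my / shape)      # from intercept = -k*ln(scale)
    return shape, scale
```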
SIFTER search: a web server for accurate phylogeny-based protein function prediction
Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.
2015-01-01
We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded. PMID:25979264
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by less than about 1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
The MIDAS touch for Accurately Predicting the Stress-Strain Behavior of Tantalum
Jorgensen, S.
2016-03-02
Testing the behavior of metals in extreme environments is not always feasible, so materials scientists use models to try to predict the behavior. To achieve accurate results it is necessary to use the appropriate model and material-specific parameters. This research evaluated the performance of six material models available in the MIDAS database [1] to determine at which temperatures and strain rates they perform best, and which experimental data their parameters were optimized against. Additionally, parameters were optimized for the Johnson-Cook model using experimental data from Lassila et al. [2].
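The Johnson-Cook model mentioned above has a compact closed form: a strain-hardening term, a logarithmic strain-rate term, and a thermal-softening term. The tantalum parameters below are rough, literature-style placeholder values, not the optimized set obtained in this work:

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A=611e6, B=704e6, n=0.47, C=0.015, m=0.8,
                        rate0=1.0, T_room=293.0, T_melt=3290.0):
    """Johnson-Cook flow stress (Pa):
        sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m),
    where T* = (T - T_room)/(T_melt - T_room) is the homologous
    temperature. Parameter values are illustrative for tantalum."""
    t_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(strain_rate / rate0))
            * (1.0 - t_star ** m))
```

Fitting A, B, n, C and m to stress-strain data at several temperatures and strain rates is exactly the kind of optimization the abstract describes.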
Accurate prediction of human drug toxicity: a major challenge in drug development.
Li, Albert P
2004-11-01
Over the past decades, a number of drugs have been withdrawn or have required special labeling due to adverse effects observed post-marketing. Species differences in drug toxicity in preclinical safety tests, the lack of sensitive biomarkers, and nonrepresentative patient populations in clinical trials are probable reasons for the failures in predicting human drug toxicity. It is proposed that toxicology should evolve from an empirical practice to an investigative discipline. Accurate prediction of human drug toxicity requires resources and time to be spent in clearly defining key toxic pathways and the corresponding risk factors, which, hopefully, will be compensated for by the benefits of a lower percentage of clinical failures due to toxicity and a decreased frequency of market withdrawal due to unacceptable adverse drug effects.
The use of experimental bending tests to more accurate numerical description of TBC damage process
NASA Astrophysics Data System (ADS)
Sadowski, T.; Golewski, P.
2016-04-01
Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts, such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30,000 rpm), which causes tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions of 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending with various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. These results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied for the TBC layer, which allows elements to be removed once a damage criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and the top coat.
Predictability and numerical modelling of the North Atlantic Oscillation
NASA Astrophysics Data System (ADS)
Bojariu, Roxana; Gimeno, Luis
2003-10-01
The North Atlantic Oscillation (NAO) is the dominant pattern of atmospheric circulation variability in the extratropical Northern Hemisphere, and it is a major controlling factor in basic meteorological variables such as surface wind, temperature and precipitation, which have large socioeconomic impacts on energy, agriculture, industry, traffic and human health throughout the whole of Europe and eastern North America. Because of this dominant impact on the weather and climate of the wealthiest areas of the planet, there is a growing interest in quantifying the possible limits of predictability of the phenomenon and the ability of numerical climate models to simulate it. This paper reviews recent work on the predictability of the North Atlantic Oscillation and on the numerical modelling methods used to simulate the phenomenon. Atmospheric models with no orography or land-sea contrasts are able to capture the main feature of the NAO; however, to capture any interannual or interdecadal variability of the NAO, atmospheric general circulation models (AGCMs) with seasonally varying sea surface temperature (SST) forcing are required. Still, no model reproduces the recent observed upward trend in the NAO index, suggesting either that the models are deficient or that external forcing such as man-made effects is responsible for this feature. Predictive patterns have been identified in the Atlantic SSTs preceding specific phases of the NAO by up to 6 months, in the atmospheric temperature anomalies of the previous November, in the Eurasian snow cover and in the sea-ice extent over the Arctic. The use of simulations based on ensemble prediction to estimate potential predictability shows the possibility of capturing the upward trend of the NAO and suggests that multiannual to multidecadal variations in the NAO are more predictable than interannual fluctuations.
Accurate prediction of the response of freshwater fish to a mixture of estrogenic chemicals.
Brian, Jayne V; Harris, Catherine A; Scholze, Martin; Backhaus, Thomas; Booy, Petra; Lamoree, Marja; Pojana, Giulio; Jonkers, Niels; Runnalls, Tamsin; Bonfà, Angela; Marcomini, Antonio; Sumpter, John P
2005-06-01
Existing environmental risk assessment procedures are limited in their ability to evaluate the combined effects of chemical mixtures. We investigated the implications of this by analyzing the combined effects of a multicomponent mixture of five estrogenic chemicals, using vitellogenin induction in male fathead minnows as an end point. The mixture consisted of estradiol, ethynylestradiol, nonylphenol, octylphenol, and bisphenol A. We determined concentration-response curves for each of the chemicals individually. The chemicals were then combined at equipotent concentrations and the mixture tested using a fixed-ratio design. The effects of the mixture were compared with those predicted by the model of concentration addition using biomathematical methods, which revealed no deviation between the observed and predicted effects of the mixture. These findings demonstrate that estrogenic chemicals have the capacity to act together in an additive manner and that their combined effects can be accurately predicted by concentration addition. We also explored the potential for mixture effects at low concentrations by exposing the fish to each chemical at one-fifth of its median effective concentration (EC50). Individually, the chemicals did not induce a significant response, although their combined effects were consistent with the predictions of concentration addition. This demonstrates the potential for estrogenic chemicals to act additively at environmentally relevant concentrations. These findings highlight the potential for existing environmental risk assessment procedures to underestimate the hazard posed by mixtures of chemicals that act via a similar mode of action, thereby leading to erroneous conclusions of absence of risk.
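The concentration-addition model used above has a one-line form (Loewe additivity): for components mixed at fixed fractions of the total concentration, the mixture's effect concentration is a harmonic-style combination of the individual effect concentrations. A minimal sketch with illustrative EC50 values, not the measured ones:

```python
def ca_mixture_ec(ec_values, fractions):
    """Concentration-addition (Loewe additivity) prediction of a mixture's
    effect concentration: for components present at fixed fractions p_i of
    the total concentration, ECx_mix = 1 / sum(p_i / ECx_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(p / ec for p, ec in zip(fractions, ec_values))
```

One useful property: if the components are mixed at equipotent fractions (p_i proportional to ECx_i, as in the fixed-ratio design described above), the predicted mixture ECx reduces to the arithmetic mean of the individual ECx values.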
IDSite: An accurate approach to predict P450-mediated drug metabolism
Li, Jianing; Schneebeli, Severin T.; Bylund, Joseph; Farid, Ramy; Friesner, Richard A.
2011-01-01
Accurate prediction of drug metabolism is crucial for drug design. Since the metabolism of a large majority of drugs involves P450 enzymes, we herein describe a computational approach, IDSite, to predict P450-mediated drug metabolism. To model induced-fit effects, IDSite samples the conformational space with flexible docking in Glide, followed by two refinement stages using the Protein Local Optimization Program (PLOP). Sites of metabolism (SOMs) are predicted according to a physics-based score that evaluates the potential of atoms to react with the catalytic iron center. As a preliminary test, we present in this paper the prediction of hydroxylation and O-dealkylation sites mediated by CYP2D6 using two different models: a physics-based simulation model, and a modification of this model in which a small number of parameters are fit to a training set. Without fitting any parameters to experimental data, the Physical IDSite scoring recovers 83% of the experimental observations for 56 compounds with a very low false positive rate. With only 4 fitted parameters, the Fitted IDSite was trained on a subset of 36 compounds and successfully applied to the other 20 compounds, recovering 94% of the experimental observations with high sensitivity and specificity for both sets. PMID:22247702
Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H
2017-01-09
The binding and contribution of transcription factors (TFs) to cell-specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, histone marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as a quantitative measure of TF binding strength. Using machine learning, we find that including low-affinity binding sites improves our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators, as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively.
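The TF-affinity computation at the heart of such methods follows a TRAP-like continuous model: every sequence window contributes an exponentiated log-odds score, so weak "low-affinity" sites are accumulated rather than discarded by a hard threshold. A simplified single-strand sketch with a hypothetical motif (real implementations also score the reverse complement and use curated PWMs):

```python
import math

def tf_affinity(sequence, pwm, background=0.25):
    """Continuous TF affinity of a sequence region: slide the position
    weight matrix along the sequence and sum exp(log-odds) over all
    windows, so every window contributes in proportion to its match.
    `pwm` is a list of dicts mapping base -> probability, one per
    motif position; `background` is a uniform base frequency."""
    w = len(pwm)
    affinity = 0.0
    for start in range(len(sequence) - w + 1):
        score = 0.0
        for offset, col in enumerate(pwm):
            p = max(col[sequence[start + offset]], 1e-6)  # avoid log(0)
            score += math.log(p / background)
        affinity += math.exp(score)
    return affinity
```

Aggregating such affinities over the OCRs near a gene, weighted by open-chromatin signal intensity, yields gene-based TF scores of the kind the abstract describes.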
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
NASA Astrophysics Data System (ADS)
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond the classical meteorological forecasts has been increasing in recent years, owing to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
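The kind of Kalman-filter bias removal described here can be sketched in a few lines: treat the model's systematic bias as a hidden random-walk state and update it from each forecast-observation pair. This is a generic scalar sketch, not the authors' implementation; the noise variances and the wind-speed numbers are invented.

```python
# Minimal scalar Kalman filter that tracks the systematic bias of a forecast
# model, assuming the bias follows a random walk. q and r are illustrative
# process/measurement noise variances, not tuned values from the paper.
def kalman_bias_filter(forecasts, observations, q=0.01, r=0.25):
    """Return the sequence of bias estimates; corrected = forecast - bias."""
    x, p = 0.0, 1.0             # initial bias estimate and its variance
    biases = []
    for f, y in zip(forecasts, observations):
        p += q                  # predict: random-walk bias grows uncertain
        k = p / (p + r)         # Kalman gain
        x += k * ((f - y) - x)  # update with innovation = observed bias f - y
        p *= (1.0 - k)
        biases.append(x)
    return biases

model = [10.2, 11.1, 9.8, 10.5, 11.0]  # hypothetical wind-speed forecasts
obs   = [ 9.7, 10.5, 9.2, 10.0, 10.4]  # matching observations (bias ~ +0.55)
bias = kalman_bias_filter(model, obs)
corrected = [f - b for f, b in zip(model, bias)]
```

After a few updates the bias estimate settles near the true offset, and subtracting it shrinks the forecast error, which is exactly the "elimination of the model bias" role the abstract assigns to the filter.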
NASA Astrophysics Data System (ADS)
Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter
2016-05-01
At advanced technology nodes, mask complexity has increased because of the large-scale use of resolution enhancement technologies (RET), which include Optical Proximity Correction (OPC), Inverse Lithography Technology (ILT) and Source Mask Optimization (SMO). The number of defects detected during inspection of such masks has increased drastically, and differentiating critical from non-critical defects is more challenging, complex and time consuming. Because of the significant defectivity of EUVL masks and the non-availability of actinic inspection, it is important, and also challenging, to predict the criticality of defects for printability on wafer. This is one of the significant barriers to the adoption of EUVL for semiconductor manufacturing. Until actinic inspection becomes available, techniques are desired that can decide the criticality of defects from images captured using non-actinic inspection. High resolution inspection of photomask images detects many defects, which are used for process and mask qualification. Repairing all defects is not practical and probably not required; however, it is imperative to know which defects are severe enough to impact the wafer before repair. Additionally, a wafer printability check is always desired after repairing a defect. AIMS™ review is the industry standard for this; however, performing AIMS™ review for all defects is expensive and very time consuming. A fast, accurate and economical mechanism is desired which can predict defect printability on wafer accurately and quickly from images captured using a high resolution inspection machine. Predicting defect printability from such images is challenging because the high resolution images do not correlate with actual mask contours. The challenge is increased by the use of optical conditions during inspection that differ from the actual scanner conditions, so that defects found in such images do not correlate directly with their actual impact on wafer. Our automated defect simulation tool predicts
Theoretical and numerical predictions of hypervelocity impact-generated plasma
NASA Astrophysics Data System (ADS)
Li, Jianqiao; Song, Weidong; Ning, Jianguo
2014-08-01
Hypervelocity impact-generated plasmas (HVIGPs) in a thermodynamic non-equilibrium state were theoretically analyzed, and a physical model was presented to explore the relationship between the plasma ionization degree and the internal energy of the system through a group of equations comprising a chemical reaction equilibrium equation, a chemical reaction rate equation, and an energy conservation equation. A series of AUTODYN 3D (a widely used dynamic numerical simulation software developed by Century Dynamics Inc.) simulations of the impacts of hypervelocity Al projectiles on targets at different incident angles was performed. The internal energy and the material density obtained from the numerical simulations were then used to calculate the ionization degree and the electron temperature. Based on a self-developed 2D smoothed particle hydrodynamics (SPH) code and the theoretical model, the plasmas generated by 6 hypervelocity impacts were directly simulated and their total charges were calculated. The numerical results are in good agreement with the experimental results as well as the empirical formulas, demonstrating that the theoretical model is supported by the AUTODYN 3D and self-developed 2D SPH simulations and is applicable to predicting HVIGPs. The study is of significance for astrophysical and cosmonautic research and safety.
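In this regime, the "chemical reaction equilibrium equation" linking ionization degree to the thermal state is a Saha-type relation. The sketch below solves the single-ionization Saha equation for a hypothetical impact-heated aluminium vapour; it is not the authors' full model, which also couples rate and energy-conservation equations, and the density and partition-function ratio are illustrative.

```python
import math

# Physical constants (SI)
KB = 1.380649e-23     # Boltzmann constant, J/K
ME = 9.1093837e-31    # electron mass, kg
H  = 6.62607015e-34   # Planck constant, J*s
EV = 1.602176634e-19  # J per eV

def saha_ionization_degree(T, n_total, E_ion_eV, g_ratio=1.0):
    """Single-ionization degree alpha from the Saha equation, solving
    alpha^2 / (1 - alpha) = S / n for a gas of total number density n."""
    S = 2.0 * g_ratio * (2.0 * math.pi * ME * KB * T / H ** 2) ** 1.5 \
        * math.exp(-E_ion_eV * EV / (KB * T))
    x = S / n_total
    return (-x + math.sqrt(x * x + 4.0 * x)) / 2.0

# Aluminium's first ionization energy is 5.99 eV; density is an assumed value.
alpha_cool = saha_ionization_degree(5_000.0, 1e26, 5.99)
alpha_hot  = saha_ionization_degree(20_000.0, 1e26, 5.99)
```

The steep growth of the ionization degree with temperature is what ties the plasma's total charge to the internal energy extracted from the hydrocode simulations.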
Numerical predictions of burner performance during pulverized coal combustion
Zarnescu, V.; Pisupati, S.V.
1999-07-01
The performance of four burners in terms of temperature and velocity profiles, residence time and NO{sub x} emissions was predicted using numerical simulations and a two-dimensional model for pulverized coal combustion. Numerical predictions for two burners used in a pilot-scale 0.5 MM Btu/hr (146.5 kW) down-fired combustor (DFC) are presented. Two other burner configurations were evaluated and compared with the ones used with the DFC for attaining lower NO{sub x} levels. Simulations were conducted for both coal and coal-water slurry as primary fuels. A sensitivity analysis of the predictions with respect to variations of the model parameters was performed. The results suggest that the higher NO{sub x} reduction with one of the burners used in the DFC is due to the improved near-burner aerodynamics and to better flame attachment. These improved conditions are influenced by a combination of geometric and flow parameters, such as burner dimensions, quarl diameter, inlet velocity, inlet temperature and swirl number.
Impact of Quikscat Data on Numerical Weather Prediction
NASA Technical Reports Server (NTRS)
Atlas, Robert; Ardizzone, J.; Bloom, S.; Brin, G.; Bungato, D.; Jusem, J. C.; Terry, J.; Yu, T.-W.
2001-01-01
Scatterometer observations of the ocean surface wind speed and direction improve the depiction and prediction of storms at sea. These data are especially valuable where observations are otherwise sparse, mostly in the Southern Hemisphere and tropics but also, on occasion, in the North Atlantic and North Pacific. The SeaWinds scatterometer on the QuikScat satellite was launched in July 1999 and it represents a dramatic departure in design from the other scatterometer instruments launched during the past decade (ERS-1,2 and NSCAT). The NASA Data Assimilation Office (DAO) was the first data assimilation center to assimilate QuikScat SeaWinds data and evaluate their impact on numerical weather prediction. Several data impact experiments have been performed, using systems from both the DAO (GEOS-3) and from NCEP (GDAS). In general, these experiments have shown a modest impact of SeaWinds data on numerical weather prediction, the magnitude of which appears to be comparable to the magnitude of the impact of AMI scatterometer data from the ERS satellites. Some of the main results from these experiments will be presented at the meeting.
Stempler, Shiri; Waldman, Yedael Y; Wolf, Lior; Ruppin, Eytan
2012-09-01
Numerous metabolic alterations are associated with the impairment of brain cells in Alzheimer's disease (AD). Here we use gene expression microarrays of both whole hippocampus tissue and hippocampal neurons of AD patients to investigate the ability of metabolic gene expression to predict AD progression and its cognitive decline. We find that the prediction accuracy of different AD stages is markedly higher when using neuronal expression data (0.9) than when using whole tissue expression (0.76). Furthermore, the metabolic genes' expression is shown to be as effective in predicting AD severity as the entire gene list. Remarkably, a regression model from hippocampal metabolic gene expression leads to a marked correlation of 0.57 with the Mini-Mental State Examination cognitive score. Notably, the expression of top predictive neuronal genes in AD is significantly higher than that of other metabolic genes in the brains of healthy subjects. Altogether, the analyses point to a subset of metabolic genes that is strongly associated with normal brain functioning and whose disruption plays a major role in AD.
Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model
Li, Zhen; Zhang, Renyu
2017-01-01
Motivation: Protein contacts contain key information for the understanding of protein structure and function; thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. Method: This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including the output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and the complex sequence-structure relationship and thus obtain higher-quality contact prediction regardless of how many sequence homologs are available for the proteins in question. Results: Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore > 0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact
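The residual ("shortcut") connection that makes such ultra-deep stacks trainable can be shown in miniature. The sketch below is a toy 1D residual block in plain Python with invented kernels, purely to illustrate y = x + F(x); the actual model uses learned 1D and 2D convolutions in a deep learning framework.

```python
# Toy 1D residual block: a convolutional transformation whose output is added
# back to its input through an identity shortcut. Kernels and the feature
# sequence are invented; a real network stacks many such blocks.
def conv1d_same(x, kernel):
    """1D convolution with zero padding so output length equals input length."""
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j] for j in range(k)) for i in range(len(x))]

def relu(v):
    return [max(0.0, a) for a in v]

def residual_block(x, kernel1, kernel2):
    """y = x + Conv(ReLU(Conv(x))). The identity shortcut keeps gradients
    flowing through very deep stacks, which is what makes 'ultra-deep'
    networks trainable in the first place."""
    h = relu(conv1d_same(x, kernel1))
    h = conv1d_same(h, kernel2)
    return [a + b for a, b in zip(x, h)]

features = [0.2, 1.0, -0.5, 0.3, 0.8]  # toy per-residue features
out = residual_block(features, [0.1, 0.5, 0.1], [0.2, 0.3, 0.2])
```

With all-zero kernels the block reduces exactly to the identity, which is the property that lets depth be added without degrading an already-trained mapping.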
Numerical prediction of low frequency combustion instability in a model ramjet combustor
Shang, H.M.; Chen, Y.S.; Shih, M.S.; Farmer, R.C.
1996-12-31
A numerical analysis has been conducted of low-frequency combustion instability in a model ramjet combustor. The facility is two-dimensional and comprises a long inlet duct, a dump combustion chamber, and an exhaust nozzle. Experiments showed that the combustor pressure oscillation under the particular operating condition did not have much cycle-to-cycle variation, with the main resonant frequency occurring at about 65 Hz. In the numerical analysis, a time-accurate Computational Fluid Dynamics (CFD) code with a pressure-correction algorithm is used, and the combustion process is modeled with a single-step chemistry model and a modified eddy breakup model. A high-order upwind scheme with a flux limiter is used for the convection terms. The convergence of the linear algebraic equations is accelerated through a preconditioned conjugate gradient matrix solver. The numerical predictions show that the flame oscillates in the combustion chamber at the calculated condition, consistent with the experimental schlieren photographs. The numerical analyses correctly predict the chamber pressure oscillation, although the frequency is over-predicted compared with the experimental data. The discrepancy can be explained by the simplified turbulence and combustion models used in this study and by the uncertainty in the inlet boundary conditions.
Evaluating the Impact of Aerosols on Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Freitas, Saulo; Silva, Arlindo; Benedetti, Angela; Grell, Georg; Members, Wgne; Zarzur, Mauricio
2015-04-01
The Working Group on Numerical Experimentation (WMO, http://www.wmo.int/pages/about/sec/rescrosscut/resdept_wgne.html) has organized an exercise to evaluate the impact of aerosols on NWP. This exercise will involve regional and global models currently used for weather forecast by the operational centers worldwide and aims at addressing the following questions: a) How important are aerosols for predicting the physical system (NWP, seasonal, climate) as distinct from predicting the aerosols themselves? b) How important is atmospheric model quality for air quality forecasting? c) What are the current capabilities of NWP models to simulate aerosol impacts on weather prediction? Toward this goal we have selected 3 strong or persistent events of aerosol pollution worldwide that could be fairly represented in current NWP models and that allowed for an evaluation of the aerosol impact on weather prediction. The selected events include a strong dust storm that blew off the coast of Libya and over the Mediterranean, an extremely severe episode of air pollution in Beijing and surrounding areas, and an extreme case of biomass burning smoke in Brazil. The experimental design calls for simulations with and without explicitly accounting for aerosol feedbacks in the cloud and radiation parameterizations. In this presentation we will summarize the results of this study focusing on the evaluation of model performance in terms of its ability to faithfully simulate aerosol optical depth, and the assessment of the aerosol impact on the predictions of near surface wind, temperature, humidity, rainfall and the surface energy budget.
Intermolecular potentials and the accurate prediction of the thermodynamic properties of water
NASA Astrophysics Data System (ADS)
Shvab, I.; Sadus, Richard J.
2013-11-01
The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm3 for a wide range of temperatures (298-650 K) and pressures (0.1-700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experiment data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.
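Several of the listed quantities (compressibilities, heat capacities) are typically obtained in MD from equilibrium fluctuations rather than numerical differentiation. As one hedged example, the isochoric heat capacity follows from the energy variance in an NVT ensemble, C_V = Var(E)/(k_B T^2); the "energies" below are synthetic Gaussian samples standing in for real MD output, with an arbitrarily chosen spread.

```python
import math
import random

# Fluctuation route to the isochoric heat capacity from an NVT trajectory:
# C_v = Var(E) / (kB * T^2). The energy samples are synthetic stand-ins.
KB = 1.380649e-23  # Boltzmann constant, J/K

def heat_capacity_nvt(energies, T):
    """Estimate C_v (J/K) from total-energy fluctuations at temperature T."""
    n = len(energies)
    mean = sum(energies) / n
    var = sum((e - mean) ** 2 for e in energies) / n
    return var / (KB * T ** 2)

random.seed(42)
T = 300.0
sigma = 1.0e-20  # assumed spread of the total energy, J (illustrative)
energies = [random.gauss(0.0, sigma) for _ in range(20000)]
cv = heat_capacity_nvt(energies, T)  # should approach sigma**2 / (KB * T**2)
```

With Gaussian input the estimator recovers the imposed variance, which is the sanity check one would also run before applying it to SPC/E or TIP4P/2005 trajectories.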
NASA Astrophysics Data System (ADS)
Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.
2016-07-01
In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. Therefore, parameters concerning the location of the observation points (magnetometers) are studied to this end. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study enable the placing of the magnetometers close to the EUT, thus achieving a high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as an EUT.
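Fitting the distance power law itself is a small least-squares problem in log-log space. The sketch below recovers the fall-off exponent (near 3 for a dipole-like source) from synthetic field values; the magnetometer distances and source strength are invented, and the paper's actual multi-source procedure is more involved.

```python
import math

# Fit B(r) = A * r**(-n) by ordinary least squares in log-log coordinates.
# For a dipole-like AC source the exponent n should come out near 3.
def fit_power_law(distances, fields):
    xs = [math.log(r) for r in distances]
    ys = [math.log(b) for b in fields]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    amplitude = math.exp(my - slope * mx)
    return amplitude, -slope  # B = amplitude * r**(-exponent)

r = [0.3, 0.5, 0.8, 1.2]            # magnetometer distances, m (assumed)
moment = 2.0e-7                     # source strength, arbitrary units
b = [moment / ri ** 3 for ri in r]  # ideal dipole-like fall-off
A, exponent = fit_power_law(r, b)
```

Because the synthetic data follow the law exactly, the fit returns the exponent 3 and the amplitude to machine precision; with noisy measurements the same regression gives the best-fit fall-off used to extrapolate the signature.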
A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows
NASA Astrophysics Data System (ADS)
Bijleveld, H. A.; Veldman, A. E. P.
2014-12-01
A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software as it is a very accurate and computationally reasonably cheap method. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plat is used to show the applicability of the method for rotating flows. The shown capabilities of the method indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.
Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R
2017-02-14
Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.
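The quasi-harmonic correction amounts to re-ranking structures by E_latt + F_vib(T) rather than by E_latt alone. The toy sketch below shows how softer phonons can reverse a ranking at 300 K; both the lattice energies and the six-mode frequency lists are invented, and a real calculation sums over the full phonon spectrum (and its volume dependence).

```python
import math

# Illustrative re-ranking of two hypothetical crystal structures. Structure B
# has the higher (worse) lattice energy but softer phonons, so its harmonic
# vibrational free energy at 300 K can reverse the lattice-energy ranking.
KB_EV = 8.617333e-5     # Boltzmann constant, eV/K
HBAR_EV = 6.582120e-16  # reduced Planck constant, eV*s

def vib_free_energy(freqs_hz, T):
    """Harmonic vibrational free energy: zero-point plus thermal terms."""
    f = 0.0
    for w in freqs_hz:
        hw = HBAR_EV * 2.0 * math.pi * w  # phonon energy, eV
        f += 0.5 * hw + KB_EV * T * math.log(1.0 - math.exp(-hw / (KB_EV * T)))
    return f

T = 300.0
latt = {"A": -1.000, "B": -0.995}           # lattice energies, eV (invented)
freqs = {"A": [3e12] * 6, "B": [1e12] * 6}  # phonon modes, Hz (invented)
total = {s: latt[s] + vib_free_energy(freqs[s], T) for s in latt}
ranked = sorted(total, key=total.get)       # lowest free energy first
```

Here A wins on lattice energy alone, but B's softer modes carry a larger (more negative) thermal free energy, flipping the order, which is the mechanism behind the 4-hydroxythiophene-2-carbonitrile case described above.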
Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain
Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul
2010-05-14
Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R{sub E891BN} can underestimate the annual peak solar heat gain of a typical roof or pavement (slope {le} 5:12 [23{sup o}]) by as much as 89 W m{sup -2}, and underestimate its peak surface temperature by up to 5 K. Using R{sub E891BN} to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R{sub g,0}, a simple and easily measured property that more accurately predicts solar heat gain. R{sub g,0} predicts the annual peak solar heat gain of a roof or pavement to within 2 W m{sup -2}, and overestimates N by no more than 3%. R{sub g,0} is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R{sub g,0} can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.
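The underlying arithmetic is an irradiance-weighted average of spectral reflectance, so the same surface scores differently under a NIR-rich beam-normal spectrum than under global sunlight. The two-band numbers below are toy stand-ins, not the E891 or AM1GH tables.

```python
# Solar reflectance as an irradiance-weighted average of spectral reflectance.
# A spectrally selective "cool colored" surface (low visible, high NIR
# reflectance) scores higher under a NIR-rich spectrum, which understates its
# solar heat gain (1 - R) under ordinary global sunlight.
def weighted_reflectance(reflectance, irradiance):
    total = sum(irradiance)
    return sum(r * i for r, i in zip(reflectance, irradiance)) / total

# Two bands only: [visible, near-infrared]; all fractions are illustrative.
cool_colored = [0.10, 0.80]  # dark-looking but NIR-reflective surface
beam_normal  = [0.45, 0.55]  # NIR-rich spectrum (E891-like, exaggerated)
global_horiz = [0.55, 0.45]  # global sunlight: larger visible fraction

r_beam = weighted_reflectance(cool_colored, beam_normal)
r_glob = weighted_reflectance(cool_colored, global_horiz)
```

Since r_beam exceeds r_glob for this surface, a beam-normal metric overstates its reflectance and hence understates its heat gain, which is the bias the AM1GH metric is designed to remove.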
Point-of-care cardiac troponin test accurately predicts heat stroke severity in rats.
Audet, Gerald N; Quinn, Carrie M; Leon, Lisa R
2015-11-15
Heat stroke (HS) remains a significant public health concern. Despite the substantial threat posed by HS, there is still no field or clinical test of HS severity. We suggested previously that circulating cardiac troponin (cTnI) could serve as a robust biomarker of HS severity after heating. In the present study, we hypothesized that a cTnI point-of-care (ctPOC) test could be used to predict severity and organ damage at the onset of HS. Conscious male Fischer 344 rats (n = 16) continuously monitored for heart rate (HR), blood pressure (BP), and core temperature (Tc) (radiotelemetry) were heated to a maximum Tc (Tc,Max) of 41.9 ± 0.1°C and recovered undisturbed for 24 h at an ambient temperature of 20°C. Blood samples were taken at Tc,Max and 24 h after heat via submandibular bleed and analyzed with the ctPOC test. POC cTnI band intensity was ranked on a simple four-point scale by two blinded observers and compared with cTnI levels measured by a clinical blood analyzer. Blood was also analyzed for biomarkers of systemic organ damage. HS severity, as previously defined using HR, BP, and recovery Tc profile during heat exposure, correlated strongly with cTnI (R(2) = 0.69) at Tc,Max. POC cTnI band intensity ranking accurately predicted cTnI levels (R(2) = 0.64) and HS severity (R(2) = 0.83). Five markers of systemic organ damage also correlated with ctPOC score (albumin, alanine aminotransferase, blood urea nitrogen, cholesterol, and total bilirubin; R(2) > 0.4). This suggests that cTnI POC tests can accurately determine HS severity and could serve as simple, portable, cost-effective HS field tests.
Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru
2016-01-01
Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs identified by experimental approaches cover only a small fraction of the whole PPI network, and those approaches have inherent disadvantages, such as being time-consuming and expensive and having high false positive rates. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixture of physicochemical and evolutionary-based feature extraction methods for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method mainly consist in introducing an effective feature extraction method that can capture discriminative features from the evolutionary-based information and physicochemical characteristics, after which a powerful and robust DVM classifier is employed. To the best of our knowledge, it is the first time that the DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to the traditional experimental methods for future proteomics research. PMID:27571061
Numerical prediction of microstructure and hardness in multicycle simulations
NASA Astrophysics Data System (ADS)
Oddy, A. S.; McDill, J. M. J.
1996-06-01
Thermal-microstructural predictions are made and compared to physical simulations of heat-affected zones in multipass and weaved welds. The microstructural prediction algorithm includes reaustenitization kinetics, grain growth, austenite decomposition kinetics, hardness, and tempering. Microstructural simulation of weaved welds requires that the algorithm include transient reaustenitization, austenite decomposition for arbitrary thermal cycles including during reheating, and tempering. Material properties for each of these phenomena are taken from the best available literature. The numerical predictions are compared with the results of physical simulations made at the Metals Technology Laboratory, CANMET, on a Gleeble 1500 simulator. Thermal histories used in the physical simulations included single-pass welds, isothermal tempering, two-cycle, and three-cycle welds. The two- and three-cycle welds include temper-bead and weaved-weld simulations. A recurring theme in the analysis is the significant variation found in the material properties for the same grade of steel. This affected all the material properties used including those governing reaustenitization, austenite grain growth, austenite decomposition, and hardness. Hardness measurements taken from the literature show a variation of ±5 to 30 HV on the same sample. Alloy differences within the allowable range also led to hardness variations of ±30 HV for the heat-affected zone of multipass welds. The predicted hardnesses agree extremely well with those taken from the physical simulations. Some differences due to problems with the austenite decomposition properties were noted in that bainite formation was predicted to occur somewhat more rapidly than was found experimentally. Reaustenitization values predicted during the rapid excursions to intercritical temperatures were also in good qualitative agreement with those measured experimentally.
NASA Astrophysics Data System (ADS)
An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.
2017-01-01
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse in space and/or time. Although this situation is typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time-delayed measurements. We show that in certain circumstances it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
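Standard nudging, the baseline that the time-delay method above generalizes, simply relaxes the model state toward each incoming measurement. The sketch below is a minimal illustration (not the authors' algorithm) using the Lorenz-63 system as a stand-in for the chaotic dynamics; the gain, time step, and initial conditions are assumed values.

```python
import math

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 vector field, a standard stand-in for a chaotic system."""
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def euler(s, ds, dt):
    return tuple(v + dv * dt for v, dv in zip(s, ds))

dt, k = 0.002, 50.0            # time step and nudging gain (assumed values)
truth = (1.0, 1.0, 1.0)        # "nature" run producing the measurements
est = (5.0, -5.0, 20.0)        # estimate started far from the truth

for _ in range(100000):        # 200 time units of integration
    ft, fe = lorenz(truth), lorenz(est)
    # relax only the observed variable x toward its measured value
    nudged = (fe[0] + k * (truth[0] - est[0]), fe[1], fe[2])
    truth, est = euler(truth, ft, dt), euler(est, nudged, dt)

err = math.sqrt(sum((a - b) ** 2 for a, b in zip(truth, est)))
```

Only x is observed here, yet the full state estimate converges to the truth; this synchronization property of the driven subsystem is what nudging-based estimation exploits.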
Prediction of cavitating flow noise by direct numerical simulation
NASA Astrophysics Data System (ADS)
Seo, Jung H.; Moon, Young J.; Shin, Byeong Rog
2008-06-01
In this study, a direct numerical simulation procedure for cavitating flow noise is presented. The compressible Navier-Stokes equations are written for the two-phase fluid, employing a density-based homogeneous equilibrium model with a linearly-combined equation of state. To resolve the linear and non-linear waves in the cavitating flow, a sixth-order compact central scheme is utilized with a selective spatial filtering technique. The present cavitation model and numerical methods are validated for two benchmark problems: linear wave convection and acoustic saturation in a bubbly flow. The cavitating flow noise is then computed for a 2D circular cylinder flow at a Reynolds number of 200 (based on the cylinder diameter) and cavitation numbers σ=0.7-2. It is observed that, at cavitation numbers σ=1 and 0.7, the cavitating flow and noise characteristics are significantly changed by the shock waves due to the coherent collapse of the cloud cavitation in the wake. To verify the present direct simulation and further analyze the sources of cavitation noise, an acoustic analogy based on the classical theory of Fitzpatrick and Strasberg is derived. The far-field noise predicted by direct simulation compares well with that of the acoustic analogy, and it also confirms the f⁻² decay rate in the spectrum, as predicted by the model of Fitzpatrick and Strasberg with the Rayleigh-Plesset equation.
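The Rayleigh-Plesset equation invoked in the acoustic analogy can be integrated directly for a single spherical bubble. The sketch below uses assumed air-bubble-in-water parameters and neglects surface tension, viscosity, and vapor pressure; it illustrates the equation itself, not the paper's noise model.

```python
import math

# illustrative parameters for an air bubble in water (assumed values)
rho   = 1000.0        # liquid density [kg/m^3]
p_inf = 101325.0      # ambient pressure [Pa]
gamma = 1.4           # polytropic exponent for the bubble gas
R0    = 1.0e-3        # equilibrium radius [m]
p_g0  = p_inf         # gas pressure at equilibrium (surface tension neglected)

def accel(R, Rdot):
    """Rayleigh-Plesset: R*Rddot + 1.5*Rdot**2 = (p_B - p_inf)/rho."""
    p_B = p_g0 * (R0 / R) ** (3.0 * gamma)
    return ((p_B - p_inf) / rho - 1.5 * Rdot ** 2) / R

dt, R, Rdot = 1.0e-7, 1.05 * R0, 0.0   # 5 % initial radius perturbation
radii = []
for _ in range(20000):                  # ~2 ms of simulated time
    Rdot += accel(R, Rdot) * dt         # semi-implicit Euler step
    R    += Rdot * dt
    radii.append(R)
```

The small perturbation produces bounded radial oscillations about the equilibrium radius at roughly the Minnaert frequency; violent collapse, the noise source in the cavitating wake, arises when the ambient pressure term is driven strongly instead.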
Impact of QuikSCAT Data on Numerical Weather Prediction
NASA Technical Reports Server (NTRS)
Atlas, Robert
2002-01-01
One of the important applications of satellite surface wind observations is to increase the accuracy of weather analyses and forecasts. Satellite surface wind data can improve numerical weather prediction (NWP) model forecasts by contributing to improved analyses of the surface wind field and air-sea fluxes. Through the data assimilation process, these data can also improve atmospheric mass and motion fields in the free atmosphere above the surface. The SeaWinds scatterometer on the QuikSCAT satellite was launched in July 1999 and represented a dramatic departure in design from the other scatterometer instruments launched during the past decade (ERS-1,2 and NSCAT). The NASA Data Assimilation Office (DAO) was the first data assimilation center to assimilate QuikSCAT SeaWinds data and evaluate their impact on numerical weather prediction. Following the launch of QuikSCAT, a detailed evaluation of the initial surface wind data sets was performed as part of a collaborative project between the Environmental Modeling Center of NCEP, NESDIS, and the DAO. More recently, the impact of QuikSCAT data was evaluated in detailed experiments using the NCEP operational data assimilation system. As a result of the beneficial impact obtained, NCEP began operational utilization of QuikSCAT data. Results from these experiments, as well as recent DAO assimilation experiments showing the impact of QuikSCAT data on stratospheric analyses and forecasts, will be presented at the meeting.
Parameterization of mires in a numerical weather prediction model
NASA Astrophysics Data System (ADS)
Yurova, Alla; Tolstykh, Mikhail; Nilsson, Mats; Sirin, Andrey
2014-11-01
Mires (peat-accumulating wetlands) occupy 8.1% of Russian territory and are especially numerous in the western Siberian Lowlands, where they can significantly modify atmospheric heat and water balances. They also influence air temperatures and humidity in the boundary layers closest to the Earth's surface. The purpose of our study was to incorporate the influence of mires into the SL-AV numerical weather prediction model, which is used operationally in the Hydrometeorological Center of Russia. This was done by adjusting the multilayer soil component (by modifying the peat thermal conductivity in the heat diffusion equation and reformulating the lower boundary condition for Richards' equation), and reformulating both the evapotranspiration and runoff from mires. When evaporation from mires was incorporated into the SL-AV model, the latent heat flux in the areas dominated by mires increased strongly, resulting in surface cooling and hence reductions in the sensible heat flux and outgoing terrestrial long-wave radiation. The results show that including mires significantly decreased the bias and RMSE of predictions of temperature and relative humidity 2 m above the ground for lead times of 12, 36, and 60 h from 00 h Coordinated Universal Time (evening conditions), but did not eliminate the bias in forecasts for lead times of 24, 48, and 72 h (morning conditions) in Siberia. Different parameterizations of mire evapotranspiration are also compared.
Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo
2015-11-01
This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on the REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at transcritical conditions. Based on the look-up table method, we propose a numerical method that satisfies high-order spatial accuracy, a spurious-oscillation-free property, and the capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining the spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of the total energy equation to achieve the spurious-oscillation-free property with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
NASA Astrophysics Data System (ADS)
Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid
2016-07-01
We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].
Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer
2017-04-01
Recently, Artificial Intelligence (AI) has been used widely in medicine and the health care sector. Within AI, classification and prediction by machine learning constitute a major field. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.
Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish
2016-04-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.
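The projection-plus-nonlinearity structure described above can be sketched in a few lines; the ERF weights, gain, bias, and the choice of a logistic nonlinearity are all invented for illustration rather than taken from the fitted model.

```python
import math

# hypothetical electrical receptive field (ERF) over four electrodes;
# weights, gain, and bias are invented for illustration
erf = [0.8, 0.4, -0.2, 0.1]
gain, bias = 3.0, -1.0

def spike_probability(stimulus):
    """Linear-nonlinear model: linear projection onto the ERF,
    followed by a static logistic nonlinearity."""
    drive = sum(w * s for w, s in zip(erf, stimulus))
    return 1.0 / (1.0 + math.exp(-(gain * drive + bias)))

# for a fixed stimulus power, ERF-proportional stimulation drives the
# cell harder than spreading the same power evenly over the electrodes
norm = math.sqrt(sum(w * w for w in erf))
aligned = [w / norm for w in erf]      # unit-power, ERF-proportional
uniform = [0.5, 0.5, 0.5, 0.5]         # unit-power, equal amplitudes
p_aligned = spike_probability(aligned)
p_uniform = spike_probability(uniform)
```

Comparing `p_aligned` and `p_uniform` mirrors the abstract's claim that stimulation proportional to the ERF yields higher efficacy for a fixed amount of power.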
ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers
Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.
2009-02-01
A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.
Fast and accurate pressure-drop prediction in straightened atherosclerotic coronary arteries.
Schrauwen, Jelle T C; Koeze, Dion J; Wentzel, Jolanda J; van de Vosse, Frans N; van der Steen, Anton F W; Gijsen, Frank J H
2015-01-01
Atherosclerotic disease progression in coronary arteries is influenced by wall shear stress. To compute patient-specific wall shear stress, computational fluid dynamics (CFD) is required. In this study we propose a method for computing the pressure-drop in regions proximal and distal to a plaque, which can serve as a boundary condition in CFD. As a first step towards exploring the proposed method we investigated ten straightened coronary arteries. First, the flow fields were calculated with CFD and velocity profiles were fitted on the results. Second, the Navier-Stokes equation was simplified and solved with the fitted velocity profiles to obtain a pressure-drop estimate (Δp_1). Next, Δp_1 was compared to the pressure-drop from CFD (Δp_CFD) as a validation step. Finally, the velocity profiles, and thus the pressure-drop, were predicted based on geometry and flow, resulting in Δp_geom. We found that Δp_1 adequately estimated Δp_CFD with velocity profiles that have one free parameter β. This β was successfully related to geometry and flow, resulting in an excellent agreement between Δp_CFD and Δp_geom: 3.9 ± 4.9% difference at Re = 150. We showed that this method can quickly and accurately predict the pressure-drop on the basis of geometry and flow in straightened coronary arteries that are mildly diseased.
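For a fully developed parabolic (Poiseuille) profile, the simplified momentum balance reduces to the classical Hagen-Poiseuille relation, the simplest member of the family of profile-based estimates that the study generalizes with its β-parameterized profiles. A sketch with assumed coronary-scale numbers, not values from the study:

```python
import math

def poiseuille_dp(mu, L, Q, R):
    """Hagen-Poiseuille pressure drop for fully developed laminar
    flow in a straight circular tube: dp = 8*mu*L*Q / (pi*R**4)."""
    return 8.0 * mu * L * Q / (math.pi * R ** 4)

# illustrative coronary-scale numbers (assumed for this sketch)
mu = 3.5e-3      # blood viscosity [Pa*s]
R  = 1.5e-3      # lumen radius [m]
L  = 0.02        # segment length [m]
Q  = 1.0e-6      # volumetric flow [m^3/s] (~60 mL/min)

dp = poiseuille_dp(mu, L, Q, R)   # pressure drop over the segment [Pa]
```

The strong R⁻⁴ dependence is why even mild lumen narrowing changes the pressure-drop substantially, and why a geometry-aware estimate is an attractive CFD boundary condition.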
Accurate load prediction by BEM with airfoil data from 3D RANS simulations
NASA Astrophysics Data System (ADS)
Schneider, Marc S.; Nitzsche, Jens; Hennings, Holger
2016-09-01
In this paper, two methods for the extraction of airfoil coefficients from 3D CFD simulations of a wind turbine rotor are investigated, and these coefficients are used to improve the load prediction of a BEM code. The coefficients are extracted from a number of steady RANS simulations, using either averaging of velocities in annular sections, or an inverse BEM approach for determination of the induction factors in the rotor plane. It is shown that these 3D rotor polars are able to capture the rotational augmentation at the inner part of the blade as well as the load reduction by 3D effects close to the blade tip. They are used as input to a simple BEM code and the results of this BEM with 3D rotor polars are compared to the predictions of BEM with 2D airfoil coefficients plus common empirical corrections for stall delay and tip loss. While BEM with 2D airfoil coefficients produces a very different radial distribution of loads than the RANS simulation, the BEM with 3D rotor polars manages to reproduce the loads from RANS very accurately for a variety of load cases, as long as the blade pitch angle is not too different from the cases from which the polars were extracted.
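For context, a minimal single-annulus BEM iteration looks as follows; it uses a toy thin-airfoil polar in place of the 2D coefficients or 3D rotor polars discussed above, and every number is an illustrative assumption.

```python
import math

# minimal single-annulus BEM sketch; all values are illustrative
# assumptions (3-bladed rotor, one blade station, toy polar)
B, R, r, c = 3, 40.0, 30.0, 2.0        # blades, tip radius, station, chord [m]
tsr = 7.0                               # tip-speed ratio
twist = math.radians(3.0)               # local twist plus pitch [rad]
sigma = B * c / (2.0 * math.pi * r)     # local solidity

def polar(alpha):
    """Toy 2D polar: thin-airfoil lift slope and a small constant drag."""
    return 2.0 * math.pi * alpha, 0.01

a, ap = 0.3, 0.0                        # axial and tangential induction
for _ in range(200):                    # fixed-point iteration
    phi = math.atan2(1.0 - a, tsr * (r / R) * (1.0 + ap))   # inflow angle
    cl, cd = polar(phi - twist)
    cn = cl * math.cos(phi) + cd * math.sin(phi)            # normal coeff.
    ct = cl * math.sin(phi) - cd * math.cos(phi)            # tangential coeff.
    a  = 1.0 / (4.0 * math.sin(phi) ** 2 / (sigma * cn) + 1.0)
    ap = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1.0)
```

Replacing `polar` with 2D airfoil tables plus empirical corrections, or with the extracted 3D rotor polars, is exactly where the two approaches compared in the paper differ; the momentum balance itself is unchanged.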
An operational phenological model for numerical pollen prediction
NASA Astrophysics Data System (ADS)
Scheifinger, Helfried
2010-05-01
The general prevalence of seasonal allergic rhinitis is estimated to be about 15% in Europe, and still increasing. Pre-emptive measures require both the reliable assessment of production and release of various pollen species and the forecasting of their atmospheric dispersion. For this purpose, numerical pollen prediction schemes are being developed by a number of European weather services in order to supplement and improve the qualitative pollen prediction systems with state-of-the-art instruments. Pollen emission is spatially and temporally highly variable throughout the vegetation period and not directly observed, which precludes a straightforward application of dispersion models to simulate pollen transport. Even the beginning and end of flowering, which indicate the time period of potential pollen emission, are not (yet) available in real time. One way to create a proxy for the beginning, the course, and the end of the pollen emission is to simulate it as a function of real-time temperature observations. In this work the European phenological data set of the COST725 initiative forms the basis for modelling the beginning of flowering of 15 species, some of which emit allergenic pollen. In order to keep the problem as simple as possible for the sake of spatial interpolation, a 3-parameter temperature sum model was implemented in a real-time operational procedure, which calculates the spatial distribution of the entry dates for the current day and 24, 48 and 72 hours in advance. As a stand-alone phenological model, and combined with back trajectories, it is thought to support the qualitative pollen prediction scheme at the Austrian national weather service. Apart from that, it is planned to incorporate it into a numerical pollen dispersion model. More details, open questions, and first results of the operational phenological model will be discussed and presented.
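A 3-parameter temperature sum model of the kind described predicts the beginning of flowering once warmth accumulated above a base temperature, starting from a fixed day of year, crosses a critical threshold. The sketch below uses invented parameter values, not the COST725 fits:

```python
# three-parameter temperature-sum (growing-degree-day) model sketch;
# the parameter values are illustrative assumptions, not COST725 fits
START_DAY = 40      # day of year when accumulation begins
T_BASE    = 5.0     # base temperature [deg C]
F_CRIT    = 120.0   # critical temperature sum [deg C * day]

def flowering_day(daily_mean_temps):
    """Return the day of year on which the accumulated temperature sum
    first reaches F_CRIT, or None if it never does."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        if day < START_DAY:
            continue
        total += max(t - T_BASE, 0.0)
        if total >= F_CRIT:
            return day
    return None

# toy season: daily mean temperature rises 0.2 degC/day from -5 degC on Jan 1
season = [-5.0 + 0.2 * d for d in range(365)]
```

Given daily temperature forecasts instead of observations, the same accumulation run forward 24, 48, or 72 hours yields the predicted entry-date fields described above.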
Accurate and Robust Genomic Prediction of Celiac Disease Using Statistical Learning
Abraham, Gad; Tye-Din, Jason A.; Bhalala, Oneil G.; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael
2014-01-01
Practical application of genomic-based risk stratification to clinical diagnosis is appealing yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity replicated both in cross-validation within each cohort (AUC of 0.87–0.89) and in independent replication across cohorts (AUC of 0.86–0.9), despite differences in ethnicity. The models explained 30–35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher risk for CD can identify those that would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases. PMID:24550740
Numerical predictions of EML (electromagnetic launcher) system performance
Schnurr, N.M.; Kerrisk, J.F.; Davidson, R.F.
1987-01-01
The performance of an electromagnetic launcher (EML) depends on a large number of parameters, including the characteristics of the power supply, rail geometry, rail and insulator material properties, injection velocity, and projectile mass. EML system performance is frequently limited by structural or thermal effects in the launcher (railgun). A series of computer codes has been developed at the Los Alamos National Laboratory to predict EML system performance and to determine the structural and thermal constraints on barrel design. These codes include FLD, a two-dimensional electrostatic code used to calculate the high-frequency inductance gradient and surface current density distribution for the rails; TOPAZRG, a two-dimensional finite-element code that simultaneously analyzes thermal and electromagnetic diffusion in the rails; and LARGE, a code that predicts the performance of the entire EML system. The NIKE2D code, developed at the Lawrence Livermore National Laboratory, is used to perform structural analyses of the rails. These codes have been instrumental in the design of the Lethality Test System (LTS) at Los Alamos, which has an ultimate goal of accelerating a 30-g projectile to a velocity of 15 km/s. The capabilities of the individual codes and the coupling of these codes to perform a comprehensive analysis are discussed in relation to the LTS design. Numerical predictions are compared with experimental data and presented for the LTS prototype tests.
Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay
2012-07-01
Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes, such as hypofractionated breast therapy, have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against measurements by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS XiO® treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between the absorbed doses measured by TLD and those calculated by the TPS (p > 0.05, one-tailed). Dose accuracy of up to 2.21% was found. The deviations from the calculated absorbed doses were overall larger (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool to assess surface dose. Our studies have shown that radiation treatment accuracy, expressed as a comparison between calculated doses (by TPS) and measured doses (by TLD dosimetry), can be accurately predicted for tangential treatment of the chest wall after mastectomy.
Numerical Prediction of SERN Performance using WIND code
NASA Technical Reports Server (NTRS)
Engblom, W. A.
2003-01-01
Computational results are presented for the performance and flow behavior of single-expansion ramp nozzles (SERNs) during overexpanded operation and transonic flight. Three-dimensional Reynolds-Averaged Navier Stokes (RANS) results are obtained for two vehicle configurations, including the NASP Model 5B and ISTAR RBCC (a variant of X-43B) using the WIND code. Numerical predictions for nozzle integrated forces and pitch moments are directly compared to experimental data for the NASP Model 5B, and adequate-to-excellent agreement is found. The sensitivity of SERN performance and separation phenomena to freestream static pressure and Mach number is demonstrated via a matrix of cases for both vehicles. 3-D separation regions are shown to be induced by either lateral (e.g., sidewall) shocks or vertical (e.g., cowl trailing edge) shocks. Finally, the implications of this work to future preliminary design efforts involving SERNs are discussed.
Estimating 1 min rain rate distributions from numerical weather prediction
NASA Astrophysics Data System (ADS)
Paulson, Kevin S.
2017-01-01
Internationally recognized prognostic models of rain fade on terrestrial and Earth-space EHF links rely fundamentally on distributions of 1 min rain rates. Currently, in Rec. ITU-R P.837-6, these distributions are generated using the Salonen-Poiares Baptista method where 1 min rain rate distributions are estimated from long-term average annual accumulations provided by numerical weather prediction (NWP). This paper investigates an alternative to this method based on the distribution of 6 h accumulations available from the same NWPs. Rain rate fields covering the UK, produced by the Nimrod network of radars, are integrated to estimate the accumulations provided by NWP, and these are linked to distributions of fine-scale rain rates. The proposed method makes better use of the available data. It is verified on 15 NWP regions spanning the UK, and the extension to other regions is discussed.
Polzer, S; Gasser, T C; Novak, K; Man, V; Tichy, M; Skacel, P; Bursa, J
2015-03-01
Structure-based constitutive models might help in exploring the mechanisms by which arterial wall histology is linked to wall mechanics. This study aims to validate a recently proposed structure-based constitutive model. Specifically, the model's ability to predict the mechanical biaxial response of porcine aortic tissue with predefined collagen structure was tested. Histological slices from porcine thoracic aorta wall (n=9) were automatically processed to quantify the collagen fiber organization, and mechanical testing identified the non-linear properties of the wall samples (n=18) over a wide range of biaxial stretches. Histological and mechanical experimental data were used to identify the model parameters of a recently proposed multi-scale constitutive description for arterial layers. The model's predictive capability was tested with respect to interpolation and extrapolation. Collagen in the media was predominantly aligned in the circumferential direction (planar von Mises distribution with concentration parameter b_M = 1.03 ± 0.23), and its coherence decreased gradually from the luminal to the abluminal tissue layers (inner media, b = 1.54 ± 0.40; outer media, b = 0.72 ± 0.20). In contrast, the collagen in the adventitia was aligned almost isotropically (b_A = 0.27 ± 0.11), and no features, such as families of coherent fibers, were identified. The applied constitutive model captured the aorta biaxial properties accurately (coefficient of determination R² = 0.95 ± 0.03) over the entire range of biaxial deformations and with physically meaningful model parameters. Good predictive properties, well outside the parameter identification space, were observed (R² = 0.92 ± 0.04). Multi-scale constitutive models equipped with realistic micro-histological data can predict macroscopic non-linear aorta wall properties. Collagen largely defines even the low-strain properties of the media, which explains the origin of the wall anisotropy seen at this strain level. The structure and mechanical
Gosink, Luke; Bensema, Kevin; Pulsipher, Trenton; Obermaier, Harald; Henry, Michael; Childs, Hank; Joy, Kenneth I
2013-12-01
Numerical ensemble forecasting is a powerful tool that drives many risk analysis efforts and decision making tasks. These ensembles are composed of individual simulations that each uniquely model a possible outcome for a common event of interest: e.g., the direction and force of a hurricane, or the path of travel and mortality rate of a pandemic. This paper presents a new visual strategy to help quantify and characterize a numerical ensemble's predictive uncertainty: i.e., the ability for ensemble constituents to accurately and consistently predict an event of interest based on ground truth observations. Our strategy employs a Bayesian framework to first construct a statistical aggregate from the ensemble. We extend the information obtained from the aggregate with a visualization strategy that characterizes predictive uncertainty at two levels: at a global level, which assesses the ensemble as a whole, as well as a local level, which examines each of the ensemble's constituents. Through this approach, modelers are able to better assess the predictive strengths and weaknesses of the ensemble as a whole, as well as individual models. We apply our method to two datasets to demonstrate its broad applicability.
Laser Hardening Prediction Tool Based On a Solid State Transformations Numerical Model
Martinez, S.; Ukar, E.; Lamikiz, A.
2011-01-17
This paper presents a tool to predict the hardened layer in selective laser hardening processes, where the laser beam heats the part locally while the bulk acts as a heat sink. The tool for accurately predicting the temperature field in the workpiece is a numerical model that combines a three-dimensional transient numerical solution for heating with the possibility of introducing different laser sources. The solid-state transformations were modeled using a kinetic model based on the Johnson-Mehl-Avrami equation. Considering this equation, an experimental adjustment of the transformation parameters was carried out to obtain the continuous heating transformation (CHT) diagrams. With the temperature field and CHT diagrams, the model predicts the percentage of base material converted into austenite. These two parameters are used as a first step to estimate the depth of the hardened layer in the part. The model has been adjusted and validated with experimental data for DIN 1.2379, a cold-work tool steel typically used in the mold and die making industry. This steel presents solid-state diffusive transformations at relatively low temperatures. These transformations must be considered in order to achieve good accuracy in the temperature field prediction during the heating phase. For model validation, the surface temperature measured by pyrometry and the thermal field, as well as the hardened layer obtained from a metallographic study, were compared with the model data, showing good agreement.
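The Johnson-Mehl-Avrami kinetics at the core of such models give the transformed fraction under isothermal holding as X(t) = 1 − exp(−k tⁿ). A minimal sketch with illustrative constants, not fitted values for DIN 1.2379:

```python
import math

# JMAK (Johnson-Mehl-Avrami) isothermal transformation sketch;
# k and n are illustrative constants, not fitted values for DIN 1.2379
k, n = 0.05, 2.5     # rate constant [s^-n] and Avrami exponent

def transformed_fraction(t):
    """Fraction of material transformed after an isothermal hold of t seconds:
    X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - math.exp(-k * t ** n)

# the sigmoidal growth from 0 toward 1 is the characteristic JMAK shape
f_short = transformed_fraction(1.0)
f_long  = transformed_fraction(10.0)
```

In a non-isothermal laser cycle, the same kinetics are typically applied incrementally over the computed temperature history, which is how the temperature field and the CHT data combine to give the austenitized fraction.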
NASA Astrophysics Data System (ADS)
Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.
2016-10-01
The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
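The empirical correction mentioned above is essentially a constant wavelength offset estimated from a calibration set of known spectra. A minimal sketch (the numbers are fabricated to mimic a PBE0-like systematic overestimate; they are not the paper's data):

```python
from statistics import mean

def constant_offset_correction(calc_nm, expt_nm):
    """Estimate the systematic offset of a DFT method from pairs of
    (calculated, experimental) peak wavelengths, and return both the
    offset and a function that applies the correction to new
    predictions."""
    offset = mean(c - e for c, e in zip(calc_nm, expt_nm))
    return offset, (lambda wl: wl - offset)

# Hypothetical calibration set with a ~16 nm systematic overestimate.
calc = [416.0, 389.5, 441.2]
expt = [400.0, 374.0, 425.0]
offset, correct = constant_offset_correction(calc, expt)
print(f"mean offset = {offset:.1f} nm; "
      f"corrected first peak = {correct(calc[0]):.1f} nm")
```

Subtracting a single fitted offset is exactly the kind of empirical correction that reduced the reported uncertainties from roughly 16 nm to roughly 6 nm.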
Wong, Florence; O’Leary, Jacqueline G; Reddy, K Rajender; Patton, Heather; Kamath, Patrick S; Fallon, Michael B; Garcia-Tsao, Guadalupe; Subramanian, Ram M.; Malik, Raza; Maliakkal, Benedict; Thacker, Leroy R; Bajaj, Jasmohan S
2015-01-01
Background & Aims A consensus conference proposed that cirrhosis-associated acute kidney injury (AKI) be defined as an increase in serum creatinine by >50% from the stable baseline value in <6 months or by ≥0.3 mg/dL in <48 hrs. We prospectively evaluated the ability of these criteria to predict mortality within 30 days among hospitalized patients with cirrhosis and infection. Methods 337 patients with cirrhosis who were admitted with an infection or developed one in hospital (56% men; 56±10 y old; model for end-stage liver disease [MELD] score, 20±8) were followed. We compared data on 30-day mortality, hospital length of stay, and organ failure between patients with and without AKI. Results 166 patients (49%) developed AKI during hospitalization, based on the consensus criteria. Patients who developed AKI had higher admission Child-Pugh scores (11.0±2.1 vs 9.6±2.1; P<.0001) and MELD scores (23±8 vs 17±7; P<.0001), and lower mean arterial pressure (81±16 mmHg vs 85±15 mmHg; P<.01), than those who did not. Also higher among patients with AKI were mortality within 30 days (34% vs 7%), intensive care unit transfer (46% vs 20%), ventilation requirement (27% vs 6%), and shock (31% vs 8%); AKI patients also had longer hospital stays (17.8±19.8 days vs 13.3±31.8 days) (all P<.001). 56% of AKI episodes were transient, 28% were persistent, and 16% resulted in dialysis. Mortality was 80% among those without renal recovery, higher than among those with partial (40%) or complete recovery (15%) and AKI-free patients (7%; P<.0001). Conclusions 30-day mortality is 10-fold higher among infected hospitalized cirrhotic patients with irreversible AKI than among those without AKI. The consensus definition of AKI accurately predicts 30-day mortality, length of hospital stay, and organ failure. PMID:23999172
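The consensus definition evaluated above reduces to two numeric checks on serum creatinine. A minimal sketch of that logic (the function name and patient values are hypothetical; enforcing the <6 month and <48 h timing windows is left to the caller):

```python
def meets_aki_criteria(stable_baseline_cr, current_cr, cr_48h_ago=None):
    """Consensus definition of cirrhosis-associated AKI as stated in
    the abstract: serum creatinine (mg/dL) rose >50% from a stable
    baseline obtained within the prior 6 months, OR rose >=0.3 mg/dL
    within 48 hours. The caller must ensure the baseline is <6 months
    old and that cr_48h_ago was measured <48 h before current_cr."""
    rose_50pct = current_cr > 1.5 * stable_baseline_cr
    rose_abs = (cr_48h_ago is not None
                and current_cr - cr_48h_ago >= 0.3)
    return rose_50pct or rose_abs

# Hypothetical patients:
print(meets_aki_criteria(1.0, 1.6))                   # >50% rise
print(meets_aki_criteria(1.0, 1.2, cr_48h_ago=0.8))   # +0.4 mg/dL in 48 h
print(meets_aki_criteria(1.0, 1.2, cr_48h_ago=1.1))   # neither criterion
```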
Cluster abundance in chameleon f(R) gravity I: toward an accurate halo mass function prediction
NASA Astrophysics Data System (ADS)
Cataneo, Matteo; Rapetti, David; Lombriser, Lucas; Li, Baojiu
2016-12-01
We refine the mass- and environment-dependent spherical collapse model of chameleon f(R) gravity by calibrating a phenomenological correction inspired by the parameterized post-Friedmann framework against high-resolution N-body simulations. We employ our method to predict the corresponding modified halo mass function, and provide fitting formulas to calculate the enhancement of the f(R) halo abundance with respect to that of General Relativity (GR) within a precision of ≲5% relative to the results obtained in the simulations. Similar accuracy can be achieved for the full f(R) mass function on the condition that the modeling of the reference GR abundance of halos is accurate at the percent level. We use our fits to forecast constraints on the additional scalar degree of freedom of the theory, finding that upper bounds competitive with current Solar System tests are within reach of cluster number count analyses from ongoing and upcoming surveys at much larger scales. Importantly, the flexibility of our method also allows it to be applied to other scalar-tensor theories characterized by a mass- and environment-dependent spherical collapse.
Accurate prediction of band gaps and optical properties of HfO2
NASA Astrophysics Data System (ADS)
Ondračka, Pavel; Holec, David; Nečas, David; Zajíčková, Lenka
2016-10-01
We report on optical properties of various polymorphs of hafnia predicted within the framework of density functional theory. The full potential linearised augmented plane wave method was employed together with the Tran-Blaha modified Becke-Johnson potential (TB-mBJ) for exchange and local density approximation for correlation. Unit cells of monoclinic, cubic and tetragonal crystalline, and a simulated annealing-based model of amorphous hafnia were fully relaxed with respect to internal positions and lattice parameters. Electronic structures and band gaps for monoclinic, cubic, tetragonal and amorphous hafnia were calculated using three different TB-mBJ parametrisations and the results were critically compared with the available experimental and theoretical reports. Conceptual differences between a straightforward comparison of experimental measurements to a calculated band gap on the one hand and to a whole electronic structure (density of electronic states) on the other hand were pointed out, suggesting the latter should be used whenever possible. Finally, dielectric functions were calculated at two levels, using the random phase approximation without local field effects and with a more accurate Bethe-Salpeter equation (BSE) to account for excitonic effects. We conclude that a satisfactory agreement with experimental data for HfO2 was obtained only in the latter case.
Numerical prediction on the dispersion of pollutant particles
NASA Astrophysics Data System (ADS)
Osman, Kahar; Ali, Zairi; Ubaidullah, S.; Zahid, M. N.
2012-06-01
The increasing concern over air pollution has led people around the world to seek more efficient ways to control the problem. Air dispersion modeling has proven to be one of the alternatives that provide an economical way to manage the growing threat of air pollution. The objective of this research is to develop a practical numerical algorithm to predict the dispersion of pollutant particles around a specific source of emission. The source selected was a rubber wood manufacturing plant. A Gaussian-plume model was used as the air dispersion model due to its simplicity and generic applicability. Results of this study show that the ground-level concentrations of the pollutant particles reached approximately 90 μg/m3, comparable to values obtained with other software. This value surpasses the limit of 50 μg/m3 stipulated by the National Ambient Air Quality Standard (NAAQS) and the Recommended Malaysian Guidelines (RMG) set by the Environment Department of Malaysia. The results also show higher concentrations of pollutant particles during dry seasons than during rainy seasons. In general, the developed algorithm is shown to be able to predict the particle distribution around an emission source with acceptable accuracy.
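The Gaussian-plume ground-level concentration used in such studies has a closed form. A hedged sketch (the linear growth of the dispersion coefficients and the source parameters below are illustrative placeholders, not the study's calibrated stability-class curves):

```python
import math

def ground_level_conc(Q, u, x, y, H, a=0.08, b=0.06):
    """Ground-level concentration [g/m^3] downwind of a continuous
    elevated point source, via the standard Gaussian-plume solution
    with full ground reflection:
        C(x, y, 0) = Q / (pi * u * sy * sz)
                     * exp(-y^2 / (2 sy^2)) * exp(-H^2 / (2 sz^2))
    Q: emission rate [g/s]; u: wind speed [m/s]; H: effective stack
    height [m]; x, y: downwind and crosswind distances [m]. The linear
    sigma growth below is a placeholder for real stability-class
    dispersion curves."""
    sy, sz = a * x, b * x
    return (Q / (math.pi * u * sy * sz)
            * math.exp(-y ** 2 / (2 * sy ** 2))
            * math.exp(-H ** 2 / (2 * sz ** 2)))

# Hypothetical source: 10 g/s, 3 m/s wind, 20 m effective stack height.
for x in (200.0, 500.0, 1000.0):
    c_ug = ground_level_conc(10.0, 3.0, x, 0.0, 20.0) * 1e6
    print(f"x = {x:6.0f} m -> C = {c_ug:10.2f} ug/m^3")
```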
Numerical prediction of rail roughness growth on tangent railway tracks
NASA Astrophysics Data System (ADS)
Nielsen, J. C. O.
2003-10-01
Growth of railhead roughness (irregularities, waviness) is predicted through numerical simulation of dynamic train-track interaction on tangent track. The hypothesis is that wear is caused by longitudinal slip due to driven wheelsets, and that wear is proportional to the longitudinal frictional power in the contact patch. Emanating from an initial roughness spectrum corresponding to a new or a recent ground rail, an initial roughness profile is determined. Wheel-rail contact forces, creepages and wear for one wheelset passage are calculated in relation to location along a discretely supported track model. The calculated wear is scaled by a chosen number of wheelset passages, and is then added to the initial roughness profile. Field observations of rail corrugation on a Dutch track are used to validate the simulation model. Results from the simulations predict a large roughness growth rate for wavelengths around 30-40 mm. The large growth in this wavelength interval is explained by a low track receptance near the sleepers around the pinned-pinned resonance frequency, in combination with a large number of driven passenger wheelset passages at uniform speed. The agreement between simulations and field measurements is good with respect to dominating roughness wavelength and annual wear rate. Remedies for reducing roughness growth are discussed.
Numerical predictions of hemodynamics following surgeries in cerebral aneurysms
NASA Astrophysics Data System (ADS)
Rayz, Vitaliy; Lawton, Michael; Boussel, Loic; Leach, Joseph; Acevedo, Gabriel; Halbach, Van; Saloner, David
2014-11-01
Large cerebral aneurysms present a danger of rupture or brain compression. In some cases, clinicians may attempt to change the pathological hemodynamics in order to inhibit disease progression. This can be achieved by changing the vascular geometry with an open surgery or by deploying a stent-like flow-diverter device. Patient-specific CFD models can help evaluate treatment options by predicting flow regions that are likely to become occupied by thrombus (clot) following the procedure. In this study, alternative flow scenarios were modeled for several patients who underwent surgical treatment. Patient-specific geometries and flow boundary conditions were obtained from magnetic resonance angiography and velocimetry data. The Navier-Stokes equations were solved with the finite volume solver Fluent. A porous-media approach was used to model flow-diverter devices. The advection-diffusion equation was solved in order to simulate contrast agent transport, and the results were used to evaluate changes in flow residence time. Thrombus layering was predicted in regions characterized by reduced velocities and shear stresses as well as increased flow residence time. The simulations indicated surgical options that could result in occlusion of vital arteries with thrombus. Numerical results were compared to experimental and clinical MRI data. The results demonstrate that image-based CFD models may help improve the outcome of surgeries in cerebral aneurysms. The authors acknowledge support from Grant R01HL115267.
Accurate prediction of V1 location from cortical folds in a surface coordinate system
Hinds, Oliver P.; Rajendran, Niranjini; Polimeni, Jonathan R.; Augustinack, Jean C.; Wiggins, Graham; Wald, Lawrence L.; Rosas, H. Diana; Potthast, Andreas; Schwartz, Eric L.; Fischl, Bruce
2008-01-01
Previous studies demonstrated substantial variability of the location of primary visual cortex (V1) in stereotaxic coordinates when linear volume-based registration is used to match volumetric image intensities (Amunts et al., 2000). However, other qualitative reports of V1 location (Smith, 1904; Stensaas et al., 1974; Rademacher et al., 1993) suggested a consistent relationship between V1 and the surrounding cortical folds. Here, the relationship between folds and the location of V1 is quantified using surface-based analysis to generate a probabilistic atlas of human V1. High-resolution (about 200 μm) magnetic resonance imaging (MRI) at 7 T of ex vivo human cerebral hemispheres allowed identification of the full area via the stria of Gennari: a myeloarchitectonic feature specific to V1. Separate, whole-brain scans were acquired using MRI at 1.5 T to allow segmentation and mesh reconstruction of the cortical gray matter. For each individual, V1 was manually identified in the high-resolution volume and projected onto the cortical surface. Surface-based intersubject registration (Fischl et al., 1999b) was performed to align the primary cortical folds of individual hemispheres to those of a reference template representing the average folding pattern. An atlas of V1 location was constructed by computing the probability of V1 inclusion for each cortical location in the template space. This probabilistic atlas of V1 exhibits low prediction error compared to previous V1 probabilistic atlases built in volumetric coordinates. The increased predictability observed under surface-based registration suggests that the location of V1 is more accurately predicted by the cortical folds than by the shape of the brain embedded in the volume of the skull. In addition, the high quality of this atlas provides direct evidence that surface-based intersubject registration methods are superior to volume-based methods at superimposing functional areas of cortex.
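The atlas-construction step described above (the probability of V1 inclusion at each template-space location) amounts to averaging binary labels across registered hemispheres. A toy sketch with synthetic labels (the mesh size and label values are fabricated for illustration):

```python
import numpy as np

# Synthetic stand-in: binary V1 labels for 10 hemispheres, each already
# resampled onto a common 6-vertex template mesh (1 = vertex inside V1).
labels = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
])

# Probabilistic atlas: per-vertex probability of V1 inclusion.
prob_atlas = labels.mean(axis=0)
print(prob_atlas)

# A thresholded label usable for predicting V1 in a new subject
# registered to the same template.
predicted_v1 = prob_atlas >= 0.5
print(predicted_v1)
```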
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod-with-insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of the metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, although further validity testing across a range of therapeutic footwear types is required.
Bowler, Michael G.
2017-01-01
The humidity surrounding a sample is an important variable in scientific experiments. Biological samples in particular require not just a humid atmosphere but often a relative humidity (RH) that is in equilibrium with a stabilizing solution required to maintain the sample in the same state during measurements. The controlled dehydration of macromolecular crystals can lead to significant increases in crystal order, leading to higher diffraction quality. Devices that can accurately control the humidity surrounding crystals while monitoring diffraction have led to this technique being increasingly adopted, as the experiments become easier and more reproducible. Matching the RH to the mother liquor is the first step in allowing the stable mounting of a crystal. In previous work [Wheeler, Russi, Bowler & Bowler (2012). Acta Cryst. F68, 111–114], the equilibrium RHs were measured for a range of concentrations of the most commonly used precipitants in macromolecular crystallography and it was shown how these related to Raoult’s law for the equilibrium vapour pressure of water above a solution. However, a discrepancy between the measured values and those predicted by theory could not be explained. Here, a more precise humidity control device has been used to determine equilibrium RH points. The new results are in agreement with Raoult’s law. A simple argument in statistical mechanics is also presented, demonstrating that the equilibrium vapour pressure of a solvent is proportional to its mole fraction in an ideal solution: Raoult’s law. The same argument can be extended to the case where the solvent and solute molecules are of different sizes, as is the case with polymers. The results provide a framework for the correct maintenance of the RH surrounding a sample. PMID:28381983
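Raoult's law as derived in the record above makes the equilibrium RH a simple mole-fraction calculation. A minimal sketch for an ideal aqueous solution (the helper names are ours, not from the paper):

```python
def equilibrium_rh(n_water, n_solute):
    """Raoult's law: the equilibrium vapour pressure of the solvent
    above an ideal solution is proportional to its mole fraction, so
    the relative humidity in equilibrium with the solution is
        RH = 100 * n_water / (n_water + n_solute)."""
    return 100.0 * n_water / (n_water + n_solute)

def rh_of_molal_solution(molality, particles_per_unit=1):
    """RH above an aqueous solution of given molality [mol solute per
    kg water]; particles_per_unit counts dissolved particles per
    formula unit (e.g. 2 for NaCl if treated as fully dissociated)."""
    n_water = 1000.0 / 18.015   # moles of water per kg
    return equilibrium_rh(n_water, particles_per_unit * molality)

# A 1 molal ideal (non-dissociating) solute:
print(f"{rh_of_molal_solution(1.0):.1f}% RH")
```

For real precipitant solutions, deviations from ideality (and the polymer case mentioned in the abstract, where solvent and solute molecules differ in size) would modify this simple ratio.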
Numerical modelling methods for predicting antenna performance on aircraft
NASA Astrophysics Data System (ADS)
Kubina, S. J.
1983-09-01
Typical case studies that involve the application of Moment Methods to the prediction of the radiation characteristics of antennas in the HF frequency band are examined. The examples consist of the analysis of a shorted transmission line HF antenna on a CHSS-2/Sea King helicopter, wire antennas on the CP-140/Aurora patrol aircraft and a long dipole antenna on the Space Shuttle Orbiter spacecraft. In each of these cases the guidelines for antenna modeling by the use of the program called the Numerical Electromagnetic Code are progressively applied and results are compared to measurements made by the use of scale-model techniques. In complex examples of this type comparisons based on individual radiation patterns are insufficient for the validation of computer models. A volumetric method of radiation pattern comparison is used based on criteria that result from pattern integration and that are related to communication system performance. This is supplemented by hidden-surface displays of an entire set of conical radiation patterns resulting from measurements and computations. Antenna coupling considerations are discussed for the case of the dual HF installation on the CP-140/Aurora aircraft.
Garcia Lopez, Sebastian; Kim, Philip M.
2014-01-01
Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
DPIV prediction of flow induced platelet activation-comparison to numerical predictions.
Raz, Sagi; Einav, Shmuel; Alemu, Yared; Bluestein, Danny
2007-04-01
Flow-induced platelet activation (PA) can lead to platelet aggregation, deposition onto the blood vessel wall, and thrombus formation. PA has been thoroughly studied under unidirectional flow conditions. However, in regions of complex flow, where the platelet is exposed to varying levels of shear stress for varying durations, the relationship between flow and PA is not well understood. Numerical models have been developed for studying flow-induced PA resulting from stress histories along Lagrangian trajectories in the flow field; however, experimental validation techniques such as Digital Particle Image Velocimetry (DPIV) have not been extended to include such models. In this study, a general experimental tool for PA analysis by means of continuous DPIV was utilized and compared to numerical simulation in a model of coronary stenosis. Scaled-up (5:1) models of an 84% eccentric and an axisymmetric coronary stenosis were used for analysis of shear stress and exposure time along particle trajectories. Flow-induced PA was measured using the Platelet Activity State (PAS) assay. An algorithm for computing the PA level along pertinent trajectories was developed as a tool for extracting information from DPIV measurements to predict the flow-induced thrombogenic potential. CFD, DPIV and PAS assay results agreed well in predicting the level of PA. In addition, the same trend predicted by the DPIV was measured in vitro using the PAS assay, namely that the symmetric stenosis activated the platelets more than the eccentric stenosis.
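The trajectory-based activation algorithm is, at its simplest, an accumulation of shear stress over exposure time along each Lagrangian path. A sketch assuming the simplest linear stress-time damage model (the paper's actual PA model may be more elaborate; the trajectory values are hypothetical):

```python
def activation_along_trajectory(stress_history):
    """Cumulative stress accumulation along one Lagrangian trajectory,
    using the simplest linear damage model: the time integral of the
    scalar shear stress, approximated as sum(tau_i * dt_i).
    stress_history: iterable of (shear stress [Pa], exposure time [s])
    pairs sampled along the trajectory."""
    return sum(tau * dt for tau, dt in stress_history)

# Hypothetical trajectory through a stenosis: brief high-shear spike
# near the throat, lower shear upstream and in the recirculation zone.
traj = [(1.0, 0.01), (15.0, 0.002), (40.0, 0.001), (2.0, 0.02)]
print(f"stress accumulation = {activation_along_trajectory(traj):.4f} Pa*s")
```

Ranking trajectories by this accumulated dose is the kind of comparison that lets DPIV-derived and CFD-derived stress histories be checked against PAS-assay activation measurements.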
NASA Astrophysics Data System (ADS)
Wagenbrenner, Natalie S.; Forthofer, Jason M.; Lamb, Brian K.; Shannon, Kyle S.; Butler, Bret W.
2016-04-01
Wind predictions in complex terrain are important for a number of applications. Dynamic downscaling of numerical weather prediction (NWP) model winds with a high-resolution wind model is one way to obtain a wind forecast that accounts for local terrain effects, such as wind speed-up over ridges, flow channeling in valleys, flow separation around terrain obstacles, and flows induced by local surface heating and cooling. In this paper we investigate the ability of a mass-consistent wind model for downscaling near-surface wind predictions from four NWP models in complex terrain. Model predictions are compared with surface observations from a tall, isolated mountain. Downscaling improved near-surface wind forecasts under high-wind (near-neutral atmospheric stability) conditions. Results were mixed during upslope and downslope (non-neutral atmospheric stability) flow periods, although wind direction predictions generally improved with downscaling. This work constitutes evaluation of a diagnostic wind model at unprecedented high spatial resolution in terrain with topographical ruggedness approaching that of typical landscapes in the western US susceptible to wildland fire.
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Johnson, Wayne; vanDam, C. P.; Chao, David D.; Cortes, Regina; Yee, Karen
1999-01-01
Accurate, reliable and robust numerical predictions of wind turbine rotor power remain a challenge to the wind energy industry. The literature reports various methods that compare predictions to experiments. The methods vary from Blade Element Momentum Theory (BEM) and Vortex Lattice (VL) to variants of Reynolds-averaged Navier-Stokes (RaNS). The BEM and VL methods consistently show discrepancies in predicting rotor power at higher wind speeds, mainly due to inadequacies in inboard stall and stall-delay models. The RaNS methodologies show promise in predicting blade stall; however, inaccurate rotor vortex wake convection, boundary layer turbulence modeling and grid resolution have limited their accuracy. In addition, the inherently unsteady stalled flow conditions become computationally expensive for even the best endowed research labs. Although numerical power predictions have been compared to experiment, the availability of good wind turbine data sufficient for code validation remains limited. This paper presents experimental data that has been extracted from the IEA Annex XIV download site for the NREL Combined Experiment phase II and phase IV rotor. In addition, the comparisons will show data that has been further reduced into steady-wind and zero-yaw conditions suitable for comparisons to "steady wind" rotor power predictions. In summary, the paper will present and discuss the capabilities and limitations of the three numerical methods and make available a database of experimental data suitable to help other numerical-methods practitioners validate their own work.
Numerical Weather Predictions Evaluation Using Spatial Verification Methods
NASA Astrophysics Data System (ADS)
Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.
2014-12-01
During the last years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To address those limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - the Thessaly region (d03), are used at horizontal grid-spacings of 15 km, 5 km and 1 km respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
[Application research of data assimilation in air pollution numerical prediction].
Bai, Xiao-ping; Li, Hong; Fang, Dong; Costabile, Francesca; Liu, Feng-lei
2008-02-01
Based on an air pollution modeling system coupling the non-hydrostatic fifth-generation mesoscale meteorological model (MM5) and the regional modeling system for aerosols and deposition (REMSAD), the forecast results for NOx and SO2 in August and September 2002 in Nanjing were assimilated with the optimal interpolation method and the ensemble Kalman filter. The results show that the improvement rates of the mean deviation of NOx and SO2 after assimilation with the optimal interpolation method are 34.20% and 47.53%, and the improvement rates of the root mean square errors are 31.95% and 42.04%, respectively. The improvement rates of the mean deviation of NOx and SO2 after assimilation with the ensemble Kalman filter with 30 ensemble members are 26.73% and 60.75%, and the improvement rates of the root mean square errors are 25.20% and 55.16%, respectively. Thus, both the optimal interpolation method and the ensemble Kalman filter can improve the quality of the initial state of the air pollution numerical prediction model. Comparative experiments on the assimilation performance of the optimal interpolation method and the ensemble Kalman filter with 61 ensemble members were also performed; these demonstrate that the assimilation performance of the ensemble Kalman filter with 61 ensemble members was improved compared with 30 members, and that as the number of ensemble members increases, the improvement to the initial state of NOx and SO2 with the ensemble Kalman filter becomes better than that of the optimal interpolation method.
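The ensemble Kalman filter analysis step used in such assimilation experiments can be sketched compactly. A minimal implementation for a linear observation operator (it omits localization, covariance inflation, and the perturbed observations that operational stochastic EnKF systems require; all data below are synthetic):

```python
import numpy as np

def enkf_update(X, y, R, H):
    """One ensemble Kalman filter analysis step.
    X: (n_members, n_state) forecast ensemble
    y: (n_obs,) observations; R: (n_obs, n_obs) obs-error covariance
    H: (n_obs, n_state) linear observation operator.
    Minimal sketch: no localization, inflation, or perturbed obs."""
    n = X.shape[0]
    A = X - X.mean(axis=0)            # state anomalies
    HX = X @ H.T
    HA = HX - HX.mean(axis=0)         # observation-space anomalies
    PHt = A.T @ HA / (n - 1)          # P H^T estimated from the ensemble
    S = HA.T @ HA / (n - 1) + R       # innovation covariance H P H^T + R
    K = np.linalg.solve(S, PHt.T).T   # Kalman gain
    return X + (y - HX) @ K.T         # shift members toward the obs

rng = np.random.default_rng(0)
X = rng.normal(10.0, 2.0, size=(30, 2))   # 30-member, 2-variable state
H = np.array([[1.0, 0.0]])                # observe the first variable only
Xa = enkf_update(X, np.array([12.0]), np.array([[0.25]]), H)
print("prior mean:", X.mean(axis=0))
print("posterior mean:", Xa.mean(axis=0))
```

The unobserved second variable is also adjusted through the ensemble-estimated cross-covariance, which is what distinguishes the EnKF from a simple pointwise nudge and, with enough members, from static-covariance optimal interpolation.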
NASA Astrophysics Data System (ADS)
Moczo, P.; Kristek, J.; Galis, M.; Pazak, P.
2009-12-01
Numerical prediction of earthquake ground motion in sedimentary basins and valleys often has to account for P-wave to S-wave speed ratios (Vp/Vs) as large as 5 and even larger, mainly in sediments below the groundwater level. The ratio can attain values larger than 10 in unconsolidated sediments (e.g. in Ciudad de México). In the process of developing 3D optimally-accurate finite-difference schemes we encountered a serious problem with accuracy in media with a large Vp/Vs ratio. This led us to investigate the fundamental reasons for the inaccuracy. In order to identify the basic inherent aspects of the numerical schemes responsible for their behaviour with varying Vp/Vs ratio, we restricted ourselves to the most basic 2nd-order 2D numerical schemes on a uniform grid in a homogeneous medium. Although basic in the specified sense, the schemes comprise the decisive features for the accuracy of a wide class of numerical schemes. We investigated six numerical schemes: finite-difference displacement on a conventional grid (FD_D_CG); finite-element with Lobatto integration (FE_L); finite-element with Gauss integration (FE_G); finite-difference displacement-stress on a partly-staggered grid (FD_DS_PSG); finite-difference displacement-stress on a staggered grid (FD_DS_SG); and finite-difference velocity-stress on a staggered grid (FD_VS_SG). We defined and calculated local errors of the schemes in amplitude and polarization. Because different schemes use different time steps, they need different numbers of time levels to calculate the solution for a desired time window. Therefore, we normalized the errors for a unit time. The normalization allowed for a direct comparison of the errors of different schemes. Extensive numerical calculations for wide ranges of values of the Vp/Vs ratio, spatial sampling ratio, stability ratio, and the entire range of directions of propagation with respect to the spatial grid led to interesting and surprising findings. The accuracy of FD_D_CG, FE_L and FE_G strongly depends on the Vp/Vs ratio.
On the assimilation of satellite sounder data in cloudy skies in numerical weather prediction models
NASA Astrophysics Data System (ADS)
Li, Jun; Wang, Pei; Han, Hyojin; Li, Jinlong; Zheng, Jing
2016-04-01
Satellite measurements are an important source of global observations in support of numerical weather prediction (NWP). The assimilation of satellite radiances under clear skies has greatly improved NWP forecast scores. However, the application of radiances in cloudy skies remains a significant challenge. In order to better assimilate radiances in cloudy skies, it is very important to detect clear fields-of-view (FOVs) accurately and to assimilate cloudy radiances appropriately. Research progress on both clear-FOV detection methodologies and cloudy radiance assimilation techniques is reviewed in this paper. An overview of approaches implemented at operational centers and studied by the satellite data assimilation research community is presented. Challenges and future directions for satellite sounder radiance assimilation in cloudy skies in NWP models are also discussed.
Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations
2009-01-01
Zhang, Jie; Draxl, Caroline; Hopson, Thomas; Monache, Luca Delle; Vanvyve, Emilie; Hodge, Bri-Mathias
2015-10-01
Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper compares three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and the WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the computational cost of the analog ensemble is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on combining the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.
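The analog-ensemble idea mentioned above can be sketched in a few lines: the historical forecasts most similar to the current forecast are found, and their verifying observations form the ensemble. This is a toy single-variable sketch; the operational method uses a weighted, multi-variable metric over a time window, and all numbers below are invented:

```python
import numpy as np

def analog_ensemble(past_forecasts, past_observations, current_forecast, k=3):
    # Find the k past forecasts closest to the current forecast and return
    # their verifying observations as the ensemble (plus the mean as a
    # deterministic estimate).
    dists = np.linalg.norm(past_forecasts - current_forecast, axis=1)
    analog_idx = np.argsort(dists)[:k]
    ensemble = past_observations[analog_idx]
    return ensemble.mean(), ensemble

past_fc = np.array([[5.0], [7.0], [9.0], [6.0], [12.0]])  # past wind speeds (m/s)
past_ob = np.array([5.5, 6.8, 9.4, 6.1, 11.0])            # verifying observations
det, members = analog_ensemble(past_fc, past_ob, np.array([6.5]), k=3)
```

Because the ensemble members are past observations rather than model output, the spread also carries calibrated probabilistic information at essentially no computational cost.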
Draxl, C.; Churchfield, M.; Mirocha, J.; Lee, S.; Lundquist, J.; Michalakes, J.; Moriarty, P.; Purkayastha, A.; Sprague, M.; Vanderwende, B.
2014-06-01
Wind plant aerodynamics are influenced by a combination of microscale and mesoscale phenomena. Incorporating mesoscale atmospheric forcing (e.g., diurnal cycles and frontal passages) into wind plant simulations can lead to a more accurate representation of microscale flows, aerodynamics, and wind turbine/plant performance. Our goal is to couple a numerical weather prediction model that can represent mesoscale flow (specifically, the Weather Research and Forecasting model) with a microscale large-eddy simulation (LES) model (OpenFOAM) that can predict microscale turbulence and wake losses.
Grassi, Lorenzo; Väänänen, Sami P; Ristinmaa, Matti; Jurvelin, Jukka S; Isaksson, Hanna
2016-03-21
Subject-specific finite element models have been proposed as a tool to improve fracture risk assessment in individuals. A thorough laboratory validation against experimental data is required before introducing such models into clinical practice. Digital image correlation can provide the full-field strain distribution over the specimen surface during an in vitro test, instead of measurements at a few pre-defined locations as with strain gauges. The aim of this study was to validate finite element models of human femora against experimental data from three cadaver femora, both in terms of femoral strength and of the full-field strain distribution collected with digital image correlation. The results showed strong agreement between predicted and measured principal strains (R² = 0.93, RMSE = 10%, 1600 validated data points per specimen). Femoral strength was predicted using a rate-dependent material model with specific strain limit values for yield and failure. This provided an accurate prediction (<2% error) for two out of three specimens. In the third specimen, an accidental change in the boundary conditions occurred during the experiment, which compromised the femoral strength validation. The achieved strain accuracy was comparable to that obtained in state-of-the-art studies which validated their prediction accuracy against 10-16 strain gauge measurements. Fracture force was accurately predicted, with the predicted failure location very close to the experimental fracture rim. Despite the low sample size and the single loading condition tested, the present combined numerical-experimental method showed that finite element models can predict femoral strength while providing a thorough description of the local bone mechanical response.
A machine learning approach to the accurate prediction of multi-leaf collimator positional errors
NASA Astrophysics Data System (ADS)
Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon
2016-03-01
Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as the target response for training the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the treatment planning system (TPS), increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess the impact on the patient, dose-volume histograms (DVHs) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose-volume parameters were in closer agreement with the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be
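The regression task described above can be sketched with synthetic data: plan-derived features (position, velocity, direction toward the isocenter) predict the planned-minus-delivered discrepancy, which is then subtracted from the planned positions. The linear model, feature set, and all numbers are assumptions for illustration; the paper's actual features and model differ:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # [position, velocity, direction]
true_w = np.array([0.02, 0.15, -0.05])             # assumed linear relation
y = X @ true_w + rng.normal(scale=0.01, size=200)  # observed discrepancies (mm)

# Fit by least squares, then use the fitted model to correct planned positions.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted_error = X @ w
residual = y - predicted_error    # what remains after applying the correction
```

The residual spread is much smaller than the raw discrepancy spread, which is the sense in which predicted positions are "closer to delivered positions than were planned positions".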
Margot Gerritsen
2008-10-31
Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for the successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline-based solver for gas-injection processes that is computationally very attractive: compared to the traditional Eulerian solvers in use by industry, it computes solutions orders of magnitude faster with comparable accuracy, provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework, allowing, among other things, improved streamline coverage and partial streamline tracing; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We introduced several novel ideas into the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational effort where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to the pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids
Wang, Jia-Nan; Jin, Jun-Ling; Geng, Yun; Sun, Shi-Ling; Xu, Hong-Liang; Lu, Ying-Hua; Su, Zhong-Min
2013-03-15
Recently, the extreme learning machine neural network (ELMNN) has been proposed as a valid computing method to predict nonlinear optical properties successfully (Wang et al., J. Comput. Chem. 2012, 33, 231). In this work, we first follow this line of work to predict electronic excitation energies using the ELMNN method. Significantly, the root mean square deviation between the predicted and experimental electronic excitation energies of 90 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene (BODIPY) derivatives has been reduced to 0.13 eV. Second, four groups of molecular descriptors are considered when building the computing models. The results show that the quantum chemical descriptors have the closest intrinsic relation with the electronic excitation energy values. Finally, a user-friendly web server (EEEBPre: Prediction of electronic excitation energies for BODIPY dyes), freely accessible to the public at http://202.198.129.218, has been built for prediction. This web server returns predicted electronic excitation energy values of BODIPY dyes that are highly consistent with the experimental values. We hope that this web server will be helpful to theoretical and experimental chemists in related research.
ERIC Educational Resources Information Center
Hall, Samuel R.; Stephens, Jonny R.; Seaby, Eleanor G.; Andrade, Matheus Gesteira; Lowry, Andrew F.; Parton, Will J. C.; Smith, Claire F.; Border, Scott
2016-01-01
It is important that clinicians are able to adequately assess their level of knowledge and competence in order to be safe practitioners of medicine. The medical literature contains numerous examples of poor self-assessment accuracy amongst medical students over a range of subjects; however, this ability has yet to be observed in neuroanatomy. Second…
Sensor data fusion for accurate cloud presence prediction using Dempster-Shafer evidence theory.
Li, Jiaming; Luo, Suhuai; Jin, Jesse S
2010-01-01
Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach for a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. Potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.
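Dempster's rule of combination, the core of the evidence-theory fusion above, can be sketched for a two-hypothesis frame {cloud, clear}. The sensor mass values below are invented for illustration; only the combination rule itself is standard:

```python
def combine_ds(m1, m2):
    # Dempster's rule over the frame {'cloud', 'clear'}. Masses are dicts
    # over the focal sets 'cloud', 'clear' and 'either' (ignorance).
    sets = {'cloud': {'cloud'}, 'clear': {'clear'}, 'either': {'cloud', 'clear'}}
    combined = {'cloud': 0.0, 'clear': 0.0, 'either': 0.0}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = sets[a] & sets[b]
            if not inter:
                conflict += wa * wb  # contradictory evidence (empty intersection)
            else:
                key = 'either' if inter == {'cloud', 'clear'} else inter.pop()
                combined[key] += wa * wb
    # Renormalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

s1 = {'cloud': 0.6, 'clear': 0.1, 'either': 0.3}  # radiation sensor 1 (made up)
s2 = {'cloud': 0.5, 'clear': 0.2, 'either': 0.3}  # radiation sensor 2 (made up)
fused = combine_ds(s1, s2)
```

The fused belief in "cloud" exceeds either sensor's individual mass, while the "either" (unknown) mass shrinks, mirroring the reported gains in correct rate and reduction in unknown rate.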
Forecasting irrigation demand by assimilating satellite images and numerical weather predictions
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Medina, Hanoi; Villani, Paolo; Falanga Bolognesi, Salvatore; D'Urso, Guido; Battista Chirico, Giovanni
2016-04-01
Forecasting irrigation water demand, with small predictive uncertainty in the short-medium term, is fundamental for an efficient planning of water resource allocation among multiple users and for decreasing water and energy consumption. In this study we present an innovative system for forecasting irrigation water demand, applicable at different spatial scales: from the farm level to the irrigation district level. The forecast system is centred on a crop growth model assimilating data from satellite images and numerical weather forecasts, according to a stochastic ensemble-based approach. The different sources of uncertainty affecting model predictions are represented by an ensemble of model trajectories, each generated by a possible realization of the model components (model parameters, input weather data and model state variables). The crop growth model is based on a set of simplified analytical relations, with the aim of assessing biomass, leaf area index (LAI) growth and evapotranspiration rate with a daily time step. Within the crop growth model, LAI dynamics are governed by temperature and leaf dry matter supply, according to the development stage of the crop. The model assimilates LAI data retrieved from VIS-NIR high-resolution multispectral satellite images. Numerical weather model outputs are those from the European limited-area ensemble prediction system (COSMO-LEPS), which provides forecasts up to five days ahead with a spatial resolution of seven kilometres. Weather forecasts are sequentially bias-corrected against data from ground weather stations. The forecasting system is evaluated in experimental areas of southern Italy during three irrigation seasons. The performance analysis shows very accurate irrigation water demand forecasts, which make the proposed system a valuable support for water planning and saving at the farm level as well as for water management at larger spatial scales.
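The sequential bias correction mentioned above can be illustrated with a minimal sketch. The exponential-smoothing form, the value of alpha, and all numbers are assumptions for illustration, not the scheme actually used in the study:

```python
def sequential_bias_correction(forecasts, observations, alpha=0.2):
    # Maintain a running estimate of the forecast bias from past
    # forecast/observation pairs; correct each new forecast with the
    # current estimate, then update the estimate.
    bias, corrected = 0.0, []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)                   # correct with current estimate
        bias = (1 - alpha) * bias + alpha * (f - o)  # update from the new pair
    return corrected

raw = [21.0, 22.5, 20.0, 23.0]   # forecast 2-m temperature (deg C), made up
obs = [20.0, 21.0, 19.0, 22.0]   # station measurements, made up
corr = sequential_bias_correction(raw, obs)
```

Because the correction is applied before the matching observation arrives, the scheme is usable in true forecast mode: each day's forecast is adjusted using only past station data.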
Session on techniques and resources for storm-scale numerical weather prediction
NASA Technical Reports Server (NTRS)
Droegemeier, Kelvin
1993-01-01
The session on techniques and resources for storm-scale numerical weather prediction is reviewed. The recommendations of this group are broken down into three areas: modeling and prediction, data requirements in support of modeling and prediction, and data management. The current status, modeling and technological recommendations, data requirements in support of modeling and prediction, and data management are addressed.
Numerical Weather Prediction Over Caucasus Region With Nested Grid Models
NASA Astrophysics Data System (ADS)
Davitashvili, Dr.; Kutaladze, Dr.; Kvatadze, Dr.
2010-09-01
territory of Georgia. Both use the default 31 vertical levels. We have studied the effect of thermal and advective-dynamic factors of the atmosphere on the changes of the West Georgian climate. We have shown that non-proportional warming of the Black Sea and the Colkhi lowland provokes an intensive strengthening of circulation. Some results of calculations of the interaction of airflow with the complex orography of the Caucasus, with horizontal grid-point resolutions of 15 and 5 km, are presented. In addition, to study the behavior of the nested-grid method over complex terrain, we have developed a short-term regional numerical prediction model for the Caucasus region in a sigma coordinate system. The results of computations carried out with one-directional, two-directional and new combined methods are given.
A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion
NASA Astrophysics Data System (ADS)
Shavalikul, Akamol
In this study, the flow field in the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was simulated. The study examined four sets of simulations. The first two sets are for an individual NGV and for an individual rotor. The last two sets use a multiple-reference-frames approach for a complete turbine stage with two different interface models: a steady circumferential-average approach called a mixing plane model, and a time-accurate flow simulation approach called a sliding mesh model. The NGV passage flow field was simulated using a three-dimensional Reynolds-averaged Navier-Stokes (RANS) finite volume solver with a standard k-epsilon turbulence model. The mean flow distributions on the NGV surfaces and endwall surfaces were computed. The numerical solutions indicate that two passage vortices begin to be observed approximately at the mid axial chord of the NGV suction surface. The first vortex is a casing passage vortex which occurs at the corner formed by the NGV suction surface and the casing. This vortex is created by the interaction of the passage flow and the radially inward flow, while the second vortex, the hub passage vortex, is observed near the hub. These two vortices become stronger towards the NGV trailing edge. By comparing the results from the X/Cx = 1.025 plane and the X/Cx = 1.09 plane, it can be concluded that the NGV wake decays rapidly within a short axial distance downstream of the NGV. For the rotor, a set of simulations was carried out to examine the flow fields associated with different pressure-side tip extension configurations, which are designed to reduce the tip leakage flow. The simulation results show that significant reductions in tip leakage mass flow rate and aerodynamic loss are possible by using suitable tip platform extensions located near the pressure-side corner of the blade tip. The computations used realistic turbine rotor inlet flow conditions in a linear cascade arrangement
Wallace, Jason A; Wang, Yuhang; Shi, Chuanyin; Pastoor, Kevin J; Nguyen, Bao-Linh; Xia, Kai; Shen, Jana K
2011-12-01
Proton uptake or release controls many important biological processes, such as energy transduction, virus replication, and catalysis. Accurate pK(a) prediction informs about proton pathways, thereby revealing detailed acid-base mechanisms. Physics-based methods in the framework of molecular dynamics simulations not only offer pK(a) predictions but also inform about the physical origins of pK(a) shifts and provide details of ionization-induced conformational relaxation and large-scale transitions. One such method is the recently developed continuous constant pH molecular dynamics (CPHMD) method, which has been shown to be an accurate and robust pK(a) prediction tool for naturally occurring titratable residues. To further examine the accuracy and limitations of CPHMD, we blindly predicted the pK(a) values for 87 titratable residues introduced in various hydrophobic regions of staphylococcal nuclease and its variants. The predictions gave a root-mean-square deviation of 1.69 pK units from experiment, and there were only two pK(a) values with errors greater than 3.5 pK units. Analysis of the conformational fluctuations of titrating side-chains in the context of the errors of the calculated pK(a) values indicates that explicit treatment of conformational flexibility and the associated dielectric relaxation gives CPHMD a distinct advantage. Analysis of the sources of error suggests that more accurate pK(a) predictions can be obtained for the most deeply buried residues by improving the accuracy of the calculated desolvation energies. Furthermore, it is found that the generalized Born implicit-solvent model underlying the current CPHMD implementation slightly distorts the local conformational environment, such that the inclusion of an explicit-solvent representation may offer improved accuracy.
NESmapper: accurate prediction of leucine-rich nuclear export signals using activity-based profiles.
Kosugi, Shunichi; Yanagawa, Hiroshi; Terauchi, Ryohei; Tabata, Satoshi
2014-09-01
The nuclear export of proteins is regulated largely through the exportin/CRM1 pathway, which involves the specific recognition of leucine-rich nuclear export signals (NESs) in the cargo proteins, and modulates nuclear-cytoplasmic protein shuttling by antagonizing the nuclear import activity mediated by importins and the nuclear import signal (NLS). Although the prediction of NESs can help to define proteins that undergo regulated nuclear export, current methods of predicting NESs, including computational tools and consensus-sequence-based searches, have limited accuracy, especially in terms of their specificity. We found that each residue within an NES largely contributes independently and additively to the entire nuclear export activity. We created activity-based profiles of all classes of NESs with a comprehensive mutational analysis in mammalian cells. The profiles highlight a number of specific activity-affecting residues not only at the conserved hydrophobic positions but also in the linker and flanking regions. We then developed a computational tool, NESmapper, to predict NESs by using profiles that had been further optimized by training and combining the amino acid properties of the NES-flanking regions. This tool successfully reduced the considerable number of false positives, and the overall prediction accuracy was higher than that of other methods, including NESsential and Wregex. This profile-based prediction strategy is a reliable way to identify functional protein motifs. NESmapper is available at http://sourceforge.net/projects/nesmapper.
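The finding above that each NES residue contributes independently and additively lends itself to a simple profile-scoring scheme: a candidate sequence is scored by summing per-position residue contributions. The profile values and penalty below are invented for illustration and are not NESmapper's actual parameters:

```python
# One dict of residue contributions per NES position (toy values).
profile = [
    {'L': 2.0, 'I': 1.5, 'V': 1.0},   # hydrophobic anchor position
    {'A': 0.5, 'S': 0.2},             # linker position
    {'L': 2.5, 'M': 1.8},             # hydrophobic anchor position
]

def score_nes(candidate, profile, default=-1.0):
    # Residues absent from a position's profile receive a penalty, so
    # activity-reducing substitutions lower the total score additively.
    return sum(pos.get(res, default) for res, pos in zip(candidate, profile))

s = score_nes('LAL', profile)
```

A candidate exceeding some score threshold would be called an NES; tuning the threshold trades sensitivity against the false-positive rate the abstract highlights.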
Kim, Minseung; Rai, Navneet; Zorraquino, Violeta; Tagkopoulos, Ilias
2016-01-01
A significant obstacle in training predictive cell models is the lack of integrated data sources. We develop semi-supervised normalization pipelines and perform experimental characterization (growth, transcriptional, proteomic) to create Ecomics, a consistent, quality-controlled multi-omics compendium for Escherichia coli with cohesive meta-data information. We then use this resource to train a multi-scale model that integrates four omics layers to predict genome-wide concentrations and growth dynamics. The genetic and environmental ontology reconstructed from the omics data is substantially different from and complementary to the existing genetic and chemical ontologies. The integration of different layers confers an incremental increase in prediction performance, as does information about known gene regulatory and protein-protein interactions. The predictive performance of the model ranges from 0.54 to 0.87 for the various omics layers, which far exceeds various baselines. This work provides an integrative framework of omics-driven predictive modelling that is broadly applicable to guide biological discovery. PMID:27713404
NASA Technical Reports Server (NTRS)
Schonberg, William P.; Peck, Jeffrey A.
1992-01-01
Over the last three decades, multiwall structures have been analyzed extensively, primarily through experiment, as a means of increasing the protection afforded to spacecraft structures. However, as structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative, numerical modeling of high-speed impact phenomena is often used to predict the response of a variety of structural systems under impact loading conditions. This paper presents the results of a preliminary numerical/experimental investigation of the hypervelocity impact response of multiwall structures. The results of experimental high-speed impact tests are compared against the predictions of the HULL hydrodynamic computer code. It is shown that the hypervelocity impact response characteristics of a specific system cannot be accurately predicted from a limited number of HULL code impact simulations. However, if a wide range of impact loading conditions is considered, then the ballistic limit curve of the system based on the entire series of numerical simulations can be used as a relatively accurate indication of actual system response.
NASA Astrophysics Data System (ADS)
Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus
2016-04-01
The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?
Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.
2016-01-01
The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81) we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing intervention. PMID:26797113
Victora, Andrea; Möller, Heiko M.; Exner, Thomas E.
2014-01-01
NMR chemical shift predictions based on empirical methods are nowadays indispensable tools during resonance assignment and 3D structure calculation of proteins. However, owing to the very limited statistical data basis, such methods are still in their infancy in the field of nucleic acids, especially when non-canonical structures and nucleic acid complexes are considered. Here, we present an ab initio approach for predicting proton chemical shifts of arbitrary nucleic acid structures based on state-of-the-art fragment-based quantum chemical calculations. We tested our prediction method on a diverse set of nucleic acid structures including double-stranded DNA, hairpins, DNA/protein complexes and chemically-modified DNA. Overall, our quantum chemical calculations yield highly accurate predictions, with mean absolute deviations of 0.3–0.6 ppm and correlation coefficients (r²) usually above 0.9. This will allow for identifying misassignments and validating 3D structures. Furthermore, our calculations reveal that chemical shifts of protons involved in hydrogen bonding are predicted significantly less accurately. This is in part caused by insufficient inclusion of solvation effects. However, it also points toward shortcomings of the current force fields used for structure determination of nucleic acids. Our quantum chemical calculations could therefore provide input for force field optimization. PMID:25404135
Dynamics of Flexible MLI-type Debris for Accurate Orbit Prediction
2014-09-01
Predicting repeat self-harm in children--how accurate can we expect to be?
Chitsabesan, Prathiba; Harrington, Richard; Harrington, Valerie; Tomenson, Barbara
2003-01-01
The main objective of the study was to find which variables predict repetition of deliberate self-harm in children. The study is based on a group of children who took part in a randomized controlled trial investigating the effects of a home-based family intervention for children who had deliberately poisoned themselves. These children had a range of baseline and outcome measures collected on two occasions (two- and six-month follow-up). Outcome data were collected from 149 (92%) of the initial 162 children over the six months. Twenty-three children made a further deliberate self-harm attempt within the follow-up period. A number of variables at baseline were found to be significantly associated with repeat self-harm. Parental mental health and a history of previous attempts were the strongest predictors. A model predicting further deliberate self-harm that combined these significant individual variables produced a high positive predictive value (86%) but had low sensitivity (28%). Predicting repeat self-harm in children is difficult, even with a comprehensive series of assessments over multiple time points, and we need to adapt services with this in mind. We propose a model of service provision which takes these findings into account.
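The trade-off reported above (high PPV, low sensitivity) follows directly from the confusion-matrix definitions; the counts below are hypothetical values chosen only to roughly reproduce the reported pattern:

```python
def ppv_and_sensitivity(tp, fp, fn):
    # PPV = TP / (TP + FP): of those flagged as repeaters, how many repeated.
    # Sensitivity = TP / (TP + FN): of the actual repeaters, how many were flagged.
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts: few false positives (high PPV) but many missed
# repeaters (low sensitivity), consistent with 23 repeaters in total.
ppv, sens = ppv_and_sensitivity(tp=6, fp=1, fn=17)
```

A model can therefore be trustworthy when it does flag a child yet still miss most children who go on to repeat, which is why the authors argue for adapting services rather than relying on prediction alone.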
Hashmi, Muhammad Ali; Andreassend, Sarah K; Keyzers, Robert A; Lein, Matthias
2016-09-21
Despite advances in electronic structure theory, the theoretical prediction of spectroscopic properties remains a computational challenge. This is especially true for natural products that exhibit very large conformational freedom and hence need to be sampled over many different accessible conformations. We report a strategy that is able to predict NMR chemical shifts, and more elusive properties like the optical rotation, with great precision through step-wise incremental increases of the conformational degrees of freedom. The application of this method is demonstrated for 3-epi-xestoaminol C, a chiral natural compound with a long, linear alkyl chain of 14 carbon atoms. Experimental NMR and [α]D values are reported to validate the results of the density functional theory calculations.
Numerical Prediction of Grid Erosion of Ion Engine
NASA Astrophysics Data System (ADS)
Miyasaka, Takeshi; Kobayashi, Tsutomu; Asato, Katsuo
As long-duration space missions increase, evaluating the lifetime of ion engines by numerical analysis becomes important. In order to develop a numerical code for the evaluation of ion engine lifetime, development of the JIEDI (JAXA Ion Engine Development Initiative) tool has been started. To evaluate the validity of boundary conditions, such as the upstream discharge region and downstream region conditions, a 3-dimensional full-particle code was developed. In the present study, the effects of the electron mass model introduced to shorten the calculation time were investigated. We found differences in the charged-particle distributions and electric potential profiles in the downstream region among different electron masses. Consequently, effects of the electron mass on the energy peak of the ions impacting the grid, and on the erosion distribution over the downstream surface of the accel grid, were observed.
FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.
El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant
2016-01-01
A wide range of biological processes, including regulation of gene expression, protein synthesis, and the replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from the more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles, in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles. Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated from 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only online protein-RNA interface residue prediction server that requires generation of PSSM profiles for query sequences yet accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein
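The uniform-sampling step can be sketched directly; the identifiers below are made up, and a streaming method such as reservoir sampling would be needed in practice to avoid loading all of UniRef100 into memory:

```python
import random

def sample_fraction(items, fraction, seed=0):
    """Uniformly sample a fraction of the items, without replacement."""
    rng = random.Random(seed)
    k = max(1, int(len(items) * fraction))
    return rng.sample(items, k)

# Hypothetical sequence IDs standing in for UniRef100 entries.
all_ids = [f"UniRef100_{i:07d}" for i in range(100_000)]

# A ~1% sample, as used for FastRNABindR's reference database.
reference_db = sample_fraction(all_ids, 0.01)
print(len(reference_db))  # 1000
```

The sampled identifiers would then be used to build the BLAST database from which PSSM profiles are generated.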
Ding, Lei; Van Renterghem, Timothy; Botteldooren, Dick; Horoshenkov, Kirill; Khan, Amir
2013-12-01
The influence of loose plant leaves on the acoustic absorption of a porous substrate is studied experimentally and numerically. Such systems are typical of vegetative walls, where the substrate has strong acoustical absorbing properties. Both experiments in an impedance tube and theoretical predictions show that when a leaf is placed in front of such a porous substrate, its absorption characteristics change markedly (for normal-incidence sound). Typically, the low-frequency absorption coefficient (below 250 Hz) is unaffected, the middle-frequency absorption coefficient (500-2000 Hz) increases, and the absorption at higher frequencies decreases. The influence of leaves becomes most pronounced when the substrate has a low mass density. A combination of Biot's elastic-frame porous model, viscous damping in the leaf boundary layers, and plate vibration theory, implemented via a finite-difference time-domain model, is able to predict accurately the absorption spectrum of a leaf above a porous substrate. The change in the absorption spectrum caused by the leaf vibration can be modeled reasonably well assuming the leaf and porous substrate properties are uniform.
Forecasting severe ice storms using numerical weather prediction: the March 2010 Newfoundland event
NASA Astrophysics Data System (ADS)
Hosek, J.; Musilek, P.; Lozowski, E.; Pytlak, P.
2011-02-01
The northeast coast of North America is frequently hit by severe ice storms. These freezing rain events can produce large ice accretions that damage structures, frequently power transmission and distribution infrastructure. For this reason, it is highly desirable to model and forecast such icing events, so that the consequent damages can be prevented or mitigated. The case study presented in this paper focuses on the March 2010 ice storm event that took place in eastern Newfoundland. We apply a combination of a numerical weather prediction model and an ice accretion algorithm to simulate a forecast of this event. The main goals of this study are to compare the simulated meteorological variables to observations, and to assess the ability of the model to accurately predict the ice accretion load for different forecast horizons. The duration and timing of the freezing rain event that occurred between the night of 4 March and the morning of 6 March was simulated well in all model runs. The total precipitation amounts in the model, however, differed by up to a factor of two from the observations. The accuracy of the model air temperature strongly depended on the forecast horizon, but it was acceptable for all simulation runs. The simulated accretion loads were also compared to the design values for power delivery structures in the region. The results indicated that the simulated values exceeded design criteria in the areas of reported damage and power outages.
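The ice accretion algorithm itself is not specified in the abstract; purely as an illustration of the mass-balance bookkeeping behind such load estimates, a minimal sketch of freezing-rain accretion on a flat surface (the function and all parameter values are hypothetical, not the algorithm used in the study):

```python
def ice_load_kg_per_m2(precip_rate_mm_h, duration_h, freezing_fraction=1.0):
    """Accreted ice mass per unit horizontal area from freezing rain.

    1 mm of liquid-equivalent precipitation deposits 1 kg of water per m^2;
    freezing_fraction is the portion that actually freezes on the surface.
    """
    return precip_rate_mm_h * duration_h * freezing_fraction

# e.g. 2 mm/h of freezing rain for 30 h, 80% of which freezes on contact
print(ice_load_kg_per_m2(2.0, 30.0, 0.8))  # 48.0 kg/m^2
```

Operational models additionally account for wind-driven impingement, droplet collision efficiency, and conductor geometry, which this sketch omits.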
4DVAR for Global Atmospheric Numerical Weather Prediction
2016-06-07
generation global atmospheric 4D variational (4DVAR) data assimilation system, NAVDAS-AR2. OBJECTIVES: The objective of this project is to construct and transition a 4DVAR global atmospheric data assimilation system for NOGAPS to the Fleet Numerical Meteorology and Oceanography Center (FNMOC). This system, NAVDAS-AR, represents the first operational, weak-constraint, 4DVAR atmospheric data assimilation system in the world. In this context
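For context, a weak-constraint 4DVAR system of this kind minimizes a cost function of the following general form (standard notation; not necessarily the exact formulation implemented in NAVDAS-AR):

```latex
J(x_0, \eta) =
  \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf{T}} \mathbf{B}^{-1} (x_0 - x_b)
+ \tfrac{1}{2} \sum_{i=0}^{N} \bigl( H_i(x_i) - y_i \bigr)^{\mathsf{T}} \mathbf{R}_i^{-1} \bigl( H_i(x_i) - y_i \bigr)
+ \tfrac{1}{2} \sum_{i=1}^{N} \eta_i^{\mathsf{T}} \mathbf{Q}_i^{-1} \eta_i ,
\qquad x_i = M_i(x_{i-1}) + \eta_i ,
```

where x_b is the background state, B, R_i and Q_i are the background-, observation- and model-error covariances, H_i the observation operators, M_i the forecast model, and the model-error increments η_i are what make the constraint "weak" as opposed to the strong-constraint (perfect-model) formulation.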
Comparison of Experimental Diagnostic Signals with Numerical Predictions
NASA Astrophysics Data System (ADS)
Comer, K.; Turnbull, A. D.
1997-11-01
A new code has been written to compare experimental diagnostic signals (such as SXR, ECE, BSE, and reflectometry) with those predicted from stability code output and experimental equilibrium diagnostic signals. Comparison of expected and actual diagnostic signals will help distinguish or identify modes by the signals they produce, and will also help validate stability codes. Predicted diagnostic signals are obtained by taking the total time derivative of the signal amplitude S and assuming steady-state conditions, so that the partial time derivative vanishes. Multiplying by the time interval Δt results in δS = ξ · ∇S, where δS is the predicted diagnostic signal perturbation, ξ is the perturbed plasma displacement predicted by stability codes (such as GATO or MARS), and ∇S is the gradient of the equilibrium diagnostic signal. ∇S may be obtained from an experimental equilibrium signal amplitude profile, or from a functional dependence of the signal amplitude on equilibrium temperature and density. Comparisons of predicted and actual signals from linear ideal and resistive codes show reasonable agreement with the measured signals in some cases, but there are also some significant discrepancies.
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.
Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L
2015-06-30
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction and cannot be used to bring forward the intake of drugs so that it is effective in neutralizing the pain. To address this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives.
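For illustration, the forecasting step of a state-space model of the kind N4SID identifies can be sketched by iterating the system equations over the prediction horizon; the matrices below define a made-up two-state model, not one identified from patient data:

```python
import numpy as np

# Hypothetical identified model: x_{k+1} = A x_k + B u_k,  y_k = C x_k
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.1],
              [0.05]])
C = np.array([[1.0, 0.0]])

def forecast(x0, inputs):
    """Propagate the state forward and return the predicted outputs."""
    x, ys = x0, []
    for u in inputs:
        x = A @ x + B @ u
        ys.append((C @ x).item())
    return ys

x0 = np.array([0.0, 0.0])
u = [np.array([1.0])] * 5   # constant hypothetical exogenous input
print(forecast(x0, u))
```

The identification of A, B, C from measured input-output data is what the N4SID algorithm itself provides (e.g., via subspace methods); only the forecasting arithmetic is shown here.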
Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens
Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn
2013-11-01
Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor controlling the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we went a step further with a blind test on a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were predicted to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide pointed laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.
Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J
2009-12-24
In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure.
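The ranking step reduces to sorting candidates by the hybrid lattice energy, i.e. the DFT total energy plus the dispersion correction; schematically (structure names and energies are placeholders, not the blind-test compounds):

```python
# Hypothetical candidate crystal structures with DFT and dispersion
# contributions to the lattice energy (kJ/mol per molecule; illustrative only).
candidates = {
    "structure_A": {"e_dft": -1502.1, "e_disp": -18.4},
    "structure_B": {"e_dft": -1503.0, "e_disp": -15.2},
    "structure_C": {"e_dft": -1501.7, "e_disp": -21.0},
}

def lattice_energy(c):
    """Hybrid lattice energy: DFT total energy plus van der Waals correction."""
    return c["e_dft"] + c["e_disp"]

# Most negative (lowest) lattice energy is ranked first.
ranked = sorted(candidates, key=lambda name: lattice_energy(candidates[name]))
print(ranked[0])  # structure_C
```

With these placeholder numbers, structure_C wins despite not having the lowest DFT energy, illustrating why the dispersion term can reorder the ranking.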
Accurate prediction of drug-induced liver injury using stem cell-derived populations.
Szkolnicka, Dagmara; Farnworth, Sarah L; Lucendo-Villarin, Baltasar; Storck, Christopher; Zhou, Wenli; Iredale, John P; Flint, Oliver; Hay, David C
2014-02-01
Despite major progress in the knowledge and management of human liver injury, millions of people suffer from chronic liver disease. Currently, the only cure for end-stage liver disease is orthotopic liver transplantation; however, this approach is severely limited by the supply of donor organs. Alternative approaches to restoring liver function have therefore been pursued, including the use of somatic and stem cell populations. Although such approaches are essential in developing scalable treatments, there is also an imperative to develop predictive human systems that more effectively study and/or prevent the onset of liver disease and decompensated organ function. We used renewable human stem cells, from defined genetic backgrounds, and drove them through developmental intermediates to yield highly active, drug-inducible, and predictive human hepatocyte populations. Most importantly, stem cell-derived hepatocytes displayed equivalence to primary adult hepatocytes following incubation with known hepatotoxins. In summary, we have developed a serum-free, scalable, and shippable cell-based model that faithfully predicts the potential for human liver injury. Such a resource has direct application in human modeling and, in the future, could play an important role in developing renewable cell-based therapies.
NASA Astrophysics Data System (ADS)
de Pari, Luigi, Jr.
2009-11-01
A numerical modeling and simulation analysis was performed on the hot-direct extrusion process with the finite element modeling (FEM) software package, DEFORM(TM) 3-D for three case studies. The research demonstrated that a commercially available, industry-accepted numerical simulation software package can predict the material response and microstructure development with simple simulated state variables (i.e. strain, strain rate, and temperature) and easily measured initial material characteristics (e.g. grain diameter). The predicted state variables provided insight into sources for limited extrudate quality, aided in processing improvements, and were the primary variables used to predict material response. The analysis began with studying the influence of tool misalignment and the degree of billet upset on extrudate dimensional quality, measured in terms of tube eccentricity, for a copper tube case study. Under ideal upset and tool alignment conditions, the simulated eccentricity was minimized. If the mandrel had a misalignment that was within tolerance, the eccentricity initially was minor in comparison to the eccentricity produced toward the end of extrusion. Consequently, through the use of DEFORM(TM) 3-D the extrusion mechanics were understood and sources for tube eccentricity were identified. In the second case study, a flow stress model was developed as a function of the state variables for an as-cast homogenized magnesium alloy. The modeled flow stress curve reasonably agreed with experimental compression flow stress data. The model was then implemented into DEFORM(TM) 3-D to utilize the simulated state variables to examine the extrusion of an automobile structural component. It was concluded that once the initial material characteristics are accounted for in the flow stress model it will more accurately and efficiently predict the flow stress response for the actual material being considered than a generic experimental flow stress-based material library
A Review of Element-Based Galerkin Methods for Numerical Weather Prediction
2015-04-01
Numerical Weather Prediction (NWP) is in a period of transition. As resolutions increase, global models are moving towards fully nonhydrostatic dynamical ... [Review of numerical methods for nonhydrostatic weather prediction models, Meteorol. Atmos. Phys. 82, 2003], this review discusses EBG methods as a viable
NASA Astrophysics Data System (ADS)
Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.
2016-02-01
The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally--a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process.
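The two-state kinetics underlying such a chemical kinetic model can be sketched per codon: while a codon is translated for dwell time τ = 1/k, the domain's folded-state probability relaxes exponentially toward its equilibrium value. A minimal illustration with hypothetical rates (ignoring, for brevity, that folding can begin only once the domain has emerged from the ribosome exit tunnel):

```python
import math

def cotranslational_folding(k_f, k_u, codon_rates):
    """Folded-state probability after each codon for a two-state domain.

    During each codon's dwell time tau = 1/rate, P relaxes toward the
    equilibrium value P_eq = k_f / (k_f + k_u) with relaxation rate k_f + k_u
    (the exact solution of dP/dt = k_f (1 - P) - k_u P over that interval).
    """
    p_eq = k_f / (k_f + k_u)
    k_rel = k_f + k_u
    p, curve = 0.0, []
    for rate in codon_rates:
        tau = 1.0 / rate
        p = p_eq + (p - p_eq) * math.exp(-k_rel * tau)
        curve.append(p)
    return curve

# hypothetical folding/unfolding rates (1/s) and codon translation rates (codons/s)
curve = cotranslational_folding(k_f=2.0, k_u=0.5, codon_rates=[10.0] * 20)
print(round(curve[-1], 3))
```

Slowing selected codons (larger τ at those positions) raises the folded probability attained during synthesis, which is the mechanism behind the synonymous-codon switching the abstract describes.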
Can tritiated water-dilution space accurately predict total body water in chukar partridges?
Crum, B.G.; Williams, J.B.; Nagy, K.A.
1985-11-01
Total body water (TBW) volumes determined from the dilution space of injected tritiated water have consistently overestimated actual water volumes (determined by desiccation to constant mass) in reptiles and mammals, but results for birds are controversial. We investigated potential errors in both the dilution method and the desiccation method in an attempt to resolve this controversy. Tritiated water dilution yielded an accurate measurement of water mass in vitro. However, in vivo, this method yielded a 4.6% overestimate of the amount of water (3.1% of live body mass) in chukar partridges, apparently largely because of loss of tritium from body water to sites of dissociable hydrogens on body solids. An additional source of overestimation (approximately 2% of body mass) was loss of tritium to the solids in blood samples during distillation of blood to obtain pure water for tritium analysis. Measuring tritium activity in plasma samples avoided this problem but required measurement of, and correction for, the dry matter content in plasma. Desiccation to constant mass by lyophilization or oven-drying also overestimated the amount of water actually in the bodies of chukar partridges by 1.4% of body mass, because these values included water adsorbed onto the outside of feathers. When desiccating defeathered carcasses, oven-drying at 70 degrees C yielded TBW values identical to those obtained from lyophilization, but TBW was overestimated (0.5% of body mass) by drying at 100 degrees C due to loss of organic substances as well as water.
Barron, M R; Roch, A M; Waters, J A; Parikh, J A; DeWitt, J M; Al-Haddad, M A; Ceppa, E P; House, M G; Zyromski, N J; Nakeeb, A; Pitt, H A; Schmidt, C Max
2014-03-01
Main pancreatic duct (MPD) involvement is a well-demonstrated risk factor for malignancy in intraductal papillary mucinous neoplasm (IPMN). Preoperative radiographic determination of IPMN type is heavily relied upon in oncologic risk stratification. We hypothesized that radiographic assessment of MPD involvement in IPMN is an accurate predictor of pathological MPD involvement. Data regarding all patients undergoing resection for IPMN at a single academic institution between 1992 and 2012 were gathered prospectively. Retrospective analysis of imaging and pathologic data was undertaken. Preoperative classification of IPMN type was based on cross-sectional imaging (MRI/magnetic resonance cholangiopancreatography (MRCP) and/or CT). Three hundred sixty-two patients underwent resection for IPMN. Of these, 334 had complete data for analysis. Of 164 suspected branch duct (BD) IPMN, 34 (20.7%) demonstrated MPD involvement on final pathology. Of 170 patients with suspicion of MPD involvement, 50 (29.4%) demonstrated no MPD involvement. Of 34 patients with suspected BD-IPMN who were found to have MPD involvement on pathology, 10 (29.4%) had invasive carcinoma. Alternatively, 2/50 (4%) of the patients with suspected MPD involvement who ultimately had isolated BD-IPMN demonstrated invasive carcinoma. Preoperative radiographic IPMN type did not correlate with final pathology in 25% of the patients. In addition, risk of invasive carcinoma correlates with pathologic presence of MPD involvement.
Wang, Zhiheng; Yang, Qianqian; Li, Tonghua; Cong, Peisheng
2015-01-01
The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to furthering the understanding of the principles and mechanisms of protein function. Here, we propose a novel and more accurate predictor of protein intrinsically disordered regions, DisoMCS. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined as fragments located at either terminus of an ordered region where it connects to a disordered region. The multi-class conservative score is then generated by sequence alignment against a known-structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as features to identify disordered regions in sequences. DisoMCS utilizes a non-redundant dataset as the training set, the MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions, a residue is classified as ordered or disordered according to an optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS is very competitive in terms of prediction accuracy when compared with well-established, publicly available disordered-region predictors. The results also indicated that our approach is more accurate when a query has higher homology with the knowledge database. Availability: DisoMCS is available at http://cal.tongji.edu.cn/disorder/. PMID:26090958
Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.
2014-01-28
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
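As a reminder of the bookkeeping such a BDE benchmark rests on, the quantity is the energy of the radical fragments minus that of the parent molecule; a minimal sketch (the electronic energies are illustrative placeholders, and zero-point/thermal corrections are omitted):

```python
HARTREE_TO_KCAL = 627.509  # kcal/mol per hartree

def bde_kcal(e_parent, e_frag1, e_frag2):
    """BDE for A-B -> A. + B. from total electronic energies (hartree)."""
    return (e_frag1 + e_frag2 - e_parent) * HARTREE_TO_KCAL

# hypothetical energies for an O-H homolysis, e.g. CH3OH -> CH3. + .OH
print(round(bde_kcal(-115.7234, -39.8371, -75.7445), 1))  # 89.0
```

The paper's 1 kcal/mol accuracy target refers to this difference, which is why size-extensivity errors in the three separate energies matter so much.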
Simple numerical method for predicting steady compressible flows
NASA Technical Reports Server (NTRS)
Von Lavante, E.; Melson, N. Duane
1987-01-01
The present numerical method for solving the isenthalpic form of the governing equations for compressible viscous and inviscid flows is based on an implicit flux-vector-splitting formulation and has been tested on several demanding viscous and inviscid configurations. Time-marching to steady state is accelerated by a multigrid procedure that effectively increases the convergence rate. The steady-state results obtained are largely of good quality and required only short computational times.
Techniques and resources for storm-scale numerical weather prediction
NASA Technical Reports Server (NTRS)
Droegemeier, Kelvin; Grell, Georg; Doyle, James; Soong, Su-Tzai; Skamarock, William; Bacon, David; Staniforth, Andrew; Crook, Andrew; Wilhelmson, Robert
1993-01-01
The topics discussed include the following: multiscale application of the 5th-generation PSU/NCAR mesoscale model, the coupling of nonhydrostatic atmospheric and hydrostatic ocean models for air-sea interaction studies; a numerical simulation of cloud formation over complex topography; adaptive grid simulations of convection; an unstructured grid, nonhydrostatic meso/cloud scale model; efficient mesoscale modeling for multiple scales using variable resolution; initialization of cloud-scale models with Doppler radar data; and making effective use of future computing architectures, networks, and visualization software.
Simple numerical method for predicting steady compressible flows
NASA Technical Reports Server (NTRS)
Vonlavante, Ernst; Nelson, N. Duane
1986-01-01
A numerical method for solving the isenthalpic form of the governing equations for compressible viscous and inviscid flows was developed. The method was based on the concept of flux vector splitting in its implicit form. The method was tested on several demanding inviscid and viscous configurations. Two different forms of the implicit operator were investigated. The time marching to steady state was accelerated by the implementation of the multigrid procedure. Its various forms very effectively increased the rate of convergence of the present scheme. High quality steady state results were obtained in most of the test cases; these required only short computational times due to the relative efficiency of the basic method.
NASA Astrophysics Data System (ADS)
Rahneshin, Vahid; Chierichetti, Maria
2016-09-01
In this paper, a combined numerical and experimental method, called Extended Load Confluence Algorithm, is presented to accurately predict the dynamic response of non-periodic structures when little or no information about the applied loads is available. This approach, which falls into the category of Shape Sensing methods, inputs limited experimental information acquired from sensors to a mapping algorithm that predicts the response at unmeasured locations. The proposed algorithm consists of three major cores: an experimental core for data acquisition, a numerical core based on Finite Element Method for modeling the structure, and a mapping algorithm that improves the numerical model based on a modal approach in the frequency domain. The robustness and precision of the proposed algorithm are verified through numerical and experimental examples. The results of this paper demonstrate that without a precise knowledge of the loads acting on the structure, the dynamic behavior of the system can be predicted in an effective and precise manner after just a few iterations.
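The modal mapping at the core of such shape-sensing schemes can be sketched as a least-squares fit of modal coordinates to the measured degrees of freedom, followed by expansion to the unmeasured ones; the mode shapes below belong to a toy 4-DOF system, not the authors' finite element model:

```python
import numpy as np

# Toy mode-shape matrix: 4 structural DOFs x 2 retained modes
Phi = np.array([[1.0,  1.0],
                [2.0,  1.0],
                [3.0, -1.0],
                [4.0, -2.0]])

measured_dofs = [0, 3]             # sensors only at DOFs 0 and 3
y_measured = np.array([0.5, 1.0])  # hypothetical sensor readings

# Least-squares modal coordinates from the measured rows,
# then expansion of the response to all (unmeasured) DOFs.
q, *_ = np.linalg.lstsq(Phi[measured_dofs], y_measured, rcond=None)
y_full = Phi @ q
print(np.round(y_full, 3))
```

In the authors' algorithm this fit is performed in the frequency domain and iterated to refine the numerical model; the snippet only shows the measured-to-unmeasured expansion that underlies it.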
Krokhotin, Andrey; Dokholyan, Nikolay V
2015-01-01
Computational methods can provide significant insights into RNA structure and dynamics, bridging the gap in our understanding of the relationship between structure and biological function. Simulations enrich and enhance our understanding of data derived on the bench, as well as provide feasible alternatives to costly or technically challenging experiments. Coarse-grained computational models of RNA are especially important in this regard, as they allow analysis of events occurring in timescales relevant to RNA biological function, which are inaccessible through experimental methods alone. We have developed a three-bead coarse-grained model of RNA for discrete molecular dynamics simulations. This model is efficient in de novo prediction of short RNA tertiary structure, starting from RNA primary sequences of less than 50 nucleotides. To complement this model, we have incorporated additional base-pairing constraints and have developed a bias potential reliant on data obtained from hydroxyl probing experiments that guide RNA folding to its correct state. By introducing experimentally derived constraints to our computer simulations, we are able to make reliable predictions of RNA tertiary structures up to a few hundred nucleotides. Our refined model exemplifies a valuable benefit achieved through integration of computation and experimental methods.
Bakhtiarizadeh, Mohammad Reza; Moradi-Shahrbabak, Mohammad; Ebrahimi, Mansour; Ebrahimie, Esmaeil
2014-09-07
Due to the central roles of lipid binding proteins (LBPs) in many biological processes, sequence-based identification of LBPs is of great interest. The major challenge is that LBPs are diverse in sequence, structure, and function, which results in low accuracy of sequence-homology-based methods. Therefore, there is a need for developing alternative functional prediction methods irrespective of sequence similarity. To identify LBPs from non-LBPs, the performances of a support vector machine (SVM) and a neural network were compared in this study. Comprehensive protein features and various techniques were employed to create datasets. Five-fold cross-validation (CV) and independent evaluation (IE) tests were used to assess the validity of the two methods. The results indicated that the SVM outperforms the neural network. The SVM achieved 89.28% (CV) and 89.55% (IE) overall accuracy in identification of LBPs from non-LBPs and 92.06% (CV) and 92.90% (IE) (on average) for classification of different LBP classes. Increasing the number and range of extracted protein features, as well as optimization of the SVM parameters, significantly increased the efficiency of LBP class prediction in comparison to the only previous report in this field. Altogether, the results showed that the SVM algorithm can be run on broad, computationally calculated protein features and offers a promising tool for detecting LBP classes. The proposed approach has the potential to integrate and improve the common sequence-alignment-based methods.
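The comparison described above can be sketched with scikit-learn; the data here are synthetic stand-ins for the computed protein features, and the hyperparameters are illustrative, not those of the study.

```python
# Sketch: SVM vs. a small neural network, scored with 5-fold CV.
# Features and labels are synthetic placeholders for the LBP datasets.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 200 "proteins" x 40 computed features; binary LBP / non-LBP labels
X = rng.normal(size=(200, 40))
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # synthetic signal in 5 features

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(20,),
                                 max_iter=2000, random_state=0))

svm_acc = cross_val_score(svm, X, y, cv=5).mean()
nn_acc = cross_val_score(nn, X, y, cv=5).mean()
print(f"SVM 5-fold accuracy: {svm_acc:.3f}")
print(f"NN  5-fold accuracy: {nn_acc:.3f}")
```

On real protein features the study additionally tuned the SVM parameters, which is what drove the reported accuracy gains.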
Accurate Prediction of the Dynamical Changes within the Second PDZ Domain of PTP1e
Cilia, Elisa; Vuister, Geerten W.; Lenaerts, Tom
2012-01-01
Experimental NMR relaxation studies have shown that peptide binding induces dynamical changes at the side-chain level throughout the second PDZ domain of PTP1e, identifying as such the collection of residues involved in long-range communication. Even though different computational approaches have identified subsets of residues that were qualitatively comparable, no quantitative analysis of the accuracy of these predictions was thus far determined. Here, we show that our information theoretical method produces quantitatively better results with respect to the experimental data than some of these earlier methods. Moreover, it provides a global network perspective on the effect experienced by the different residues involved in the process. We also show that these predictions are consistent within both the human and mouse variants of this domain. Together, these results improve the understanding of intra-protein communication and allostery in PDZ domains, underlining at the same time the necessity of producing similar data sets for further validation of these kinds of methods. PMID:23209399
Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.
2008-10-20
One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three-dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER-positive patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic value.
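The Kaplan-Meier analysis used to compare the signature groups can be illustrated with a minimal pure-Python estimator; the follow-up times and event indicators below are invented for illustration, not the study's data.

```python
# Minimal Kaplan-Meier survival estimator (distinct event times assumed).
def kaplan_meier(times, events):
    """Return (time, survival) pairs.

    events[i] = 1 for an observed outcome at times[i], 0 for censoring.
    """
    s, curve = 1.0, []
    at_risk = len(times)
    for t, e in sorted(zip(times, events)):
        if e:                       # event at time t: survival drops
            s *= (at_risk - 1) / at_risk
            curve.append((t, s))
        at_risk -= 1                # event or censoring leaves the risk set
    return curve

# Hypothetical follow-up times (years) and outcome indicators
times = [2, 3, 4, 5, 6, 7, 8, 10, 10, 10]
events = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0]
for t, s in kaplan_meier(times, events):
    print(f"t={t}: S(t)={s:.2f}")
```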
Advances and Challenges in Numerical Weather and Climate Prediction
NASA Astrophysics Data System (ADS)
Yu, Tsann-Wang
2010-10-01
In this review article, the dispersive nature of the various waves that exist in the atmosphere is first reviewed. These waves include Rossby waves, Kelvin waves, acoustic waves, internal and external gravity waves and many others, whose intrinsic nature and great relevance to weather and climate forecasts are described. The paper then describes the latest developments in global observations and in data analysis and assimilation methodologies. These include the three-dimensional and four-dimensional variational data assimilation systems being used in the world's major operational weather and climate forecasting centers. Some recent results from applying novel atmospheric satellite and chemical observation data to these data assimilation systems, and from the latest developments in high-resolution modeling and ensemble forecasting at the operational numerical weather forecasting centers, are also presented. Finally, problems of inherent errors associated with initial conditions, errors associated with the coupling of dynamics and physics, and the related numerical issues in variational data assimilation systems are discussed.
Numerical analysis and prediction of laser forming of thin plate
NASA Astrophysics Data System (ADS)
Tamsaout, Toufik; Amara, EL-Hachemi
2012-03-01
Laser forming is a technique for designing and constructing complex metallic workpieces with special shapes that are difficult to achieve with conventional techniques. The main advantage of the process is that it is contactless and does not require any external force; it also offers more flexibility at a lower price. This kind of processing interests industries that use stamping or other costly prototyping methods, such as the aerospace, automotive, naval and microelectronics industries. Analytical modeling of the laser forming process is often complex or impossible, since the dimensions and the mechanical properties change in time and space. Therefore, a numerical approach is more suitable for laser forming modeling. Our numerical study is divided into two models: the first is a purely thermal treatment, which allows the determination of the temperature field produced by a laser pass, and the second is a coupled thermomechanical treatment. The temperature field resulting from the first stage is used to calculate the stress field, the deformations and the bending angle of the plate. The thermo-mechanical properties of the material are assumed isotropic but temperature-dependent.
NASA Astrophysics Data System (ADS)
Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.
2013-06-01
This study encompasses columnar ozone modelling in Peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the period 2003-2008, was employed to develop models to predict the value of columnar ozone (O3) in the study area. A combined method, multiple regression with principal component analysis (PCA), was used to improve the prediction accuracy of columnar ozone. Separate analyses were carried out for the north-east monsoon (NEM) and south-west monsoon (SWM) seasons. O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, and positively correlated with CO, AST, SSKT, and AT, during both the NEM and SWM seasons. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameters as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model. It was found that an increase in columnar O3 is associated with an increase in AST, SSKT, AT, and CO and with a drop in CH4, H2Ovapour, RH, and MSP. Fitting the best models for columnar O3 using eight of the independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables appearing in both regression equations were SSKT, CH4 and RH, and the principal precursor of columnar O3 in both seasons was SSKT.
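A minimal sketch of the combined approach, with plain PCA standing in for the varimax-rotated component selection and synthetic stand-ins for the eight AIRS parameters:

```python
# Combined PCA + multiple linear regression on synthetic data.
# The eight columns mimic AST, CO, CH4, H2O, SSKT, AT, RH, MSP (not real data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))                 # synthetic atmospheric parameters
o3 = X @ rng.normal(size=8) + 0.1 * rng.normal(size=300)  # synthetic columnar O3

# Reduce collinearity by regressing on leading principal components
pcs = PCA(n_components=4).fit_transform(X)
model = LinearRegression().fit(pcs, o3)
r2 = model.score(pcs, o3)
print(f"R^2 on principal components: {r2:.2f}")
```

The study selects predictors from high-loading varimax-rotated components rather than regressing on the components directly; this sketch only shows the dimensionality-reduction step.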
How Accurate Is the Prediction of Maximal Oxygen Uptake with Treadmill Testing?
Wicks, John R.; Oldridge, Neil B.
2016-01-01
Background: Cardiorespiratory fitness measured by treadmill testing has prognostic significance in determining mortality in cardiovascular and other chronic disease states. The accuracy of a recently developed method for estimating maximal oxygen uptake (VO2peak), the heart rate index (HRI), which depends only on heart rate (HR), was tested against oxygen uptake (VO2) either measured or predicted from conventional treadmill parameters (speed, incline, protocol time). Methods: The HRI equation, METs = 6 × HRI − 5, where HRI = maximal HR/resting HR, provides a surrogate measure of VO2peak. Forty large-scale treadmill studies were identified through a systematic search of MEDLINE, Google Scholar and Web of Science in which VO2peak was either measured (TM-VO2meas; n = 20) or predicted (TM-VO2pred; n = 20) from treadmill parameters. All studies were required to have reported group mean data for both resting and maximal HRs for determination of HR-index-derived oxygen uptake (HRI-VO2). Results: The 20 studies with measured VO2 (TM-VO2meas) involved 11,477 participants (median 337), and the 20 studies with predicted VO2 (TM-VO2pred) involved a total of 105,044 participants (median 3,736). A difference of only 0.4% was seen between mean (±SD) VO2peak for TM-VO2meas and HRI-VO2 (6.51±2.25 METs and 6.54±2.28, respectively; p = 0.84). In contrast, there was a highly significant 21.1% difference between mean (±SD) TM-VO2pred and HRI-VO2 (8.12±1.85 METs and 6.71±1.92, respectively; p<0.001). Conclusion: Although mean TM-VO2meas and HRI-VO2 were almost identical, mean TM-VO2pred was more than 20% greater than mean HRI-VO2. PMID:27875547
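The HRI equation lends itself to a one-line helper; the conversion of 1 MET to 3.5 mL O2/kg/min is a standard convention assumed here, not stated in the abstract.

```python
# The HRI equation from the abstract: METs = 6 * HRI - 5, HRI = HRmax/HRrest.
def hri_mets(hr_max: float, hr_rest: float) -> float:
    """Estimated VO2peak in METs from the heart rate index."""
    hri = hr_max / hr_rest
    return 6.0 * hri - 5.0

mets = hri_mets(hr_max=180, hr_rest=60)  # HRI = 3.0
print(mets)                              # → 13.0
print(mets * 3.5)                        # VO2peak in mL O2/kg/min → 45.5
```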
Hall, Samuel R; Stephens, Jonny R; Seaby, Eleanor G; Andrade, Matheus Gesteira; Lowry, Andrew F; Parton, Will J C; Smith, Claire F; Border, Scott
2016-10-01
It is important that clinicians are able to adequately assess their level of knowledge and competence in order to be safe practitioners of medicine. The medical literature contains numerous examples of poor self-assessment accuracy amongst medical students over a range of subjects; however, this ability has yet to be observed in neuroanatomy. Second-year medical students attending neuroanatomy revision sessions at the University of Southampton and competitors in the National Undergraduate Neuroanatomy Competition (NUNC) were asked to rate their level of knowledge in neuroanatomy. The responses from the former group were compared to performance on a ten-item multiple-choice question examination, and those from the latter group were compared to performance within the competition. In both cohorts, self-assessments of perceived level of knowledge correlated weakly with performance in the respective objective knowledge assessments (r = 0.30 and r = 0.44). Within the NUNC, this correlation improved when students were instead asked to rate their performance on a specific examination within the competition (spotter, rS = 0.68; MCQ, rS = 0.58). Despite its inherent difficulty, medical student self-assessment accuracy in neuroanatomy is comparable to other subjects within the medical curriculum. Anat Sci Educ 9: 488-495. © 2016 American Association of Anatomists.
A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications
Bronevetsky, G; de Supinski, B; Schulz, M
2009-02-13
Understanding the soft error vulnerability of supercomputer applications is critical as these systems use ever larger numbers of devices that have decreasing feature sizes and, thus, increasing frequency of soft errors. As many large-scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.
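A toy version of the fault-injection methodology (not the paper's code): flip a single bit in one input of a BLAS-like matrix multiply and measure the resulting output error.

```python
# Inject a single bit flip into a matrix-multiply input and observe
# how the error propagates to the output (illustrative experiment).
import struct
import numpy as np

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of a float64 value via its IEEE-754 representation."""
    (i,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", i ^ (1 << bit)))
    return y

rng = np.random.default_rng(2)
A = rng.normal(size=(64, 64))
B = rng.normal(size=(64, 64))
clean = A @ B

A_faulty = A.copy()
A_faulty[10, 20] = flip_bit(A_faulty[10, 20], 52)  # low exponent bit: x2 or /2
rel_err = np.abs(A_faulty @ B - clean).max() / np.abs(clean).max()
print(f"max relative output error: {rel_err:.3e}")
```

Repeating this over many bit positions and input locations yields an error profile like the ones the paper builds for each routine.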
Fast and Accurate Accessible Surface Area Prediction Without a Sequence Profile.
Faraggi, Eshel; Kouza, Maksim; Zhou, Yaoqi; Kloczkowski, Andrzej
2017-01-01
A fast accessible surface area (ASA) predictor is presented. In this new approach, no residue mutation profiles generated by multiple sequence alignments are used as inputs. Instead, we use only single-sequence information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence-alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for ASAquick are available from Research and Information Systems at http://mamiris.com and from the Battelle Center for Mathematical Medicine at http://mathmed.org.
Sequence features accurately predict genome-wide MeCP2 binding in vivo
Rube, H. Tomas; Lee, Wooje; Hejna, Miroslav; Chen, Huaiyang; Yasui, Dag H.; Hess, John F.; LaSalle, Janine M.; Song, Jun S.; Gong, Qizhi
2016-01-01
Methyl-CpG binding protein 2 (MeCP2) is critical for proper brain development and expressed at near-histone levels in neurons, but the mechanism of its genomic localization remains poorly understood. Using high-resolution MeCP2-binding data, we show that DNA sequence features alone can predict binding with 88% accuracy. Integrating MeCP2 binding and DNA methylation in a probabilistic graphical model, we demonstrate that previously reported genome-wide association with methylation is in part due to MeCP2's affinity to GC-rich chromatin, a result replicated using published data. Furthermore, MeCP2 co-localizes with nucleosomes. Finally, MeCP2 binding downstream of promoters correlates with increased expression in Mecp2-deficient neurons. PMID:27008915
NASA Astrophysics Data System (ADS)
Du, Xia; Zhao, Dong-Xia; Yang, Zhong-Zhi
2013-02-01
A new approach to characterize and measure bond strength has been developed. First, we propose a method to accurately calculate the potential acting on an electron in a molecule (PAEM) at the saddle point along a chemical bond in situ, denoted by Dpb. Then, a direct method to quickly evaluate bond strength is established. We choose some familiar molecules as models for benchmarking this method. As a practical application, the Dpb values of base pairs in DNA along C-H and N-H bonds are obtained for the first time. All results show that C7-H of A-T and C8-H of G-C are the relatively weak bonds and hence the vulnerable positions in DNA damage. The significance of this work is twofold: (i) a method is developed to calculate the Dpb of various sizable molecules in situ quickly and accurately; (ii) this work demonstrates the feasibility of quickly predicting bond strength in macromolecules.
NASA Astrophysics Data System (ADS)
Jin, Xuhon; Huang, Fei; Hu, Pengju; Cheng, Xiaoli
2016-11-01
A fundamental prerequisite for satellites operating in Low Earth Orbit (LEO) is the availability of fast and accurate predictions of non-gravitational aerodynamic forces, which are characterised by the free molecular flow regime. However, conventional computational methods such as the analytical integral method and the direct simulation Monte Carlo (DSMC) technique either fail to deal with flow shadowing and multiple reflections or are computationally expensive. This work develops a general computer program for the accurate calculation of aerodynamic forces in the free molecular flow regime using the test particle Monte Carlo (TPMC) method, and the non-gravitational aerodynamic forces acting on the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite are calculated for different freestream conditions and gas-surface interaction models.
Resnic, F S; Ohno-Machado, L; Selwyn, A; Simon, D I; Popma, J J
2001-07-01
The objectives of this analysis were to develop and validate simplified risk score models for predicting the risk of major in-hospital complications after percutaneous coronary intervention (PCI) in the era of widespread stenting and use of glycoprotein IIb/IIIa antagonists. We then sought to compare the performance of these simplified models with those of full logistic regression and neural network models. From January 1, 1997 to December 31, 1999, data were collected on 4,264 consecutive interventional procedures at a single center. Risk score models were derived from multiple logistic regression models using the first 2,804 cases and then validated on the final 1,460 cases. The area under the receiver operating characteristic (ROC) curve for the risk score model that predicted death was 0.86 compared with 0.85 for the multiple logistic model and 0.83 for the neural network model (validation set). For the combined end points of death, myocardial infarction, or bypass surgery, the corresponding areas under the ROC curves were 0.74, 0.78, and 0.81, respectively. Previously identified risk factors were confirmed in this analysis. The use of stents was associated with a decreased risk of in-hospital complications. Thus, risk score models can accurately predict the risk of major in-hospital complications after PCI. Their discriminatory power is comparable to those of logistic models and neural network models. Accurate bedside risk stratification may be achieved with these simple models.
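The derivation of an integer risk score from logistic-regression coefficients can be sketched as follows; the six binary risk factors, their weights, and the scoring rule are synthetic illustrations, not the published model.

```python
# Sketch: fit a logistic model, convert coefficients to integer points,
# and compare discriminatory power (ROC AUC) of score vs. full model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(2804, 6)).astype(float)  # six binary risk factors
logit = X @ np.array([1.2, 0.9, 0.7, 0.5, 0.4, 0.3]) - 4.0
y = (rng.random(2804) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

lr = LogisticRegression().fit(X, y)
# Integer points: each coefficient divided by the smallest, then rounded
points = np.rint(lr.coef_[0] / np.abs(lr.coef_[0]).min()).astype(int)

score_auc = roc_auc_score(y, X @ points)
model_auc = roc_auc_score(y, lr.decision_function(X))
print(f"risk-score AUC: {score_auc:.3f}  logistic AUC: {model_auc:.3f}")
```

The small gap between the two AUCs mirrors the paper's finding that simple point scores discriminate nearly as well as the full regression.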
NASA Astrophysics Data System (ADS)
Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith
2015-09-01
Simulation of breakdown is important for understanding and designing a variety of applications, such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation through late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~1/100 of the mean time between collisions and a mesh size ~1/25 of the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields per step is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
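The reported constraints translate directly into timestep and mesh bounds once the mean free path is known; the density, cross section, and electron speed below are assumed round numbers for illustration, not values from the study.

```python
# Back-of-the-envelope PIC-DSMC constraints: dt <= tau/100, dx <= mfp/25.
n_neutral = 2.5e25   # neutral number density, m^-3 (assumed)
sigma = 1.0e-19      # collision cross section, m^2 (assumed)
v_e = 1.0e6          # characteristic electron speed, m/s (assumed)

mfp = 1.0 / (n_neutral * sigma)   # mean free path
tau = mfp / v_e                   # mean time between collisions

dx_max = mfp / 25.0
dt_max = tau / 100.0
print(f"mean free path = {mfp:.3e} m -> mesh size <= {dx_max:.3e} m")
print(f"collision time = {tau:.3e} s -> timestep  <= {dt_max:.3e} s")
```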
Predictive Lateral Logic for Numerical Entry Guidance Algorithms
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
Recent entry guidance algorithm development has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo heritage lateral error (or azimuth error) deadbands, in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.
Lift capability prediction for helicopter rotor blade-numerical evaluation
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Cîrciu, Ionicǎ; Luculescu, Doru
2016-06-01
The main objective of this paper is to describe the key physical features for modelling the unsteady aerodynamic effects found on a helicopter rotor blade operating under nominally attached flow conditions away from stall. The unsteady effects were considered as phase differences between the forcing function and the aerodynamic response, being functions of the reduced frequency, the Mach number and the mode of forcing. For a helicopter rotor, the reduced frequency at any blade element cannot be calculated exactly, but a first-order approximation gives useful information about the degree of unsteadiness. The sources of unsteady effects were decomposed into perturbations to the local angle of attack and velocity field. The numerical computations and graphics were performed in the FLUENT and MAPLE software environments. This mathematical model is applicable to the aerodynamic design of wind turbine rotor blades, hybrid energy system optimization and aeroelastic analysis.
NUMERICALLY PREDICTED INDIRECT SIGNATURES OF TERRESTRIAL PLANET FORMATION
Leinhardt, Zoë M.; Dobinson, Jack; Carter, Philip J.; Lines, Stefan
2015-06-10
The intermediate phases of planet formation are not directly observable due to lack of emission from planetesimals. Planet formation is, however, a dynamically active process resulting in collisions between the evolving planetesimals and the production of dust. Thus, indirect observation of planet formation may indeed be possible in the near future. In this paper we present synthetic observations based on numerical N-body simulations of the intermediate phase of planet formation including a state-of-the-art collision model, EDACM, which allows multiple collision outcomes, such as accretion, erosion, and bouncing events. We show that the formation of planetary embryos may be indirectly observable by a fully functioning ALMA telescope if the surface area involved in planetesimal evolution is sufficiently large and/or the amount of dust produced in the collisions is sufficiently high in mass.
Numerical prediction and potential vorticity diagnosis of extratropical cyclones
NASA Astrophysics Data System (ADS)
Huo, Zonghui
By combining numerical simulations with different diagnostic tools, this thesis examines various aspects of two explosively deepening cyclones: the superstorm of March 12-14, 1993, and a storm that occurred during Intensive Observation Period 14 (IOP-14) of the Canadian Atlantic Storm Program (CASP). Using conventional observations, the general aspects of the storms are documented and the dynamical and physical mechanisms are discussed. The life cycles are then simulated with the Canadian Regional Finite-Element (RFE) model. To improve the model initial conditions, a methodology is proposed on the basis of potential vorticity (PV) thinking and proves successful in the simulation of the March 1993 superstorm. Using the successful simulations as control runs, a series of numerical sensitivity experiments is conducted to study the impacts of model physics on the development of the two rapidly deepening cyclones. The deepening mechanisms of both storms are examined within the context of PV thinking, i.e., using piecewise potential vorticity inversion diagnostics. In both cases, the upper-level PV anomalies contribute the most to the surface cyclone, followed by the lower-level thermal anomalies and the diabatic-heating-related moist PV anomaly. It is found that a favorable phase tilt between the upper- and lower-level PV anomalies allows a mutual interaction between them, in which the circulations associated with the upper-level anomalies enhance the lower-level anomalies, which in turn feed back positively into the upper-level PV anomalies. In addition to the vertical interactions, there also exist lateral interactions between the upper-level PV anomalies for the March 1993 superstorm. The upper-level PV features (troughs) are isolated with the piecewise PV inversion. By removing or changing the intensity of a trough in the initial conditions, the RFE model is integrated to examine the impact of each trough, and its interaction with the other trough, on the superstorm.
Integrative subcellular proteomic analysis allows accurate prediction of human disease-causing genes
Zhao, Li; Chen, Yiyun; Bajaj, Amol Onkar; Eblimit, Aiden; Xu, Mingchu; Soens, Zachry T.; Wang, Feng; Ge, Zhongqi; Jung, Sung Yun; He, Feng; Li, Yumei; Wensel, Theodore G.; Qin, Jun; Chen, Rui
2016-01-01
Proteomic profiling on subcellular fractions provides invaluable information regarding both protein abundance and subcellular localization. When integrated with other data sets, it can greatly enhance our ability to predict gene function genome-wide. In this study, we performed a comprehensive proteomic analysis on the light-sensing compartment of photoreceptors called the outer segment (OS). By comparing with the protein profile obtained from the retina tissue depleted of OS, an enrichment score for each protein is calculated to quantify protein subcellular localization, and 84% accuracy is achieved compared with experimental data. By integrating the protein OS enrichment score, the protein abundance, and the retina transcriptome, the probability of a gene playing an essential function in photoreceptor cells is derived with high specificity and sensitivity. As a result, a list of genes that will likely result in human retinal disease when mutated was identified and validated by previous literature and/or animal model studies. Therefore, this new methodology demonstrates the synergy of combining subcellular fractionation proteomics with other omics data sets and is generally applicable to other tissues and diseases. PMID:26912414
Accurate prediction of the refractive index of polymers using first principles and data modeling
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes
Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
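The Lorentz-Lorenz relation at the core of the model can be inverted for the refractive index in a few lines; the polarizability and number density values below are placeholders, not values from the study.

```python
# Lorentz-Lorenz: (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha (CGS units),
# solved for n given a polarizability and number density.
import math

def refractive_index(alpha_cm3: float, number_density: float) -> float:
    """Refractive index from polarizability (cm^3) and density (cm^-3)."""
    L = (4.0 * math.pi / 3.0) * number_density * alpha_cm3
    return math.sqrt((1.0 + 2.0 * L) / (1.0 - L))

# e.g. a repeat-unit polarizability of 1.5e-23 cm^3 at 5e21 units/cm^3
n = refractive_index(1.5e-23, 5.0e21)
print(f"predicted RI: {n:.3f}")
```

In the paper's pipeline, the polarizability comes from quantum chemistry (extrapolated to the polymer limit) and the number density from a machine-learned packing-fraction model; this sketch only shows the final combination step.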
Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S
2009-04-01
The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin.
Searching for Computational Strategies to Accurately Predict pKas of Large Phenolic Derivatives.
Rebollar-Zepeda, Aida Mariana; Campos-Hernández, Tania; Ramírez-Silva, María Teresa; Rojas-Hernández, Alberto; Galano, Annia
2011-08-09
Twenty-two reaction schemes have been tested within the cluster-continuum model, including up to seven explicit water molecules. They have been used in conjunction with nine different methods within density functional theory and with second-order Møller-Plesset perturbation theory. The quality of the pKa predictions was found to be strongly dependent on the chosen scheme, while only moderately influenced by the method of calculation. We recommend the E1 reaction scheme [HA + OH(-)(3H2O) ↔ A(-)(H2O) + 3H2O], since it yields mean unsigned errors (MUE) lower than 1 pKa unit for most of the tested functionals. The best pKa values obtained from this reaction scheme are those involving calculations with the PBE0 (MUE = 0.77), TPSS (MUE = 0.82), BHandHLYP (MUE = 0.82), and B3LYP (MUE = 0.86) functionals. Compared to the proton-exchange method, which also gives very small MUE values, this scheme has the additional advantage of being independent of experiment. It should be kept in mind, however, that these recommendations are valid within the cluster-continuum model, using the polarizable continuum model in conjunction with the united atom Hartree-Fock cavity and a strategy based on thermodynamic cycles. Changes in any of these aspects of the methodology may lead to different outcomes.
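The pKa values in such thermodynamic-cycle schemes ultimately follow from an aqueous deprotonation free energy via pKa = ΔG_aq/(RT ln 10); a minimal sketch, with an illustrative ΔG (not a value from the paper):

```python
# pKa from an aqueous deprotonation free energy (kcal/mol) at 298.15 K.
import math

R = 1.987204e-3  # gas constant, kcal/(mol K)
T = 298.15       # temperature, K

def pka_from_dg(dg_aq_kcal: float) -> float:
    """pKa = Delta G_aq / (RT ln 10)."""
    return dg_aq_kcal / (R * T * math.log(10.0))

print(f"{pka_from_dg(13.6):.2f}")  # an illustrative phenol-like Delta G
```

Each reaction scheme in the paper differs in how ΔG_aq is assembled (explicit waters, reference species), but all feed this same relation; note that an error of only 1.36 kcal/mol in ΔG shifts the pKa by a full unit, which is why scheme choice matters so much.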
Towards Relaxing the Spherical Solar Radiation Pressure Model for Accurate Orbit Predictions
NASA Astrophysics Data System (ADS)
Lachut, M.; Bennett, J.
2016-09-01
The well-known cannonball model has been used ubiquitously for decades to capture the effects of atmospheric drag and solar radiation pressure on satellites and space debris. While it lends itself naturally to spherical objects, its validity for non-spherical objects has been debated heavily for years throughout the space situational awareness community. One of the leading motivations to improve orbit predictions by relaxing the spherical assumption is the ongoing demand for more robust and reliable conjunction assessments. In this study, we explore the orbit propagation of a flat plate in a near-GEO orbit under the influence of solar radiation pressure, using a Lambertian BRDF model. This approach accounts for the spin rate and orientation of the object, which is typically determined in practice using a light curve analysis. Here, simulations are performed that systematically reduce the spin rate to demonstrate the point at which the spherical model no longer describes the orbital elements of the spinning plate. Further understanding of this threshold would provide insight into when a higher-fidelity model should be used, resulting in improved orbit propagations. The work presented here is therefore of particular interest to organizations and researchers that maintain their own catalog and/or perform conjunction analyses.
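For readers unfamiliar with the cannonball model discussed above, its SRP acceleration reduces to a one-line formula. The sketch below is a minimal illustration with assumed, not paper-supplied, parameter values.

```python
import math

# Hedged sketch of the cannonball solar radiation pressure model.
# Constants and example parameters are illustrative assumptions,
# not taken from the study.
P_SUN = 4.56e-6      # solar radiation pressure at 1 AU [N/m^2]
AU = 1.495978707e11  # astronomical unit [m]

def srp_acceleration(cr, area, mass, r_sun_m):
    """Magnitude of SRP acceleration for a sphere with reflectivity
    coefficient cr, cross-section area [m^2] and mass [kg], at
    heliocentric distance r_sun_m [m]."""
    return P_SUN * cr * (area / mass) * (AU / r_sun_m) ** 2

# Example: Cr = 1.3, area-to-mass ratio 0.02 m^2/kg, at 1 AU
a = srp_acceleration(1.3, 0.02, 1.0, AU)
```

The acceleration scales only with Cr and the area-to-mass ratio, which is exactly why the model cannot capture attitude- and spin-dependent forces on a flat plate.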
Towards Accurate Prediction of Turbulent, Three-Dimensional, Recirculating Flows with the NCC
NASA Technical Reports Server (NTRS)
Iannetti, A.; Tacina, R.; Jeng, S.-M.; Cai, J.
2001-01-01
The National Combustion Code (NCC) was used to calculate the steady-state, nonreacting flow field of a prototype Lean Direct Injection (LDI) swirler. This configuration used nine groups of eight holes drilled at a thirty-five-degree angle to induce swirl. All nine groups created swirl in the same direction, giving a corotating pattern. The static pressure drop across the holes was fixed at approximately four percent. Computations were performed on one quarter of the geometry, because the geometry is rotationally periodic every ninety degrees. The final computational grid used approximately 2.26 million tetrahedral cells, and a cubic nonlinear k-epsilon model was used to model turbulence. The NCC results were then compared to time-averaged Laser Doppler Velocimetry (LDV) data. The LDV measurements were performed on the full geometry, but only four-ninths of it was measured. One-, two-, and three-dimensional representations of both flow fields are presented. The NCC computations compare well to the LDV data, both qualitatively and quantitatively, though differences exist downstream. The comparison is encouraging and shows that the NCC can be used for future injector design studies. Recommendations are given for improving the flow prediction accuracy of turbulent, three-dimensional, recirculating flow fields with the NCC.
Numerical Simulation of Bolide Entry with Ground Footprint Prediction
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian; Mathias, Donovan L.; Berger, Marsha J.
2016-01-01
As they decelerate through the atmosphere, meteors deposit mass, momentum and energy into the surrounding air at tremendous rates. The entry of such bolides produces strong blast waves that can propagate hundreds of kilometers and cause substantial terrestrial damage even when no ground impact occurs. We present a new simulation technique for airburst blast prediction using a fully conservative, Cartesian-mesh, finite-volume solver and investigate the ability of this method to model far-field propagation over hundreds of kilometers. The work develops mathematical models for the deposition of mass, momentum and energy into the atmosphere, and presents verification and validation through canonical problems and comparison of surface overpressures and blast arrival times with results in the literature for known bolides. The discussion also examines the effects of various approximations to the physics of bolide entry that can substantially decrease the computational expense of these simulations. We present parametric studies to quantify the influence of entry angle, burst height and other parameters on the ground footprint of the airburst, and relate these results to predictions from analytic and handbook methods.
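The deposition models are not spelled out in the abstract; a common single-body formulation (an assumption here, with C_D the drag coefficient, C_H the heat-transfer coefficient, Q the heat of ablation, ρ_a the local air density, A the cross-section and γ the entry angle below horizontal) reads:

```latex
m\frac{dv}{dt} = -\tfrac{1}{2} C_D \rho_a A v^2 + m g \sin\gamma,
\qquad
\frac{dm}{dt} = -\frac{C_H \rho_a A v^3}{2Q}
% Energy deposited per unit altitude (the blast source term),
% using dh/dt = -v sin(gamma):
\qquad
\frac{dE}{dh} = -\frac{1}{v\sin\gamma}
\left(\tfrac{1}{2}\, v^2 \frac{dm}{dt} + m v \frac{dv}{dt}\right)
```

Integrating these equations along the trajectory gives the altitude profile of energy release that drives the airburst blast.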
Numerical simulation of a twin screw expander for performance prediction
NASA Astrophysics Data System (ADS)
Papes, Iva; Degroote, Joris; Vierendeels, Jan
2015-08-01
With the increasing use of twin screw expanders in waste heat recovery applications, the performance prediction of these machines plays an important role. This paper presents a mathematical model for calculating the performance of a twin screw expander. From the mass and energy conservation laws, differential equations are derived and then solved, together with an appropriate equation of state, in the instantaneous control volumes. Different flow processes that occur inside the screw expander, such as filling (accompanied by a substantial pressure loss) and leakage flows through the clearances, are accounted for in the model. The mathematical model employs all geometrical parameters, such as chamber volume and suction and leakage areas. With R245fa as the working fluid, the Aungier Redlich-Kwong equation of state has been used in order to include real gas effects. To calculate the mass flow rates through the leakage paths formed inside the screw expander, flow coefficients are considered constant; they are derived from 3D Computational Fluid Dynamics calculations at given working conditions and applied to all other working conditions. The outcome of the mathematical model is the P-V indicator diagram, which is compared to CFD results for the same twin screw expander. Since CFD calculations require significant computational time, the developed mathematical model can be used for faster performance prediction.
Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke
2015-11-15
attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.
Shakibaee, Abolfazl; Faghihzadeh, Soghrat; Alishiri, Gholam Hossein; Ebrahimpour, Zeynab; Faradjzadeh, Shahram; Sobhani, Vahid; Asgari, Alireza
2015-01-01
Background: Body composition varies according to life style (i.e., caloric intake and caloric expenditure). It is therefore wise to record military personnel's body composition periodically and encourage those who abide by the regulations. Different methods, invasive and non-invasive, have been introduced for body composition assessment; amongst them, the Jackson and Pollock equations are the most popular. Objectives: The recommended anthropometric prediction equations for assessing men's body composition were compared with the dual-energy X-ray absorptiometry (DEXA) gold standard to develop a modified equation for quantitatively assessing body composition and obesity among Iranian military men. Patients and Methods: A total of 101 military men aged 23 - 52 years old with a mean age of 35.5 years were recruited and evaluated in the present study (average height, 173.9 cm and weight, 81.5 kg). The body-fat percentage of each subject was assessed both by anthropometric measurement and by DEXA scan. The data obtained from these two methods were then compared using multiple regression analysis. Results: The mean and standard deviation of the body fat percentage from the DEXA assessment was 21.2 ± 4.3, and the body fat percentages obtained from the Jackson and Pollock 3-, 4- and 7-site equations were 21.1 ± 5.8, 22.2 ± 6.0 and 20.9 ± 5.7, respectively. There was a strong correlation between these three equations and DEXA (R² = 0.98). Conclusions: The mean percentage of body fat obtained from the three Jackson and Pollock equations was very close to that obtained from DEXA; however, we suggest using a modified Jackson-Pollock 3-site equation for military men because the 3-site analysis is simpler and faster than the other methods. PMID:26715964
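As a hedged sketch, the widely published Jackson-Pollock 3-site equation for men referenced above, combined with the Siri density-to-fat conversion, can be coded directly. The coefficients below are the commonly cited published values, not the modified equation derived in this study; verify against the primary source before any practical use.

```python
def jackson_pollock_3site_men(sum_mm, age_yr):
    """Percent body fat for men from the Jackson-Pollock 3-site
    equation (sum of chest, abdomen and thigh skinfolds in mm),
    converting body density to %BF with the Siri equation.
    Coefficients are the commonly published JP-3 values."""
    db = (1.10938
          - 0.0008267 * sum_mm
          + 0.0000016 * sum_mm ** 2
          - 0.0002574 * age_yr)   # body density [g/cm^3]
    return 495.0 / db - 450.0     # Siri conversion to %BF

# Example: 60 mm total skinfolds, age 30 -> roughly 18% body fat
bf = jackson_pollock_3site_men(60.0, 30.0)
```

A "modified" equation of the kind the study proposes would keep this structure and refit the four coefficients against DEXA data.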
Integrated numerical prediction of atomization process of liquid hydrogen jet
NASA Astrophysics Data System (ADS)
Ishimoto, Jun; Ohira, Katsuhide; Okabayashi, Kazuki; Chitose, Keiko
2008-05-01
The 3-D structure of the liquid atomization behavior of a liquid hydrogen (LH2) jet flow through a pinhole nozzle is numerically investigated and visualized by a new type of integrated simulation technique. The present computational fluid dynamics (CFD) analysis focuses on the thermodynamic effect on the consecutive breakup of the cryogenic liquid column, the formation of a liquid film, and the generation of droplets in the outlet section of the pinhole nozzle. Utilizing the governing equations for a high-speed turbulent cryogenic jet flow through a pinhole nozzle, based on the thermal nonequilibrium LES-VOF model in conjunction with the CSF model, an integrated parallel computation is performed to clarify the detailed atomization process of a high-speed LH2 jet flow and to acquire data that are difficult to obtain by experiment, such as atomization length, liquid core shape, droplet-size distribution, spray angle, droplet velocity profiles, and the thermal field surrounding the atomizing jet flow. According to the present computation, the cryogenic atomization rate and the LH2 droplet-gas two-phase flow characteristics are found to be controlled by the turbulence perturbation upstream of the pinhole nozzle, hydrodynamic instabilities at the gas-liquid interface, and shear stress between the liquid core and the periphery of the LH2 jet. Furthermore, calculation of the effect of cryogenic atomization on the jet thermal field shows that such atomization extensively enhances the thermal diffusion surrounding the LH2 jet flow.
Analytical and numerical predictions of dendritic grain envelopes
Gandin, C.A.; Rappaz, M.; Schaefer, R.J.
1996-08-01
An analytical model is developed for the prediction of the shape of dendritic grain envelopes during solidification of a metallic alloy in a Bridgman configuration (i.e., constant thermal gradient and cooling rate). The assumptions built into the model allow a direct comparison of the results with those obtained from a previously developed cellular automaton-finite element (CAFE) model. After this comparison, the CAFE model is applied to the study of the extension of a single grain into an open region of liquid after passing a re-entrant corner. The simulation results are compared with experimental observations made on a directionally solidified succinonitrile-acetone alloy. Good agreement is found for the shape of the grain envelopes when varying the orientation of the primary dendrites with respect to the thermal gradient direction, the velocity of the isotherms, or the thermal gradient.
A numerical hemodynamic tool for predictive vascular surgery.
Marchandise, Emilie; Willemet, Marie; Lacroix, Valérie
2009-01-01
We suggest a new approach to peripheral vascular bypass surgery planning based on solving the one-dimensional (1D) governing equations of blood flow in patient-specific models. The aim of the present paper is twofold. First, we present the coupled 1D-0D model, based on a discontinuous Galerkin method, in a comprehensive manner, so that it becomes accessible to a wider community than that of mathematicians and engineers. We then show how this model can be applied to predict hemodynamic parameters and therefore help clinicians choose the surgical option that best improves the hemodynamics of the bypass. After presenting some benchmark problems, we apply our model to a real-life clinical application, a femoro-popliteal bypass surgery. Our model shows good agreement with preoperative and intraoperative measurements of velocity and pressure and with post-surgical reports.
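The 1D governing equations the abstract refers to are, in their commonly used form (a standard formulation assumed here rather than quoted from the paper; A is the lumen cross-sectional area, Q the flow rate, α the momentum-flux correction, K_R a viscous friction parameter and β an elastic wall coefficient):

```latex
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0,
\qquad
\frac{\partial Q}{\partial t}
+ \frac{\partial}{\partial x}\!\left(\alpha \frac{Q^2}{A}\right)
+ \frac{A}{\rho}\frac{\partial p}{\partial x}
= -K_R \frac{Q}{A},
\qquad
p = p_{ext} + \beta\left(\sqrt{A} - \sqrt{A_0}\right)
```

The 0D part of a coupled 1D-0D model typically supplies lumped-parameter (e.g., Windkessel) outflow boundary conditions closing this hyperbolic system.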
Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref. 1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC (which allows the incorporation of complex local inelastic constitutive models), MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can be, and have been, built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled accurate modeling of the deformation, failure, and life of titanium matrix composites.
Use of medium-range numerical weather prediction model output to produce forecasts of streamflow
Clark, M.P.; Hay, L.E.
2004-01-01
This paper examines an archive containing over 40 years of 8-day atmospheric forecasts over the contiguous United States from the NCEP reanalysis project to assess the possibilities for using medium-range numerical weather prediction model output for predictions of streamflow. This analysis shows the biases in the NCEP forecasts to be quite extreme. In many regions, systematic precipitation biases exceed 100% of the mean, with temperature biases exceeding 3 °C. In some locations, biases are even higher. The accuracy of NCEP precipitation and 2-m maximum temperature forecasts is computed by interpolating the NCEP model output for each forecast day to the location of each station in the NWS cooperative network and computing the correlation with station observations. Results show that the accuracy of the NCEP forecasts is rather low in many areas of the country. Most apparent is the generally low skill in precipitation forecasts (particularly in July) and low skill in temperature forecasts in the western United States, the eastern seaboard, and the southern tier of states. These results outline a clear need for additional processing of the NCEP Medium-Range Forecast Model (MRF) output before it is used for hydrologic predictions. Techniques of model output statistics (MOS) are used in this paper to downscale the NCEP forecasts to station locations. Forecasted atmospheric variables (e.g., total column precipitable water, 2-m air temperature) are used as predictors in a forward screening multiple linear regression model to improve forecasts of precipitation and temperature for stations in the National Weather Service cooperative network. This procedure effectively removes all systematic biases in the raw NCEP precipitation and temperature forecasts. MOS guidance also results in substantial improvements in the accuracy of maximum and minimum temperature forecasts throughout the country. For precipitation, forecast improvements were less impressive. MOS guidance increases
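The forward-screening multiple linear regression described above can be sketched as follows. This is a hedged illustration: the predictor-selection rule, variable names and synthetic data are assumptions, not the NCEP/MOS operational code.

```python
import numpy as np

def forward_screen(X, y, max_terms=3):
    """Toy MOS-style forward screening regression: at each step add the
    predictor column most correlated with the current residual, then
    refit all chosen predictors by ordinary least squares."""
    n, p = X.shape
    chosen = []
    coef = np.array([float(y.mean())])
    resid = y - y.mean()
    for _ in range(max_terms):
        corrs = [0.0 if j in chosen else
                 abs(np.corrcoef(X[:, j], resid)[0, 1]) for j in range(p)]
        j = int(np.argmax(corrs))
        if corrs[j] < 1e-12:
            break
        chosen.append(j)
        A = np.column_stack([np.ones(n), X[:, chosen]])  # design matrix
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
    return chosen, coef

# Synthetic "forecast" predictors: only columns 1 and 3 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 1] - 1.0 * X[:, 3] + 0.1 * rng.normal(size=200)
sel, beta = forward_screen(X, y)
```

Because each refit includes an intercept, the procedure also removes the systematic bias of the raw predictors, which is the key property noted in the abstract.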
Staranowicz, Aaron N; Ray, Christopher; Mariottini, Gian-Luca
2015-01-01
Falls are the most common cause of unintentional injury and death in older adults. Many clinics, hospitals, and health-care providers are urgently seeking accurate, low-cost, and easy-to-use technology to predict falls before they happen, e.g., by monitoring the human walking pattern (or "gait"). Despite the wide popularity of Microsoft's Kinect and the plethora of solutions for gait monitoring, no strategy has been proposed to date to allow non-expert users to calibrate the cameras, which is essential to accurately fuse the body motion observed by each camera into a single frame of reference. In this paper, we present a novel multi-Kinect calibration algorithm that has advanced features compared to existing methods: 1) it is easy to use, 2) it can be used in any generic Kinect arrangement, and 3) it provides accurate calibration. Extensive real-world experiments have been conducted to validate our algorithm and to compare its performance against other multi-Kinect calibration approaches, especially to show the improved estimation of gait parameters. Finally, a MATLAB Toolbox has been made publicly available to the entire research community.
NASA Astrophysics Data System (ADS)
Russo, A.; Zuccarello, B.
2007-07-01
The paper presents a hybrid theoretical-numerical method for determining the stress distribution in composite laminates containing a circular hole and subjected to uniaxial tensile loading. The method is based upon an appropriate corrective function allowing a simple and rapid evaluation of stress distributions in a generic plate of finite width with a hole, starting from the theoretical stress distribution in an infinite plate with the same hole geometry and material. In order to verify the accuracy of the proposed method, various numerical and experimental tests have been performed considering different laminate lay-ups; in particular, the experimental results have shown that a combined use of the proposed method and the well-known point-stress criterion leads to reliable strength predictions for GFRP or CFRP laminates with a circular hole.
An Analysis of Numerical Weather Prediction of the Diabatic Rossby Vortex
2014-06-01
Master's thesis by Matthew W. McKenzie, Naval Postgraduate School, Monterey, CA 93943, June 2014. Thesis advisor: Richard W. Moore. Approved for public release; distribution is unlimited.
Hourihan, Kathleen L.; Benjamin, Aaron S.; Liu, Xiping
2012-01-01
The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness’s claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness. PMID:23162788
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed.
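A Weibull-type conversion curve of the kind described above can be fitted with a simple linearization. This is a hedged sketch: the functional form y(t) = 1 - exp(-(t/λ)^n) is the standard two-parameter Weibull cumulative curve, assumed here rather than copied from the paper, and the data are synthetic.

```python
import math

def fit_weibull(ts, ys):
    """Fit y(t) = 1 - exp(-(t/lam)**n) to fractional-conversion data
    by linearizing ln(-ln(1 - y)) = n*ln(t) - n*ln(lam) and solving
    the resulting ordinary least squares problem by hand."""
    xs = [math.log(t) for t in ts]
    zs = [math.log(-math.log(1.0 - y)) for y in ys]
    m = len(xs)
    mx = sum(xs) / m
    mz = sum(zs) / m
    slope = (sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
             / sum((x - mx) ** 2 for x in xs))
    intercept = mz - slope * mx
    n = slope                         # shape parameter
    lam = math.exp(-intercept / n)    # characteristic time
    return lam, n

# Synthetic saccharification data generated with lam=10, n=0.8:
ts = [1, 2, 5, 10, 20, 50]
ys = [1 - math.exp(-(t / 10.0) ** 0.8) for t in ts]
lam, n = fit_weibull(ts, ys)
```

At t = λ the conversion reaches 1 - 1/e ≈ 63.2% regardless of n, which is one way to read λ as the "characteristic time" of the saccharification system.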
Prediction of the Spring-back Calculated with Numerical Simulations for the Household Industry
NASA Astrophysics Data System (ADS)
Volk, Mihael; Deželak, Mihael; Nardin, Blaž; Stepišnik, Stanko
2011-08-01
Within industry, spring-back is a problem which has to be considered during the manufacturing of tools. Over the last few years this has become an even bigger problem because of more complex drawn products and the introduction of new materials. The shapes of products depend significantly on the spring-back effect; therefore, an accurate prediction of spring-back plays a significant role during a product's development process. Spring-back in sheet-metal forming can be described as the change in the sheet metal's shape, compared with the shape of the tools, after the forming process. Spring-back is a difficult problem to solve with numerical computer analyses, especially when the tolerances of the component parts are very narrow. This research work presents the effects of different holding systems on the spring-back effect in sheet-metal deep-drawing processes. A conventional holding system and a segmented holding system with distributed blank holder forces (BHF) were evaluated. Analyses were carried out using computer finite element simulations with the Pam-Stamp software package. The results were compared and conclusions are given.
Correa da Rosa, Joel; Kim, Jaehwan; Tian, Suyan; Tomalin, Lewis E; Krueger, James G; Suárez-Fariñas, Mayte
2017-02-01
There is an "assessment gap" between the moment a patient's response to treatment is biologically determined and when a response can actually be determined clinically. Patients' biochemical profiles are a major determinant of clinical outcome for a given treatment. It is therefore feasible that molecular-level patient information could be used to decrease the assessment gap. Thanks to clinically accessible biopsy samples, high-quality molecular data for psoriasis patients are widely available. Psoriasis is therefore an excellent disease for testing the prospect of predicting treatment outcome from molecular data. Our study shows that gene-expression profiles of psoriasis skin lesions, taken in the first 4 weeks of treatment, can be used to accurately predict (>80% area under the receiver operating characteristic curve) the clinical endpoint at 12 weeks. This could decrease the psoriasis assessment gap by 2 months. We present two distinct prediction modes: a universal predictor, aimed at forecasting the efficacy of untested drugs, and specific predictors aimed at forecasting clinical response to treatment with four specific drugs: etanercept, ustekinumab, adalimumab, and methotrexate. We also develop two forms of prediction: one from detailed, platform-specific data and one from platform-independent, pathway-based data. We show that key biomarkers are associated with responses to drugs and doses and thus provide insight into the biology of pathogenesis reversion.
TURBULENT LINEWIDTHS IN PROTOPLANETARY DISKS: PREDICTIONS FROM NUMERICAL SIMULATIONS
Simon, Jacob B.; Beckwith, Kris; Armitage, Philip J.
2011-12-10
Submillimeter observations of protoplanetary disks now approach the acuity needed to measure the turbulent broadening of molecular lines. These measurements constrain disk angular momentum transport, and furnish evidence of the turbulent environment within which planetesimal formation takes place. We use local magnetohydrodynamic (MHD) simulations of the magnetorotational instability (MRI) to predict the distribution of turbulent velocities in low-mass protoplanetary disks, as a function of radius and height above the mid-plane. We model both ideal MHD disks and disks in which Ohmic dissipation results in a dead zone of suppressed turbulence near the mid-plane. Under ideal conditions, the disk mid-plane is characterized by a velocity distribution that peaks near v ≈ 0.1 c_s (where c_s is the local sound speed), while supersonic velocities are reached at z > 3H (where H is the vertical pressure scale height). Residual velocities of v ≈ 10^-2 c_s persist near the mid-plane in dead zones, while the surface layers remain active. Anisotropic variation of the linewidth with disk inclination is modest. We compare our MHD results to hydrodynamic simulations in which large-scale forcing is used to initiate similar turbulent velocities. We show that the qualitative trend of increasing v with height, seen in the MHD case, persists for forced turbulence and is likely a generic property of disk turbulence. Percentage-level determinations of v at different heights within the disk, or spatially resolved observations that probe the inner disk containing the dead zone region, are therefore needed to test whether the MRI is responsible for protoplanetary disk turbulence.
NASA Astrophysics Data System (ADS)
Nievinski, F. G.; Santos, M.
2006-05-01
We have been investigating the prediction of radio propagation delays due to the neutral atmosphere via ray-tracing in Numerical Weather Prediction models (NWP), aiming at improving kinematic positioning on medium-distance baselines. In this article we describe the developments in our ray-tracer since our latest publication (Nievinski et al., 2005). In our previous work we indicated the need to further investigate the transformation from line-of-sight distance to geopotential height, because we suspected it could be introducing biases at the centimetre level in the predicted delays. We tested seven different formulas. To validate that transformation of the vertical coordinate, we compared NWP-interpolated pressure values against pressure values measured at North American stations. We came up with two formulas that give better results than the one we used before, one of which is both more accurate and faster than the previous one. Using this new formula we were able to reduce the bias in pressure to the millimetre level (converting from pressure to hydrostatic delay, for easier interpretation). To complete the validation of the transformation to the NWP coordinate space, we investigated the horizontal coordinates as well. We did so by comparing the shorelines inferred from the NWP ground geopotential height field against those taken from a high-resolution vector database. We found unexpected discrepancies at the kilometer level (in a 15 to 20 km resolution model), due to different interpretations of the earth models used by the NWP-producing agency. Those discrepancies are critical in coastal and high-slope areas, where the horizontal gradients of the weather parameters (e.g., pressure) are especially high. From these two validations, we conclude that we should prefer to be consistent with the formulas used in the generation of the NWP, instead of using arguably more rigorous ones (from a geodetic point of view). In the past we have analyzed only short (1
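The pressure-to-hydrostatic-delay conversion mentioned above is commonly done with the Saastamoinen/Davis zenith hydrostatic delay model; this is an assumption here, since the abstract does not state which formula was used. With surface pressure P in hPa, latitude φ and orthometric height H in km, the delay in metres is:

```latex
\mathrm{ZHD} \;=\; \frac{0.0022768\, P}{1 - 0.00266\cos 2\varphi - 0.00028\, H}
```

A 1 hPa pressure bias thus corresponds to roughly 2.3 mm of zenith hydrostatic delay, which is the sense in which a millimetre-level pressure bias can be quoted as a delay.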
NASA Astrophysics Data System (ADS)
Hughes, Timothy J.; Kandathil, Shaun M.; Popelier, Paul L. A.
2015-02-01
As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments, up to the hexadecupole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted to within 1 kJ mol^-1, decreasing to 60-70% of test cases for the larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol^-1.
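The kriging step can be illustrated with a toy zero-mean Gaussian-process interpolator. This is a hedged sketch only: the Gaussian kernel, jitter, hyperparameters and the scalar function being interpolated are illustrative assumptions, not the atomic-multipole models of the paper.

```python
import numpy as np

def kriging_predict(X, y, x_star, length=1.0, sigma2=1.0):
    """Simple (zero-mean) kriging prediction at x_star from training
    inputs X and targets y, using a Gaussian covariance kernel."""
    def k(a, b):
        d2 = np.sum((a - b) ** 2)
        return sigma2 * np.exp(-d2 / (2.0 * length ** 2))
    K = np.array([[k(xi, xj) for xj in X] for xi in X])   # Gram matrix
    ks = np.array([k(x_star, xi) for xi in X])            # cross-covariances
    w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), ks)   # kriging weights
    return float(w @ y)

# Toy data: samples of f(x) = x**2; kriging interpolates training points.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 4.0])
pred = kriging_predict(X, y, np.array([1.0]))
```

In the paper's setting the inputs would be nuclear-coordinate features and the targets multipole moments, with kernel hyperparameters tuned on the training set.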
Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.
2016-01-01
The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally—a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592
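The kinetic picture described above, a two-state domain relaxing toward its folded state while the ribosome dwells at each codon, can be sketched as follows. The rates, codon counts, and the index at which the domain becomes foldable are made-up illustrations, not the paper's fitted values.

```python
# Hedged sketch: P(folded) after each codon for a two-state domain with
# bulk folding/unfolding rates kF, kU and per-codon dwell times 1/k_trans.
import math

def cotranslational_folding_curve(k_fold, k_unfold, codon_rates, first_foldable=0):
    """Return the probability the domain is folded after translating each codon.

    Between elongation steps the two-state system relaxes toward its
    equilibrium folded fraction with relaxation rate k_fold + k_unfold.
    """
    p = 0.0                      # domain starts unfolded
    k_relax = k_fold + k_unfold
    p_eq = k_fold / k_relax      # equilibrium folded probability
    curve = []
    for i, k_trans in enumerate(codon_rates):
        dwell = 1.0 / k_trans
        if i >= first_foldable:  # domain folds only once it has emerged
            p = p_eq + (p - p_eq) * math.exp(-k_relax * dwell)
        curve.append(p)
    return curve

# Slower codons (smaller rates) give the domain more time to fold,
# mimicking the effect of synonymous codon substitutions:
fast = cotranslational_folding_curve(1.0, 0.1, [20.0] * 30, first_foldable=10)
slow = cotranslational_folding_curve(1.0, 0.1, [2.0] * 30, first_foldable=10)
```

Comparing the two curves shows how slowing elongation can shift a domain from post-translational toward co-translational folding, the switch discussed in the abstract.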
NASA Astrophysics Data System (ADS)
Chaljub, E. O.; Bard, P.; Tsuno, S.; Kristek, J.; Moczo, P.; Franek, P.; Hollender, F.; Manakou, M.; Raptakis, D.; Pitilakis, K.
2009-12-01
During the last decades, an important effort has been dedicated to develop accurate and computationally efficient numerical methods to predict earthquake ground motion in heterogeneous 3D media. The progress in methods and increasing capability of computers have made it technically feasible to calculate realistic seismograms for frequencies of interest in seismic design applications. In order to foster the use of numerical simulation in practical prediction, it is important to (1) evaluate the accuracy of current numerical methods when applied to realistic 3D applications where no reference solution exists (verification) and (2) quantify the agreement between recorded and numerically simulated earthquake ground motion (validation). Here we report the results of the Euroseistest verification and validation project - an ongoing international collaborative work organized jointly by the Aristotle University of Thessaloniki, Greece, the Cashima research project (supported by the French nuclear agency, CEA, and the Laue-Langevin institute, ILL, Grenoble), and the Joseph Fourier University, Grenoble, France. The project involves more than 10 international teams from Europe, Japan and USA. The teams employ the Finite Difference Method (FDM), the Finite Element Method (FEM), the Global Pseudospectral Method (GPSM), the Spectral Element Method (SEM) and the Discrete Element Method (DEM). The project makes use of a new detailed 3D model of the Mygdonian basin (about 5 km wide, 15 km long, sediments reach about 400 m depth, surface S-wave velocity is 200 m/s). The prime target is to simulate 8 local earthquakes with magnitude from 3 to 5. In the verification, numerical predictions for frequencies up to 4 Hz for a series of models with increasing structural and rheological complexity are analyzed and compared using quantitative time-frequency goodness-of-fit criteria. Predictions obtained by one FDM team and the SEM team are close and different from other predictions
TIMP2•IGFBP7 biomarker panel accurately predicts acute kidney injury in high-risk surgical patients
Gunnerson, Kyle J.; Shaw, Andrew D.; Chawla, Lakhmir S.; Bihorac, Azra; Al-Khafaji, Ali; Kashani, Kianoush; Lissauer, Matthew; Shi, Jing; Walker, Michael G.; Kellum, John A.
2016-01-01
BACKGROUND Acute kidney injury (AKI) is an important complication in surgical patients. Existing biomarkers and clinical prediction models underestimate the risk for developing AKI. We recently reported data from two trials of 728 and 408 critically ill adult patients in whom urinary TIMP2•IGFBP7 (NephroCheck, Astute Medical) was used to identify patients at risk of developing AKI. Here we report a preplanned analysis of surgical patients from both trials to assess whether urinary tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor–binding protein 7 (IGFBP7) accurately identify surgical patients at risk of developing AKI. STUDY DESIGN We enrolled adult surgical patients at risk for AKI who were admitted to one of 39 intensive care units across Europe and North America. The primary end point was moderate-severe AKI (equivalent to KDIGO [Kidney Disease Improving Global Outcomes] stages 2–3) within 12 hours of enrollment. Biomarker performance was assessed using the area under the receiver operating characteristic curve, integrated discrimination improvement, and category-free net reclassification improvement. RESULTS A total of 375 patients were included in the final analysis of whom 35 (9%) developed moderate-severe AKI within 12 hours. The area under the receiver operating characteristic curve for [TIMP-2]•[IGFBP7] alone was 0.84 (95% confidence interval, 0.76–0.90; p < 0.0001). Biomarker performance was robust in sensitivity analysis across predefined subgroups (urgency and type of surgery). CONCLUSION For postoperative surgical intensive care unit patients, a single urinary TIMP2•IGFBP7 test accurately identified patients at risk for developing AKI within the ensuing 12 hours and its inclusion in clinical risk prediction models significantly enhances their performance. LEVEL OF EVIDENCE Prognostic study, level I. PMID:26816218
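The headline metric above, the area under the ROC curve, can be computed rank-based (equivalently, as a normalized Mann-Whitney U statistic) directly from biomarker scores. The scores and labels below are synthetic illustrations, not study data; the study itself reported an AUC of 0.84 for [TIMP-2]•[IGFBP7].

```python
# Rank-based AUROC: the probability that a randomly chosen positive case
# scores higher than a randomly chosen negative case (ties count 0.5).
def auroc(scores, labels):
    """AUC = P(score of a random positive > score of a random negative)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic biomarker scores; label 1 = developed moderate-severe AKI:
scores = [0.9, 0.8, 0.75, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0]
auc = auroc(scores, labels)
```

An AUC of 0.5 is chance; 1.0 is perfect separation of AKI from non-AKI patients.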
Ydreborg, Magdalena; Lisovskaja, Vera; Lagging, Martin; Brehm Christensen, Peer; Langeland, Nina; Buhl, Mads Rauning; Pedersen, Court; Mørch, Kristine; Wejstål, Rune; Norkrans, Gunnar; Lindh, Magnus; Färkkilä, Martti; Westin, Johan
2014-01-01
Diagnosis of liver cirrhosis is essential in the management of chronic hepatitis C virus (HCV) infection. Liver biopsy is invasive and thus entails a risk of complications as well as a potential risk of sampling error. Therefore, non-invasive diagnostic tools are preferable. The aim of the present study was to create a model for accurate prediction of liver cirrhosis based on patient characteristics and biomarkers of liver fibrosis, including a panel of non-cholesterol sterols reflecting cholesterol synthesis, absorption and secretion. We evaluated variables with potential predictive significance for liver fibrosis in 278 patients originally included in a multicenter phase III treatment trial for chronic HCV infection. A stepwise multivariate logistic model selection was performed with liver cirrhosis, defined as Ishak fibrosis stage 5-6, as the outcome variable. A new index, referred to as the Nordic Liver Index (NoLI) in the paper, was based on the model: Log-odds (predicting cirrhosis) = -12.17 + (age × 0.11) + (BMI (kg/m²) × 0.23) + (Δ7-lathosterol (μg/100 mg cholesterol) × (-0.013)) + (platelet count (×10⁹/L) × (-0.018)) + (prothrombin-INR × 3.69). The area under the ROC curve (AUROC) for prediction of cirrhosis was 0.91 (95% CI 0.86-0.96). The index was validated in a separate cohort of 83 patients and the AUROC for this cohort was similar (0.90; 95% CI: 0.82-0.98). In conclusion, the new index may complement other methods in diagnosing cirrhosis in patients with chronic HCV infection.
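The NoLI formula above is explicit, so it can be evaluated directly; the logistic transform then turns the log-odds into a cirrhosis probability. The patient values in the example are hypothetical, chosen only to exercise the formula.

```python
# Nordic Liver Index (NoLI) log-odds, with the coefficients stated above.
import math

def noli_log_odds(age, bmi, lathosterol, platelets, inr):
    """NoLI log-odds for cirrhosis.

    age in years, bmi in kg/m^2, lathosterol in ug per 100 mg cholesterol,
    platelet count in 10^9/L, prothrombin-INR dimensionless.
    """
    return (-12.17 + 0.11 * age + 0.23 * bmi
            - 0.013 * lathosterol - 0.018 * platelets + 3.69 * inr)

def noli_probability(*args):
    """Logistic transform of the log-odds into a cirrhosis probability."""
    return 1.0 / (1.0 + math.exp(-noli_log_odds(*args)))

# Hypothetical patient for illustration (values are not from the study):
prob = noli_probability(55, 26.0, 30.0, 150, 1.2)
```

Higher age, BMI and INR push the index up, while higher lathosterol and platelet counts push it down, matching the signs of the fitted coefficients.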
NASA Astrophysics Data System (ADS)
Margelowsky, G.; Foster, D.; Traykovski, P.; Felzenberg, J. A.
2010-12-01
The dynamics of wave-current and tidal flow bottom boundary layers are evaluated with a quasi-three-dimensional non-hydrostatic phase-resolving wave-current bottom boundary layer model, Dune. In each case, the model is evaluated with field observations of velocity profiles and seabed geometry. For wave-current boundary layers, the observations were obtained over a 26-day period in 13 m of water at the Martha's Vineyard Coastal Observatory (MVCO, Edgartown, MA) in 2002-2003. Bedforms were orbital-scale ripples with wavelengths of 50-125 cm and heights of 5-20 cm, with peak root-mean-square orbital velocities and mean flows typically ranging from 50-70 cm/s and 10-20 cm/s, respectively. The observations for tidal flows were obtained over a 3-day period in 13-16 m of water in Portsmouth Harbor (Portsmouth, NH) in 2008. Bedforms were dunes with wavelengths on the order of 1 m and heights on the order of 10 cm, with typical peak tidal currents of approximately 1 m/s. The flow field is simulated with a finite volume approach to solve the Reynolds-averaged Navier-Stokes equations with a k-ω second-order turbulence closure scheme. The model simulations are performed for a range of theoretical and observed bedforms to examine the boundary layer sensitivity to the resolution of the bottom roughness. The observed and predicted vertical velocity profiles are evaluated with correlations and Brier skill scores over the range of data sets.
Numerical weather prediction in China in the new century—Progress, problems and prospects
NASA Astrophysics Data System (ADS)
Xue, Jishan; Liu, Yan
2007-11-01
This paper summarizes the recent progress of numerical weather prediction (NWP) research since the last review was published. The new generation NWP system named GRAPES (the Global and Regional Assimilation and Prediction System), which consists of variational or sequential data assimilation and nonhydrostatic prediction model with options of configuration for either global or regional domains, is briefly introduced, with stress on their scientific design and preliminary results during pre-operational implementation. In addition to the development of GRAPES, the achievements in new methodologies of data assimilation, new improvements of model physics such as parameterization of clouds and planetary boundary layer, mesoscale ensemble prediction system and numerical prediction of air quality are presented. The scientific issues which should be emphasized for the future are discussed finally.
Evaluating the use of high-resolution numerical weather forecast for debris flow prediction.
NASA Astrophysics Data System (ADS)
Nikolopoulos, Efthymios I.; Bartsotas, Nikolaos S.; Borga, Marco; Kallos, George
2015-04-01
The sudden occurrence of debris flows, combined with their high destructive power, poses a significant threat to human life and infrastructure. Therefore, developing early warning procedures for the mitigation of debris-flow risk is of great economic and societal importance. Given that rainfall is the predominant factor controlling debris flow triggering, it is indisputable that the development of effective debris-flow warning procedures requires accurate knowledge of the properties (e.g. duration, intensity) of the triggering rainfall. Moreover, efficient and timely response of emergency operations depends highly on the lead time provided by the warning systems. Currently, the majority of early warning systems for debris flows are based on nowcasting procedures. While the latter may be successful in predicting the hazard, they provide warnings with a relatively short lead time (~6 h). Increasing the lead time is necessary in order to improve pre-incident operations and communication of the emergency; thus coupling warning systems with weather forecasting is essential for advancing early warning procedures. In this work we evaluate the potential of using high-resolution (1 km) rainfall fields forecasted with a state-of-the-art numerical weather prediction model (RAMS/ICLAMS) in order to predict the occurrence of debris flows. The analysis is focused on the Upper Adige region, Northeast Italy, an area where debris flows are frequent. Seven storm events that generated a large number (>80) of debris flows during the period 2007-2012 are analyzed. Radar-based rainfall estimates, available from the operational C-band radar located at Mt Macaion, are used as the reference to evaluate the forecasted rainfall fields. The evaluation is mainly focused on assessing the error in forecasted rainfall properties (magnitude, duration) and the correlation in space and time with the reference field. Results show that the forecasted rainfall fields captured very well the magnitude and
NASA Astrophysics Data System (ADS)
Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S.; Shirley, Eric L.; Prendergast, David
2017-03-01
Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for the chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predicting x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.
NASA Technical Reports Server (NTRS)
Thomas, P. D.
1980-01-01
A computer implemented numerical method for predicting the flow in and about an isolated three dimensional jet exhaust nozzle is summarized. The approach is based on an implicit numerical method to solve the unsteady Navier-Stokes equations in a boundary conforming curvilinear coordinate system. Recent improvements to the original numerical algorithm are summarized. Equations are given for evaluating nozzle thrust and discharge coefficient in terms of computed flowfield data. The final formulation of models that are used to simulate flow turbulence effect is presented. Results are presented from numerical experiments to explore the effect of various quantities on the rate of convergence to steady state and on the final flowfield solution. Detailed flowfield predictions for several two and three dimensional nozzle configurations are presented and compared with wind tunnel experimental data.
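The discharge-coefficient evaluation mentioned above compares the mass flow integrated from the computed flowfield against the ideal choked isentropic value for the same nozzle. The isentropic formula below is the standard one; the stagnation state, throat area, and the assumed computed mass flow are example numbers, not results from the report.

```python
# Discharge coefficient Cd = computed mass flow / ideal choked mass flow.
import math

def ideal_choked_mdot(p0, T0, A_throat, gamma=1.4, R=287.0):
    """Ideal (isentropic, choked) mass flow for stagnation state (p0, T0)."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return p0 * A_throat * math.sqrt(gamma / (R * T0)) * term

def discharge_coefficient(mdot_computed, p0, T0, A_throat):
    """Cd from flowfield data; real nozzles give values just below 1."""
    return mdot_computed / ideal_choked_mdot(p0, T0, A_throat)

# Example: air at p0 = 300 kPa, T0 = 300 K through a 0.01 m^2 throat,
# with a computed mass flow assumed 3% below ideal:
mdot_ideal = ideal_choked_mdot(3.0e5, 300.0, 0.01)
cd = discharge_coefficient(0.97 * mdot_ideal, 3.0e5, 300.0, 0.01)
```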
Numerical Prediction of Fatigue Damage Progress in Holed CFRP Laminates Using Cohesive Elements
NASA Astrophysics Data System (ADS)
Yashiro, Shigeki; Okabe, Tomonaga
This study presents a numerical simulation to predict damage progress in notched composite laminates under cyclic loading by using a cohesive zone model. A damage-mechanics concept was introduced directly into the fracture process in the cohesive elements in order to express crack growth under cyclic loading. This approach conforms to established damage mechanics, makes the procedure easier to understand, and reduces computation costs. We numerically investigated the damage progress in holed CFRP cross-ply laminates under tensile cyclic loading and compared the predicted damage patterns with experimental results. The predicted damage patterns agreed with the experimental results, which exhibited the extension of multiple types of damage (splits, transverse cracks, and delamination) near the hole. A numerical study indicated that the change in the distribution of in-plane shear stress due to delamination induced the extension of splits and transverse cracks near the hole.
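The cyclic cohesive-damage idea described above can be sketched as a damage variable D in [0, 1] that degrades a linear traction-separation law and is incremented once per load cycle by a Paris-law-like function of the peak opening. All functional forms and parameter values below are illustrative assumptions, not those used in the study.

```python
# Minimal cyclic cohesive-damage sketch: elements reaching D = 1 have
# fully debonded, which is how crack growth is represented.
def traction(delta, D, k0=1.0e3):
    """Degraded cohesive traction for opening delta at damage level D."""
    return (1.0 - D) * k0 * delta

def cycle_damage(D, delta_max, delta_f=0.05, c=5e-3, m=2.0):
    """Advance damage by one cycle; increment grows with the peak opening."""
    return min(1.0, D + c * (delta_max / delta_f) ** m)

# An element near the hole (large openings) fails long before a distant one:
D_near, D_far, fail_cycle = 0.0, 0.0, None
for n in range(1, 1001):
    D_near = cycle_damage(D_near, 0.04)   # crack-tip element, large opening
    D_far = cycle_damage(D_far, 0.01)     # far-field element, small opening
    if fail_cycle is None and D_near >= 1.0:
        fail_cycle = n                    # element has fully debonded
```

Driving the whole cohesive layer this way, cycle by cycle, is what lets a quasi-static finite element model trace fatigue crack extension without resolving every load reversal.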
First principles predictions of intrinsic defects in aluminum arsenide, AlAs : numerical supplement.
Schultz, Peter Andrew
2012-04-01
This Report presents numerical tables summarizing properties of intrinsic defects in aluminum arsenide, AlAs, as computed by density functional theory. This Report serves as a numerical supplement to the results published in: P.A. Schultz, 'First principles predictions of intrinsic defects in Aluminum Arsenide, AlAs', Materials Research Society Symposia Proceedings 1370 (2011; SAND2011-2436C), and intended for use as reference tables for a defect physics package in device models.
NASA Astrophysics Data System (ADS)
Mukkavilli, S. K.; Kay, M. J.; Taylor, R.; Prasad, A. A.; Troccoli, A.
2014-12-01
The Australian Solar Energy Forecasting System (ASEFS) project requires forecasting timeframes which range from nowcasting to long-term forecasts (minutes to two years). As concentrating solar power (CSP) plant operators are one of the key stakeholders in the national energy market, research and development enhancements for direct normal irradiance (DNI) forecasts are a major subtask. This project involves comparing different radiative scheme codes to improve day-ahead DNI forecasts on the national supercomputing infrastructure, running mesoscale simulations on NOAA's Weather Research and Forecasting (WRF) model. ASEFS also requires aerosol data fusion for an accurate representation of spatio-temporally variable atmospheric aerosols to reduce DNI bias error in clear-sky conditions over southern Queensland & New South Wales, where solar power is vulnerable to uncertainties from frequent aerosol radiative events such as bush fires and desert dust. Initial results from thirteen years of the Bureau of Meteorology's (BOM) deseasonalised DNI and MODIS NASA-Terra aerosol optical depth (AOD) anomalies demonstrated strong negative correlations in north and southeast Australia, along with strong variability in AOD (~0.03-0.05). Radiative transfer schemes and DNI and AOD anomaly correlations will be discussed for the population and transmission-grid centric regions where current and planned CSP plants dispatch electricity to capture peak prices in the market. Aerosol and solar irradiance datasets include satellite and ground-based assimilations from the national BOM, regional aerosol researchers and agencies. The presentation will provide an overview of this ASEFS project task on WRF and results to date. The overall goal of this ASEFS subtask is to develop a hybrid numerical weather prediction (NWP) and statistical/machine-learning multi-model ensemble strategy that meets the future operational requirements of CSP plant operators.
NASA Astrophysics Data System (ADS)
Sandu, Irina; Beljaars, Anton; Bechtold, Peter; Mauritsen, Thorsten; Balsamo, Gianpaolo
2013-06-01
In the 1990s, scientists at European Centre for Medium-Range Weather Forecasts (ECMWF) suggested that artificially enhancing turbulent diffusion in stable conditions improves the representation of two important aspects of weather forecasts, i.e., near-surface temperatures and synoptic cyclones. Since then, this practice has often been used for tuning the large-scale performance of operational numerical weather prediction (NWP) models, although it is widely recognized to be detrimental for an accurate representation of stable boundary layers. Here we investigate why, 20 years on, such a compromise is still needed in the ECMWF model. We find that reduced turbulent diffusion in stable conditions improves the representation of winds in stable boundary layers, but it deteriorates the large-scale flow and the near-surface temperatures. This suggests that enhanced diffusion is still needed to compensate for errors caused by other poorly represented processes. Among these, we identify the orographic drag, which influences the large-scale flow in a similar way to the turbulence closure for stable conditions, and the strength of the land-atmosphere coupling, which partially controls the near-surface temperatures. We also take a closer look at the relationship between the turbulence closure in stable conditions and the large-scale flow, which was not investigated in detail with a global NWP model. We demonstrate that the turbulent diffusion in stable conditions affects the large-scale flow by modulating not only the strength of synoptic cyclones and anticyclones, but also the amplitude of the planetary-scale standing waves.
A New Visibility Parameterization for Warm-Fog Applications in Numerical Weather Prediction Models
NASA Astrophysics Data System (ADS)
Gultepe, I.; Müller, M. D.; Boybeyi, Z.
2006-11-01
The objective of this work is to suggest a new warm-fog visibility parameterization scheme for numerical weather prediction (NWP) models. In situ observations collected during the Radiation and Aerosol Cloud Experiment, representing boundary layer low-level clouds, were used to develop a parameterization scheme between visibility and a combined parameter as a function of both droplet number concentration Nd and liquid water content (LWC). The current NWP models usually use relationships between extinction coefficient and LWC. A newly developed parameterization scheme for visibility, Vis = f(LWC, Nd), is applied to the NOAA Nonhydrostatic Mesoscale Model. In this model, the microphysics of fog was adapted from the 1D Parameterized Fog (PAFOG) model and then was used in the lower 1.5 km of the atmosphere. Simulations for testing the new parameterization scheme are performed in a 50-km innermost-nested simulation domain using a horizontal grid spacing of 1 km centered on Zurich Unique Airport in Switzerland. The simulations over a 10-h time period showed that visibility differences between old and new parameterization schemes can be more than 50%. It is concluded that accurate visibility estimates require skillful LWC as well as Nd estimates from forecasts. Therefore, the current models can significantly over-/underestimate Vis (with more than 50% uncertainty) depending on environmental conditions. Inclusion of Nd as a prognostic (or parameterized) variable in parameterizations would significantly improve the operational forecast models.
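The abstract specifies only that Vis = f(LWC, Nd). As a hedged illustration, the sketch below uses a power-law fit of this form; the constants follow one commonly quoted fit from this line of fog-visibility work and should be treated as assumptions rather than the paper's exact scheme.

```python
# Illustrative Vis = f(LWC, Nd) power law. LWC in g m^-3, Nd in cm^-3,
# visibility in km; constants a and b are assumed, not taken from the paper.
def visibility_km(lwc_g_m3, nd_cm3, a=1.002, b=0.6473):
    """Fog visibility from liquid water content and droplet number."""
    return a / (lwc_g_m3 * nd_cm3) ** b

# Denser fog (higher LWC or more droplets) -> shorter visibility:
v_light = visibility_km(0.05, 50)    # thin fog
v_dense = visibility_km(0.30, 200)   # thick fog
```

Because both LWC and Nd enter the denominator, two fogs with the same LWC but different droplet concentrations yield different visibilities, which is exactly why the abstract argues that LWC-only schemes can err by more than 50%.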
NASA Astrophysics Data System (ADS)
McCormack, J. P.; Allen, D. R.; Coy, L.; Eckermann, S. D.; Stajner, I.
2005-12-01
The Ozone Mapping and Profiler Suite (OMPS) will deliver real-time ozone data for assimilation in numerical weather prediction (NWP) models. This information will benefit forecasts by improving the modeled stratospheric heating rates and providing better first-guess temperature profiles needed for infrared satellite radiance retrieval algorithms. Operational ozone data assimilation for NWP requires a fast, accurate treatment of stratospheric ozone photochemistry. We present results from the new NRL CHEM2D Ozone Photochemistry Parameterization (CHEM2D-OPP), which is based on output from the zonally averaged NRL-CHEM2D middle atmosphere photochemical-transport model. CHEM2D-OPP is a linearized parameterization of gas-phase stratospheric ozone photochemistry developed for NOGAPS-ALPHA, the Navy's prototype global high altitude NWP model. A recent study of NOGAPS-ALPHA ozone simulations found that a preliminary version of the CHEM2D-based photochemistry parameterization generally performed better than other current photochemistry schemes that are now widely used in operational NWP and data assimilation systems. A new, improved version of CHEM2D-OPP is now available. Here we report the first quantitative performance assessments of the updated CHEM2D-OPP package in the NRL Global Ozone Assimilation Testing System (GOATS). This study compares the mean differences between GOATS ozone analyses and SBUV/2 ozone measurements (both vertical profile and total column) during September 2002 using several different ozone photochemistry schemes. We find that CHEM2D-OPP generally delivers the best performance out of all the photochemistry schemes we tested. Future development plans for CHEM2D-OPP, such as interfacing it with a "cold tracer" parameterization for heterogeneous ozone-hole chemistry, will also be presented.
A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals
NASA Technical Reports Server (NTRS)
Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.
1994-01-01
Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.
Majaj, Najib J.; Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
NASA Astrophysics Data System (ADS)
Lau, K.-C.; Ng, C. Y.
2006-01-01
The ionization energies (IEs) for the 2-propyl (2-C3H7), phenyl (C6H5), and benzyl (C6H5CH2) radicals have been calculated by the wave-function-based ab initio CCSD(T)/CBS approach, which involves the approximation to the complete basis set (CBS) limit at the coupled cluster level with single and double excitations plus quasiperturbative triple excitation [CCSD(T)]. The zero-point vibrational energy correction, the core-valence electronic correction, and the scalar relativistic effect correction have been also made in these calculations. Although a precise IE value for the 2-C3H7 radical has not been directly determined before due to the poor Franck-Condon factor for the photoionization transition at the ionization threshold, the experimental value deduced indirectly using other known energetic data is found to be in good accord with the present CCSD(T)/CBS prediction. The comparison between the predicted value through the focal-point analysis and the highly precise experimental value for the IE(C6H5CH2) determined in the previous pulsed field ionization photoelectron (PFI-PE) study shows that the CCSD(T)/CBS method is capable of providing an accurate IE prediction for C6H5CH2, achieving an error limit of 35 meV. The benchmarking of the CCSD(T)/CBS IE(C6H5CH2) prediction suggests that the CCSD(T)/CBS IE(C6H5) prediction obtained here has a similar accuracy of 35 meV. Taking into account this error limit for the CCSD(T)/CBS prediction and the experimental uncertainty, the CCSD(T)/CBS IE(C6H5) value is also consistent with the IE(C6H5) reported in the previous HeI photoelectron measurement. Furthermore, the present study provides support for the conclusion that the CCSD(T)/CBS approach with high-level energy corrections can be used to provide reliable IE predictions for C3-C7 hydrocarbon radicals with an uncertainty of +/-35 meV. Employing the atomization scheme, we have also computed the 0 K (298 K) heats of formation in kJ/mol at the CCSD(T)/CBS level for 2-C3H7
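The CBS limit invoked above is typically reached by extrapolating correlation energies computed in systematically enlarged basis sets. A standard two-point X⁻³ extrapolation over basis-set cardinal numbers X is sketched below; the energies are made-up example values, not numbers from the study.

```python
# Two-point complete-basis-set (CBS) extrapolation: solve
# E(X) = E_CBS + A * X**-3 from two basis-set correlation energies
# (e.g. aug-cc-pVTZ -> X = 3, aug-cc-pVQZ -> X = 4).
def cbs_two_point(e_x, e_y, x, y):
    """Return E_CBS eliminating the A*X**-3 term between two points."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Example correlation energies in hartree (illustrative values only):
e_cbs = cbs_two_point(-0.2450, -0.2560, 3, 4)
```

The extrapolated value lies below the larger-basis result, as expected for a correlation energy approaching its basis-set limit from above.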
Numerical Prediction of Microstructure and Mechanical Properties During the Hot Stamping Process
NASA Astrophysics Data System (ADS)
Kan, Dongbin; Liu, Lizhong; Hu, Ping; Ma, Ning; Shen, Guozhe; Han, Xiaoqiang; Ying, Liang
2011-08-01
Numerical simulation and prediction of the microstructures and mechanical properties of products is very important in the product development of hot stamping parts. With this method, changes to the properties of hot stamping products can be assessed prior to the manufacturing stage, which offers noticeable time and cost savings. In the present work, the hot stamping process of a U-channel with 22MnB5 boron steel is simulated using a coupled thermo-mechanical FEM program. Then, with the temperature evolution results obtained from the simulation, a model is applied to predict the microstructure evolution during the hot stamping process and the mechanical properties of this U-channel. The model consists of a phase transformation model and a mechanical properties prediction model. The phase transformation model proposed by Li et al. is used to predict the austenite decomposition into ferrite, pearlite, and bainite during the cooling process. The diffusionless austenite-martensite transformation is modeled using the Koistinen-Marburger relation. The mechanical properties prediction model is applied to predict the products' hardness distribution. The numerical simulation is evaluated by comparing simulation results with the U-channel hot stamping experiment. The numerically obtained temperature history is broadly in agreement with the corresponding experimental observation. The evaluation indicates the feasibility of this set of methods for guiding the optimization of hot stamping process parameters and the design of hot stamping tools.
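The Koistinen-Marburger relation mentioned above gives the martensite fraction as a function of undercooling below the martensite-start temperature Ms. The rate constant α ≈ 0.011 K⁻¹ is the commonly used value; the Ms value below is an illustrative assumption, not a fitted parameter from the paper.

```python
# Koistinen-Marburger: f_m = 1 - exp(-alpha * (Ms - T)) for T < Ms.
import math

def martensite_fraction(T, Ms=400.0, alpha=0.011):
    """Martensite volume fraction after quenching austenite to T (deg C)."""
    if T >= Ms:
        return 0.0               # no transformation above Ms
    return 1.0 - math.exp(-alpha * (Ms - T))

# Deeper quench -> more martensite:
f_200 = martensite_fraction(200.0)   # quench 200 C below Ms
f_20 = martensite_fraction(20.0)     # quench to room temperature
```

In a hot stamping simulation this relation is evaluated pointwise from the local cooling history, which is what couples the thermal solution to the predicted hardness distribution.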
Arcon, Juan Pablo; Defelipe, Lucas A; Modenutti, Carlos P; López, Elias D; Alvarez-Garcia, Daniel; Barril, Xavier; Turjanski, Adrián G; Martí, Marcelo A
2017-03-31
One of the most important biological processes at the molecular level is the formation of protein-ligand complexes. Therefore, determining their structure and underlying key interactions is of paramount relevance and has direct applications in drug development. Because of its low cost relative to its experimental sibling, molecular dynamics (MD) simulations in the presence of different solvent probes mimicking specific types of interactions have been increasingly used to analyze protein binding sites and reveal protein-ligand interaction hot spots. However, a systematic comparison of different probes and their real predictive power from a quantitative and thermodynamic point of view is still missing. In the present work, we have performed MD simulations of 18 different proteins in pure water as well as water mixtures of ethanol, acetamide, acetonitrile and methylammonium acetate, leading to a total of 5.4 μs simulation time. For each system, we determined the corresponding solvent sites, defined as space regions adjacent to the protein surface where the probability of finding a probe atom is higher than that in the bulk solvent. Finally, we compared the identified solvent sites with 121 different protein-ligand complexes and used them to perform molecular docking and ligand binding free energy estimates. Our results show that combining solely water and ethanol sites allows sampling over 70% of all possible protein-ligand interactions, especially those that coincide with ligand-based pharmacophoric points. Most important, we also show how the solvent sites can be used to significantly improve ligand docking in terms of both accuracy and precision, and that accurate predictions of ligand binding free energies, along with relative ranking of ligand affinity, can be performed.
Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.
2008-07-01
Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
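The descriptor space described above is built from sequence-derived properties. A sketch of a few such descriptors in Python; the property choices and scales below (Kyte-Doolittle hydropathy, a simple net-charge rule) are illustrative stand-ins, not the paper's actual 35-variable set or its trained SVM:

```python
# Kyte-Doolittle hydropathy scale and simple charge/polarity assignments
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}
CHARGE = {"K": 1, "R": 1, "D": -1, "E": -1}   # net charge near pH 7 (H ignored)
POLAR = set("STNQYCKRHDE")

def peptide_descriptors(seq):
    """Sequence-derived descriptors of the kind fed to the SVM:
    length, mean hydropathy, net charge, and polar-residue fraction."""
    n = len(seq)
    return {
        "length": n,
        "mean_hydropathy": sum(KD[a] for a in seq) / n,
        "net_charge": sum(CHARGE.get(a, 0) for a in seq),
        "polar_fraction": sum(a in POLAR for a in seq) / n,
    }

# One descriptor vector per peptide; stacked, these would form the
# training matrix for the proteotypic/non-proteotypic classifier.
d = peptide_descriptors("AEFVEVTK")
```

The SVM then learns a decision boundary in this descriptor space separating peptides observed by MS from those that are not.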
On the potential use of radar-derived information in operational numerical weather prediction
NASA Technical Reports Server (NTRS)
Mcpherson, R. D.
1986-01-01
Estimates of the requirements likely to be levied on a new observing system for mesoscale meteorology are given. Potential observing systems for mesoscale numerical weather prediction are discussed. Thermodynamic profiler radiometers, infrared radiometer atmospheric sounders, Doppler radar wind profilers and surveillance radar, and moisture profilers are among the instruments described.
Three dimensional numerical prediction of two phase flow in industrial CFB boiler
Balzer, G.; Simonin, O.
1997-12-31
Gas-solid two-phase flows are encountered in a number of industrial applications such as pneumatic transport, catalytic cracking, and coal combustors. The paper presents the numerical model of gas-solid flows that has been developed over several years at the Laboratoire National d'Hydraulique of Electricite de France, and its application to the prediction of an industrial CFB boiler.
Analytical and numerical models to predict the behavior of unbonded flexible risers under torsion
NASA Astrophysics Data System (ADS)
Ren, Shao-fei; Xue, Hong-xiang; Tang, Wen-yong
2016-04-01
This paper presents analytical and numerical models to predict the behavior of unbonded flexible risers under torsion. The analytical model takes local bending and torsion of the tensile armor wires into consideration, and equilibrium equations for the forces and displacements of the layers are deduced. The numerical model includes the lay angle, the cross-sectional profiles of the carcass and pressure armor layer, and contact between layers. Abaqus/Explicit quasi-static simulation and mass scaling are adopted to avoid the convergence problems and excessive computation time caused by geometric and contact nonlinearities. Results show that local bending and torsion of the helical strips may have a great influence on torsional stiffness, but the stress related to bending and torsion is negligible; the presence of anti-friction tapes may have a great influence on both torsional stiffness and stress; and the hysteresis of the torsion-twist relationship under cyclic loading is captured by the numerical model, but cannot be predicted by the analytical model because the latter neglects friction between layers.
NASA Astrophysics Data System (ADS)
Ko, P.; Kurosawa, S.
2014-03-01
The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important for design work that enhances turbine performance, including extending the operational life span and improving turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-Averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds Stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results for an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
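The Rayleigh-Plesset equation governing the bubble dynamics can be integrated directly for a single bubble. A minimal sketch with explicit Euler time stepping, assuming a vapour bubble in room-temperature water at atmospheric pressure and neglecting gas content and the paper's specific modifications:

```python
def rayleigh_plesset(R0, dt, steps, p_inf=101325.0, p_b=2330.0,
                     rho=1000.0, mu=1.0e-3, sigma=0.072):
    """Explicit-Euler integration of the classical Rayleigh-Plesset equation
      R*R'' + (3/2)*R'^2 = (p_b - p_inf)/rho - 4*mu*R'/(rho*R) - 2*sigma/(rho*R)
    for a bubble at internal pressure p_b in liquid at far-field pressure
    p_inf. Default properties are water at room temperature."""
    R, Rdot = R0, 0.0
    for _ in range(steps):
        Rddot = ((p_b - p_inf) / rho
                 - 1.5 * Rdot ** 2
                 - 4.0 * mu * Rdot / (rho * R)
                 - 2.0 * sigma / (rho * R)) / R
        Rdot += Rddot * dt
        R += Rdot * dt
        if R <= 0.0:  # collapse reached; stop before the singularity
            break
    return R, Rdot

# A 100-micron vapour bubble at 1 atm far-field pressure begins to collapse.
R, Rdot = rayleigh_plesset(R0=1.0e-4, dt=1.0e-8, steps=500)
```

In a cavitation-capable CFD solver, an equation of this form is evaluated per cell or per bubble to drive the mass transfer between the liquid and vapour phases.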
Zhang, Jin-Feng; Chen, Yao; Lin, Guo-Shi; Zhang, Jian-Dong; Tang, Wen-Long; Huang, Jian-Huang; Chen, Jin-Shou; Wang, Xing-Fu; Lin, Zhi-Xiong
2016-06-01
Interferon-induced protein with tetratricopeptide repeat 1 (IFIT1) plays a key role in growth suppression and apoptosis promotion in cancer cells. Interferon was reported to induce the expression of IFIT1 and inhibit the expression of O-6-methylguanine-DNA methyltransferase (MGMT). This study aimed to investigate the expression of IFIT1, the correlation between IFIT1 and MGMT, and their impact on the clinical outcome in newly diagnosed glioblastoma. The expression of IFIT1 and MGMT and their correlation were investigated in tumor tissues from 70 patients with newly diagnosed glioblastoma. The effects on progression-free survival and overall survival were evaluated. Of 70 cases, 57 (81.4%) tissue samples showed high expression of IFIT1 by immunostaining. The χ(2) test indicated that the expression of IFIT1 and MGMT was negatively correlated (r = -0.288, P = .016). Univariate and multivariate analyses confirmed high IFIT1 expression as a favorable prognostic indicator for progression-free survival (P = .005 and .017) and overall survival (P = .001 and .001), respectively. Patients with 2 favorable factors (high IFIT1 and low MGMT) had an improved prognosis as compared with others. The results demonstrated significantly increased expression of IFIT1 in newly diagnosed glioblastoma tissue. The negative correlation between IFIT1 and MGMT expression may be triggered by interferon. High IFIT1 can be a predictive biomarker of favorable clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma.
A New Objective Technique for Verifying Mesoscale Numerical Weather Prediction Models
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Manobianco, John; Lane, John E.; Immer, Christopher D.
2003-01-01
This report presents a new objective technique to verify predictions of the sea-breeze phenomenon over east-central Florida by the Regional Atmospheric Modeling System (RAMS) mesoscale numerical weather prediction (NWP) model. The Contour Error Map (CEM) technique identifies sea-breeze transition times in objectively-analyzed grids of observed and forecast wind, verifies the forecast sea-breeze transition times against the observed times, and computes the mean post-sea-breeze wind direction and speed to compare the observed and forecast winds behind the sea-breeze front. The CEM technique is superior to traditional objective verification techniques and previously-used subjective verification methodologies because it is automated, requiring little manual intervention; it accounts for both spatial and temporal scales and variations; it accurately identifies and verifies the sea-breeze transition times; and it provides verification contour maps and simple statistical parameters for easy interpretation. The CEM uses a parallel lowpass boxcar filter and a high-order bandpass filter to identify the sea-breeze transition times at the observed and model grid points. Once the transition times are identified, the CEM fits a Gaussian function to the histogram of transition-time differences between the model and observations. The fitted parameters of the Gaussian function explain the timing bias and the variance of the timing differences across the valid comparison domain. Once the transition times are identified at each grid point, the CEM computes the mean wind direction and speed during the remainder of the day for all times and grid points after the sea-breeze transition time. The CEM technique performed quite well when compared to independent meteorological assessments of the sea-breeze transition times and results from a previously published subjective evaluation. The algorithm correctly identified a forecast or observed sea-breeze occurrence
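The timing bias and spread that the CEM extracts from the histogram of transition-time differences can be sketched with method-of-moments estimates (the sample mean and standard deviation), which coincide with the fitted Gaussian parameters for well-behaved histograms; the paper fits the Gaussian to the histogram directly, and the times below are hypothetical:

```python
import statistics

def timing_bias_and_spread(model_times, observed_times):
    """Differences (model minus observed) in sea-breeze transition time at
    each grid point; for a Gaussian histogram of differences, the sample
    mean is the timing bias and the sample standard deviation the spread.
    A moment-based stand-in for the CEM's histogram fit."""
    diffs = [m - o for m, o in zip(model_times, observed_times)]
    return statistics.fmean(diffs), statistics.stdev(diffs)

# Hypothetical transition times (hours, local): the model initiates the
# sea breeze about half an hour late, with little scatter.
bias, spread = timing_bias_and_spread([13.5, 14.0, 14.6, 15.1],
                                      [13.0, 13.5, 14.0, 14.7])
```

A positive bias means the forecast front transitions later than observed; the spread quantifies how consistent that error is across the comparison domain.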
Numerical prediction of the monsoon depression of 5-7 July 1979 [Monsoon Experiment (MONEX)]
NASA Technical Reports Server (NTRS)
Shukla, J.; Atlas, R.; Baker, W. E.
1981-01-01
A well-defined monsoon depression was used for two assimilation and forecast experiments: (1) using conventional surface and upper-air data, and (2) using these data plus MONEX data. The data sets were assimilated and used with a general circulation model to make numerical predictions. The model, the analysis and assimilation procedure, the differences in the analyses due to the different data inputs, and the differences in the numerical predictions are described. The MONEX data have a positive impact, although the differences after 24 hr are not significant. The MONEX assimilation does not agree with the manually analyzed location of the depression center. The 2.5 x 3 deg horizontal resolution of the prediction model is too coarse. The assimilation of geopotential height data derived from satellite soundings generated gravity waves with amplitudes similar to the meteorologically significant features investigated.
Rorick, Amber; Michael, Matthew A; Yang, Liu; Zhang, Yong
2015-09-03
Oxygen is an important element in most biologically significant molecules, and experimental solid-state (17)O NMR studies have provided numerous useful structural probes to study these systems. However, computational predictions of solid-state (17)O NMR chemical shift tensor properties are still challenging in many cases, and in particular, each of the prior computational works is basically limited to one type of oxygen-containing system. This work provides the first systematic study of the effects of geometry refinement, method, and basis sets for metal and nonmetal elements in both geometry optimization and NMR property calculations of some biologically relevant oxygen-containing compounds with a good variety of XO bonding groups (X = H, C, N, P, and metal). The experimental range studied spans 1455 ppm, a major part of the reported (17)O NMR chemical shifts in organic and organometallic compounds. A number of computational factors toward relatively general and accurate predictions of (17)O NMR chemical shifts were studied to provide helpful and detailed suggestions for future work. For the studied kinds of oxygen-containing compounds, the best computational approach results in a theory-versus-experiment correlation coefficient (R(2)) value of 0.9880 and a mean absolute deviation of 13 ppm (1.9% of the experimental range) for isotropic NMR shifts and an R(2) value of 0.9926 for all shift-tensor properties. These results shall facilitate future computational studies of (17)O NMR chemical shifts in many biologically relevant systems, and the high accuracy may also help the refinement and determination of active-site structures of some oxygen-containing substrate-bound proteins.
Costigan, K.R.; Flicker, D.G.
1995-09-01
The South Area of Tooele Army Depot is one of the US Army's storage facilities for its stockpile of chemical weapon agents. The Department of Defense is preparing to destroy the aging stockpiles of lethal chemical munitions, which have existed since the end of World War II. Although the danger is slight, accurate predictions of the wind fields in the valley and accurate dispersion calculations are important in the event of an accident involving toxic chemicals at the depot. In order to prepare for an emergency that might involve a release of toxic agents to the atmosphere, the Higher Order Turbulence Model for Atmospheric circulations (HOTMAC) and its companion code RAndom Particle and Diffusion (RAPTAD) have been adapted for use in predicting where dangerous amounts of these chemicals may travel. Both codes have been applied to a number of air quality studies in the past, including previous dispersion studies at Tooele.
Carswell, Dave; Hilton, Andy; Chan, Chris; McBride, Diane; Croft, Nick; Slone, Avril; Cross, Mark; Foster, Graham
2013-08-01
The objective of this study was to demonstrate the potential of Computational Fluid Dynamics (CFD) simulations in predicting the levels of haemolysis in ventricular assist devices (VADs). Three different prototypes of a radial flow VAD have been examined experimentally and computationally using CFD modelling to assess device haemolysis. Numerical computations of the flow field were performed using a CFD model developed with the commercial software Ansys CFX 13 and a set of custom haemolysis analysis tools. Experimental values for the Normalised Index of Haemolysis (NIH) have been calculated as 0.020 g/100 L, 0.014 g/100 L and 0.0042 g/100 L for the three designs. Numerical analysis predicts an NIH of 0.021 g/100 L, 0.017 g/100 L and 0.0057 g/100 L, respectively. The actual differences between experimental and numerical results vary between 0.0012 and 0.003 g/100 L, with a variation of 5% for Pump 1 and slightly larger percentage differences for the other pumps. The work detailed herein demonstrates how CFD simulation and, more importantly, the numerical prediction of haemolysis may be used as an effective tool to help the designers of VADs manage the flow paths within pumps, resulting in a less haemolytic device.
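The experimental NIH values quoted above are conventionally computed from the rise in plasma free haemoglobin over a recirculating loop test. A sketch assuming the ASTM F1841-style formula (the abstract does not state the exact expression used, so this is an assumption), with hypothetical test-loop numbers:

```python
def normalised_index_of_haemolysis(delta_fhb, priming_volume, hct, flow, minutes):
    """NIH in g/100 L: the rise in plasma free haemoglobin delta_fhb (g/L)
    over a loop test, corrected for haematocrit hct (%) and normalised to
    100 L of blood pumped (flow in L/min over the given minutes).
    ASTM F1841-style convention, stated here as an assumption."""
    return delta_fhb * priming_volume * (1.0 - hct / 100.0) * 100.0 / (flow * minutes)

# Hypothetical loop test: 0.05 g/L free-Hb rise in a 0.45 L loop,
# 5 L/min flow, 6 h duration, 30% haematocrit.
nih = normalised_index_of_haemolysis(0.05, 0.45, hct=30.0, flow=5.0, minutes=360.0)
```

The numerical counterpart replaces the measured free-haemoglobin rise with a damage estimate accumulated along computed flow paths, which is what the custom haemolysis analysis tools provide.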
Numerical predictions for planets in the debris discs of HD 202628 and HD 207129
NASA Astrophysics Data System (ADS)
Thilliez, E.; Maddison, S. T.
2016-04-01
Resolved debris disc images can exhibit a range of radial and azimuthal structures, including gaps and rings, which can result from planetary companions shaping the disc by their gravitational influence. Currently, there are no tools available to determine the architecture of potential companions from disc observations. Recent work by Rodigas, Malhotra & Hinz presents how one can estimate the maximum mass and minimum semimajor axis of a hidden planet empirically from the width of the disc in scattered light. In this work, we apply the predictions of Rodigas et al. to two debris discs, HD 202628 and HD 207129. We aim to test whether the predicted orbits of the planets can explain the features of their debris discs, such as eccentricity and a sharp inner edge. We first run dynamical simulations using the predicted planetary parameters of Rodigas et al., and then numerically search for better parameters. Using a modified N-body code including radiation forces, we perform simulations over a broad range of planet parameters and compare synthetic images from our simulations to the observations. We find that the observational features of HD 202628 can be reproduced with a planet five times smaller than expected, located 30 AU beyond the predicted value, while the best match for HD 207129 is a planet located 5-10 AU beyond the predicted location, with a smaller eccentricity. We conclude that the predictions of Rodigas et al. provide a good starting point but should be complemented by numerical simulations.
NASA Astrophysics Data System (ADS)
Tetrault, Philippe-Andre
2000-10-01
In transonic flow, the aerodynamic interference that occurs on a strut-braced wing airplane, on pylons, and in other applications is significant. The purpose of this work is to provide relationships to estimate the interference drag of wing-strut, wing-pylon, and wing-body arrangements. These equations are obtained by fitting curves to the results of numerous Computational Fluid Dynamics (CFD) calculations using state-of-the-art codes that employ the Spalart-Allmaras turbulence model. In order to estimate the effects of strut thickness, flow Reynolds number, and the angle between the strut and an adjacent surface, inviscid and viscous calculations are performed on a symmetrical strut at an angle between parallel walls. The computations are conducted at a Mach number of 0.85 and Reynolds numbers of 5.3 and 10.6 million based on the strut chord. The interference drag is calculated as the drag increment of the arrangement compared to an equivalent two-dimensional strut of the same cross-section. The results show a rapid increase in interference drag as the strut deviates from a position perpendicular to the wall. Separation regions appear at low intersection angles, but viscosity generally has a positive effect, alleviating the strength of the shock near the junction and thus the drag penalty. When the thickness-to-chord ratio of the strut is reduced, the flowfield is disturbed only locally at the intersection of the strut with the wall. This study provides an equation for estimating the interference drag of simple intersections in transonic flow. In the course of performing the calculations associated with this work, an unstructured flow solver was utilized. Accurate drag prediction requires a very fine grid, and this leads to problems with the grid generator. Several challenges facing the unstructured grid methodology are discussed: slivers, grid refinement near the leading edge and at the trailing edge, grid
ERIC Educational Resources Information Center
Salley, Charles D.
Accurate enrollment forecasts are a prerequisite for reliable budget projections. This is because tuition payments make up a significant portion of a university's revenue, and anticipated revenue is the immediate constraint on current operating expenditures. Accurate forecasts are even more critical to revenue projections when a university's…
Thorndahl, Søren; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer
2016-12-01
Flooding produced by high-intensity local rainfall and drainage system capacity exceedance can have severe impacts in cities. In order to prepare cities for these types of flood events - especially in the future climate - it is valuable to be able to simulate these events numerically, both historically and in real time. There is a rather untested potential in real-time prediction of urban floods. In this paper, radar observations with different spatial and temporal resolutions, radar nowcasts with 0-2 h lead time, and numerical weather models with lead times up to 24 h are used as inputs to an integrated flood and drainage systems model in order to investigate the relative differences between inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show that it is possible to generate detailed flood maps in real time with high-resolution radar rainfall data, but forecast performance is rather limited for lead times of more than half an hour.
vom Saal, Frederick S; Welshons, Wade V
2014-12-01
There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources.
Mixing of a point-source indoor pollutant: Numerical predictions and comparison with experiments
Lobscheid, C.; Gadgil, A.J.
2002-01-01
In most practical estimates of indoor pollutant exposures, it is common to assume that the pollutant is uniformly and instantaneously mixed in the indoor space. It is also commonly known that this assumption is simplistic, particularly for point sources, and for short-term or localized indoor exposures. We report computational fluid dynamics (CFD) predictions of mixing time of a point-pulse release of a pollutant in an unventilated mechanically mixed isothermal room. We aimed to determine the adequacy of the standard RANS two-equation (κ-ε) turbulence model to predict the mixing times under these conditions. The predictions were made for the twelve mixing time experiments performed by Drescher et al. (1995). We paid attention to adequate grid resolution, suppression of numerical diffusion, and careful simulation of the mechanical blowers used in the experiments. We found that the predictions are in good agreement with experimental measurements.
Operational numerical weather prediction on the CYBER 205 at the National Meteorological Center
NASA Technical Reports Server (NTRS)
Deaven, D.
1984-01-01
The Development Division of the National Meteorological Center (NMC), which has the responsibility of maintaining and developing the numerical weather forecasting systems of the center, is discussed. Because of the mission of NMC, data products must be produced reliably and on time twice daily, free of surprises for forecasters. Personnel of the Development Division are in a rather unusual situation: they must develop new advanced techniques for numerical analysis and prediction utilizing current state-of-the-art methods, and implement them in an operational fashion without damaging the operations of the center. With the computational speed and resources now available from the CYBER 205, Development Division personnel will be able to introduce advanced analysis and prediction techniques into the operational job suite without disrupting the daily schedule. The capabilities of the CYBER 205 are discussed.
One-level prediction-A numerical method for estimating undiscovered metal endowment
McCammon, R.B.; Kork, J.O.
1992-01-01
One-level prediction has been developed as a numerical method for estimating undiscovered metal endowment within large areas. The method is based on a presumed relationship between a numerical measure of geologic favorability and the spatial distribution of metal endowment. Metal endowment within an unexplored area for which the favorability measure is greater than a favorability threshold level is estimated to be proportional to the area of that unexplored portion. The constant of proportionality is the ratio of the discovered endowment found within a suitably chosen control region, which has been explored, to the area of that explored region. In addition to the estimate of undiscovered endowment, a measure of the error of the estimate is also calculated. One-level prediction has been used to estimate the undiscovered uranium endowment in the San Juan basin, New Mexico, U.S.A. A subroutine to perform the necessary calculations is included. © 1992 Oxford University Press.
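The proportionality argument above reduces to one line of arithmetic: an endowment density estimated in the explored control region, applied to the favorable unexplored area. A sketch with hypothetical endowment and area figures:

```python
def one_level_prediction(discovered_endowment, explored_area,
                         favorable_unexplored_area):
    """Undiscovered endowment in the favorable unexplored area, estimated as
    proportional to that area; the constant of proportionality is the ratio
    of discovered endowment to area in an explored control region."""
    density = discovered_endowment / explored_area  # endowment per unit area
    return density * favorable_unexplored_area

# Hypothetical control region: 150,000 t of metal discovered over
# 10,000 km^2; 2,500 km^2 of unexplored ground exceeds the
# favorability threshold.
print(one_level_prediction(150_000.0, 10_000.0, 2_500.0))  # -> 37500.0
```

The paper's error measure would accompany this point estimate; only the favorable portion of the unexplored area (above the favorability threshold) enters the calculation.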
NASA Astrophysics Data System (ADS)
Boyko, Oleksiy; Zheleznyak, Mark
2015-04-01
The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code has recently been developed for multiprocessor systems: multicore PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated for case studies of flood prediction in mountain watersheds of the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
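The binary-tree decomposition can be sketched as a recursive bisection of per-subcatchment computational loads, splitting so that each half receives work proportional to the number of processors assigned to it. This serial Python stand-in (hypothetical loads, no MPI dependency) illustrates the balancing step only:

```python
def balanced_split(loads, n_parts):
    """Recursively bisect a list of (subcatchment_id, computational_load)
    pairs into n_parts contiguous groups of roughly equal total load,
    mirroring a binary-tree decomposition for processor load balancing."""
    if n_parts == 1:
        return [loads]
    n_left = n_parts // 2
    total = sum(w for _, w in loads)
    target = total * n_left / n_parts      # work the left subtree should get
    acc, cut = 0.0, 0
    for i, (_, w) in enumerate(loads):
        if acc + w / 2 > target:           # nearest prefix to the target load
            break
        acc += w
        cut = i + 1
    # keep every part non-empty
    cut = max(n_left, min(cut, len(loads) - (n_parts - n_left)))
    return (balanced_split(loads[:cut], n_left)
            + balanced_split(loads[cut:], n_parts - n_left))

# Eight subcatchments with uneven loads split across four processors.
parts = balanced_split([(f"c{i}", w) for i, w in
                        enumerate([5, 1, 1, 5, 4, 2, 2, 4])], 4)
```

In the MPI code, each leaf of the tree would be assigned to one rank, with halo exchange along the channel network handled by point-to-point messages.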
Numerical prediction of a turbulent evaporating fuel spray in a recirculating flow
NASA Astrophysics Data System (ADS)
Chen, Xi-Qing; Pereira, Fernandes
1994-03-01
A comprehensive spray evaporation model, based on a Eulerian model of the gas field and a Lagrangian model of the droplet field, in conjunction with a stochastic description of the effect of gas turbulence on droplet motion, is applied to a turbulent evaporating spray in a recirculating flow and validated by comparing predictions with measurements. Unlike many previous numerical predictions, this note avoids the usual problem of lacking detailed initial droplet-size and velocity-distribution conditions, and incorporates turbulent temporal and directional correlations. We have adopted Zhou and Leschziner's methodology to include these correlations in the numerical modeling, which proves to be an improvement over conventional particle-eddy modeling in simple flows.
On the horizontal resolution of fronts in numerical weather prediction models
NASA Technical Reports Server (NTRS)
Reeder, Michael J.; Smith, Roger K.
1988-01-01
A two-dimensional model is used to study the ability of current numerical weather prediction models to capture frontogenesis and to determine frontal motion. Particular attention is given to how well a simulation with a very coarse grid can represent the dynamics of a frontogenetically active model cold front resolved in a simulation with a relatively fine grid. A resolution between 50 and 100 km is satisfactory for capturing frontal-scale motions.
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.
2015-12-01
Weather prediction accuracy has become very important for the Northeast U.S. given the devastating effects of extreme weather events in recent years. Weather forecasting systems are used to build strategies that prevent catastrophic losses for human lives and the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill, as numerical prediction techniques are strengthened by increased supercomputing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) utilizing a Bayesian regression approach to improve the prediction of extreme weather events for the Northeast U.S. The basic concept behind the Bayesian regression approach is to take advantage of the strengths of the two atmospheric modeling systems and, similar to the multi-model ensemble approach, limit their weaknesses, which are related to systematic and random errors in the numerical prediction of physical processes. The first part of this study focuses on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by conducting an error analysis based on in-situ observations of wind speed and wind direction from meteorological stations of the National Weather Service (NWS), and on NCEP Stage IV multi-sensor radar precipitation data. The preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables using various combinations of sixteen of the events as predictors of the seventeenth. This presentation will illustrate the implemented methodology and the obtained results for wind speed, wind direction and precipitation, as well as set the research steps that will be
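The core of the Bayesian regression combination, weighting each model by its estimated error variance, can be sketched as a precision-weighted average. The two-model Gaussian-error form below is a minimal stand-in for the study's estimator, with hypothetical forecasts and variances:

```python
def combine_forecasts(f1, f2, var1, var2):
    """Precision-weighted combination of two model forecasts, assuming
    independent Gaussian errors with the given (optimised) error variances;
    the lower-variance model receives the larger weight."""
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    return w1 * f1 + (1.0 - w1) * f2

# Model 1 (error variance 1.0) is trusted three times as much as
# model 2 (variance 3.0), so the combination sits closer to model 1.
combined = combine_forecasts(10.0, 14.0, var1=1.0, var2=3.0)
```

In the study, the variances would be those optimised over the sixteen training storms and then applied to the held-out seventeenth event.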
Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction
NASA Technical Reports Server (NTRS)
Lee, Seongkyu; Brentner, Kenneth S.; Farassat, F.; Morris, Philip J.
2008-01-01
Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. The pressure gradient is needed to impose the boundary condition in acoustic scattering problems and is thus a key ingredient of their solution. The first formulation is derived from the gradient of the Ffowcs Williams-Hawkings (FW-H) equation. This formulation has a form involving the observer-time differentiation outside the integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. This formulation avoids numerical time differentiation with respect to the observer time, which is computationally more efficient. The acoustic pressure gradient predicted by these new formulations is validated through comparison with available exact solutions for stationary and moving monopole sources. The agreement between the predictions and exact solutions is excellent. The formulations are applied to rotor noise problems for two model rotors. A purely numerical approach is compared with the analytical formulations. The agreement between the analytical formulations and the numerical method is excellent for both stationary and moving observer cases.
NASA Astrophysics Data System (ADS)
Schiavon, Ricardo P.
2007-07-01
We present a new set of model predictions for 16 Lick absorption line indices from Hδ through Fe5335 and UBV colors for single stellar populations with ages ranging between 1 and 15 Gyr, [Fe/H] ranging from -1.3 to +0.3, and variable abundance ratios. The models are based on accurate stellar parameters for the Jones library stars and a new set of fitting functions describing the behavior of line indices as a function of effective temperature, surface gravity, and iron abundance. The abundances of several key elements in the library stars have been obtained from the literature in order to characterize the abundance pattern of the stellar library, thus allowing us to produce model predictions for any set of abundance ratios desired. We develop a method to estimate mean ages and abundances of iron, carbon, nitrogen, magnesium, and calcium that explores the sensitivity of the various indices modeled to those parameters. The models are compared to high-S/N data for Galactic clusters spanning the range of ages, metallicities, and abundance patterns of interest. Essentially all line indices are matched when the known cluster parameters are adopted as input. Comparing the models to high-quality data for galaxies in the nearby universe, we reproduce previous results regarding the enhancement of light elements and the spread in the mean luminosity-weighted ages of early-type galaxies. When the results from the analysis of blue and red indices are contrasted, we find good consistency in the [Fe/H] that is inferred from different Fe indices. Applying our method to estimate mean ages and abundances from stacked SDSS spectra of early-type galaxies brighter than L*, we find mean luminosity-weighted ages of the order of ~8 Gyr and iron abundances slightly below solar. Abundance ratios, [X/Fe], tend to be higher than solar and are positively correlated with galaxy luminosity. Of all elements, nitrogen is the most strongly correlated with galaxy luminosity, which seems to indicate
NASA Astrophysics Data System (ADS)
Chiumenti, M.; Cervera, M.; Agelet de Saracibar, C.; Dialami, N.
2013-05-01
In this work a novel finite element technology based on a three-field mixed formulation is presented. The Variational Multi Scale (VMS) method is used to circumvent the LBB stability condition, allowing the use of linear piece-wise interpolations for the displacement, stress and pressure fields. The result is an enhanced stress field approximation which enables stress-accurate results in nonlinear computational mechanics. The use of an independent nodal variable for the pressure field allows for an ad hoc treatment of the incompressibility constraint. This is a mandatory requirement due to the isochoric nature of the plastic strain in metal forming processes. The highly non-linear stress field typically encountered in the Friction Stir Welding (FSW) process is used as an example to show the performance of this new FE technology. The numerical simulation of the FSW process is tackled by means of an Arbitrary-Lagrangian-Eulerian (ALE) formulation. The computational domain is split into three different zones: the work-piece (defined by a rigid visco-plastic behaviour in the Eulerian framework), the pin (within the Lagrangian framework) and finally the stir-zone (ALE formulation). A fully coupled thermo-mechanical analysis is introduced showing the heat fluxes generated by the plastic dissipation in the stir-zone (Sheppard rigid-viscoplastic constitutive model) as well as the frictional dissipation at the contact interface (Norton frictional contact model). Finally, tracers have been implemented to show the material flow around the pin, allowing a better understanding of the welding mechanism. Numerical results are compared with experimental evidence.
ERIC Educational Resources Information Center
Lin, Jing-Wen
2016-01-01
Holding scientific conceptions and having the ability to accurately predict students' preconceptions are a prerequisite for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9 years old) students were…
Numerical prediction of vortex cores of the leading and trailing edges of delta wings
NASA Technical Reports Server (NTRS)
Kandil, O. A.
1980-01-01
The purpose of the present paper is to predict the roll-up of the vortex sheets emanating from the leading- and trailing-edges of delta wings with emphasis on the interaction of vortex cores beyond the trailing edge. The motivation behind the present work is the recent experimental data published by Hummel. The Nonlinear Discrete-Vortex method (NDV-method) is modified and extended to predict the leading- and trailing-vortex cores beyond the trailing edge. The present model alleviates the problems previously encountered in predicting satisfactory pressure distributions. This is accomplished by lumping the free-vortex lines during the iteration procedure. The leading- and trailing-edge cores and their feeding sheets are obtained as parts of the solution. The numerical results show that the NDV-method is successful in confirming the formation of a trailing-edge core with opposite circulation and opposite roll-up to those of the leading-edge core. This work is a breakthrough in high-angle-of-attack aerodynamics; moreover, it is the first numerical prediction for this problem.
NASA Astrophysics Data System (ADS)
Okabe, Tomonaga; Yashiro, Shigeki
This study proposes the cohesive zone model (CZM) for predicting fatigue damage growth in notched carbon-fiber-reinforced plastic (CFRP) cross-ply laminates. In this model, damage growth in the fracture process of cohesive elements due to cyclic loading is represented by the conventional damage mechanics model. We preliminarily investigated whether this model can appropriately express fatigue damage growth for a circular crack embedded in an isotropic solid material. This investigation demonstrated that the model could reproduce the results of the well-established fracture-mechanics model combined with Paris' law by tuning adjustable parameters. We then numerically investigated the damage process in notched CFRP cross-ply laminates under tensile cyclic loading and compared the predicted damage patterns with those in experiments reported by Spearing et al. (Compos. Sci. Technol. 1992). The predicted damage patterns agreed with the experimental results, which exhibited the extension of multiple types of damage (e.g., splits, transverse cracks and delaminations) near the notches.
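The fracture-mechanics baseline mentioned above can be sketched in a few lines. All material constants below are hypothetical placeholders, not values from the paper; the stress-intensity-factor range for a penny-shaped (embedded circular) crack under a remote stress range Δσ is ΔK = (2/√π)·Δσ·√a, and Paris' law da/dN = C·(ΔK)^m is integrated cycle by cycle.

```python
import numpy as np

# Hypothetical Paris constants (dK in MPa*sqrt(m), da/dN in m/cycle)
C, m = 1.0e-11, 3.0
dsigma = 100.0                       # remote stress range [MPa]
a0, af = 1.0e-3, 5.0e-3              # initial and final crack radii [m]
B = 2.0 * dsigma / np.sqrt(np.pi)    # penny-shaped crack: dK = B * sqrt(a)

# Explicit cycle-by-cycle integration, in blocks of dN cycles for speed
dN, a, N = 1000, a0, 0
while a < af:
    a += C * (B * np.sqrt(a)) ** m * dN
    N += dN

# Closed-form life for m = 3, as a cross-check on the integration
N_exact = 2.0 * (a0 ** -0.5 - af ** -0.5) / (C * B ** 3)
print(N, round(N_exact))
```

The explicit integration agrees with the closed form to within a few per cent here; a damage-mechanics CZM, as in the paper, replaces this ΔK-based growth law with a stiffness-degradation rule tuned to reproduce the same crack-growth rates.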
Three dimensional numerical prediction of icing related power and energy losses on a wind turbine
NASA Astrophysics Data System (ADS)
Sagol, Ece
, while the latter performs all the steps in the 3D domain. The Fully-3D method yields more accurate predictions for a clean blade. For icing conditions, a validation is not possible, owing to the lack of experimental data. However, the two methods produce quite different results for the ice shape and the performance of the iced blade. A critical analysis of the results shows that, although the computational cost of the Fully-3D method is much higher, icing analyses in 2D may lack accuracy, because the ice shape and the related power loss are compromised by not considering the 3D features of rotational flow. While performing the CFD computations on the iced blade, the rough surface of the ice is smoothed to a degree, in order to prevent numerical instability and to keep the mesh size within a reasonable limit. However, roughness effects cannot be excluded altogether, as they contribute significantly to performance reduction. We consider roughness through a modification in the CFD code, and assess its effect on performance for the clean blade.
NASA Astrophysics Data System (ADS)
Peterson, D. A.; Wang, J.; Hyer, E. J.; Ichoku, C. M.
2012-12-01
Smoke emissions estimates used in air quality and visibility forecasting applications are currently limited by the information content of satellite fire observations, and the lack of a skillful short-term forecast of changes in fire activity. This study explores the potential benefits of a recently developed sub-pixel-based calculation of fire radiative power (FRPf) from the MODerate Resolution Imaging Spectroradiometer (MODIS), which provides more precise estimates of the radiant energy (over the retrieved fire area) that, in turn, improve estimates of the thermal buoyancy of smoke plumes and may be helpful in characterizing the meteorological effects on fire activity for large fire events. Results show that unlike the current FRP product, the incorporation of FRPf produces a statistically significant correlation (R = 0.42) with smoke plume height data provided by the Multi-angle Imaging SpectroRadiometer (MISR) and several meteorological variables, such as surface wind speed and temperature, which may be useful for discerning cases where smoke was injected above the boundary layer. Drawing from recent advances in numerical weather prediction (NWP), this study also examines the meteorological conditions characteristic of fire ignition, growth, decay, and extinction, which are used to develop an automated, 24-hour prediction of satellite fire activity. Satellite fire observations from MODIS and geostationary sensors show that the fire prediction model is an improvement (RMSE reduction of 13 - 20%) over the forecast of persistence commonly used by near-real-time fire emission inventories. The ultimate goal is to combine NWP data and satellite fire observations to improve both analysis and prediction of biomass-burning emissions, through improved understanding of the interactions between fire activity and weather at scales appropriate for operational modeling. This is a critical step toward producing a global fire prediction model and improving operational forecasts of
Numerical predictions and experimental results of a dry bay fire environment.
Suo-Anttila, Jill Marie; Gill, Walter; Black, Amalia Rebecca
2003-11-01
The primary objective of the Safety and Survivability of Aircraft Initiative is to improve the safety and survivability of systems by using validated computational models to predict the hazard posed by a fire. To meet this need, computational model predictions and experimental data have been obtained to provide insight into the thermal environment inside an aircraft dry bay. The calculations were performed using the Vulcan fire code, and the experiments were completed using a specially designed full-scale fixture. The focus of this report is to present comparisons of the Vulcan results with experimental data for a selected test scenario and to assess the capability of the Vulcan fire field model to accurately predict dry bay fire scenarios. Also included is an assessment of the sensitivity of the fire model predictions to boundary condition distribution and grid resolution. To facilitate the comparison with experimental results, a brief description of the dry bay fire test fixture and a detailed specification of the geometry and boundary conditions are included. Overall, the Vulcan fire field model has shown the capability to predict the thermal hazard posed by a sustained pool fire within a dry bay compartment of an aircraft, although more extensive experimental data and rigorous comparison are required for model validation.
NASA Astrophysics Data System (ADS)
Kavetski, D.; Clark, M. P.; Fenicia, F.
2011-12-01
Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large-scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc., are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
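The start-of-step versus end-of-step flux issue highlighted above can be reproduced with a one-equation model. The sketch below (hypothetical parameters, not from any particular hydrological model) integrates a linear reservoir dS/dt = P - k·S with both flux choices and compares them against the exact solution at a daily time step.

```python
import numpy as np

# Linear reservoir dS/dt = P - k*S: same model equation, two implementations.
k, P, dt, S0, n = 0.5, 2.0, 1.0, 10.0, 10   # rate [1/day], forcing, 1-day step

S_exp, S_imp = S0, S0
for _ in range(n):
    S_exp = S_exp + dt * (P - k * S_exp)       # start-of-step flux (explicit Euler)
    S_imp = (S_imp + dt * P) / (1.0 + dt * k)  # end-of-step flux (implicit Euler)

# Analytical solution of the same ODE, for reference
S_exact = P / k + (S0 - P / k) * np.exp(-k * n * dt)
print(S_exp, S_imp, S_exact)
```

Both schemes converge to the true storage as dt shrinks, but at this step size they disagree with each other by more than either disagrees with the exact solution, which is precisely the kind of implementation artifact the abstract warns can contaminate calibration and sensitivity analysis.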
Post audit of a numerical prediction of wellfield drawdown in a semiconfined aquifer system
Stewart, M.; Langevin, C.
1999-01-01
A numerical ground water flow model was created in 1978 and revised in 1981 to predict the drawdown effects of a proposed municipal wellfield permitted to withdraw 30 million gallons per day (mgd; 1.1 × 10⁵ m³/day) of water from the semiconfined Floridan Aquifer system. The predictions are based on the assumption that water levels in the semiconfined Floridan Aquifer reach a long-term, steady-state condition within a few days of initiation of pumping. Using this assumption, a 75 day simulation without water table recharge, pumping at the maximum permitted rates, was considered to represent a worst-case condition and the greatest drawdowns that could be experienced during wellfield operation. This method of predicting wellfield effects was accepted by the permitting agency. For this post audit, observed drawdowns were derived by taking the difference between pre-pumping and post-pumping potentiometric surface levels. Comparison of predicted and observed drawdowns suggests that actual drawdown over a 12 year period exceeds predicted drawdown by a factor of two or more. Analysis of the source of error in the 1981 predictions suggests that the values used for transmissivity, storativity, specific yield, and leakance are reasonable at the wellfield scale. Simulation using actual 1980-1992 pumping rates improves the agreement between predicted and observed drawdowns. The principal source of error is the assumption that water levels in a semiconfined aquifer achieve a steady-state condition after a few days or weeks of pumping. Simulations using a version of the 1981 model modified to include recharge and evapotranspiration suggest that it can take hundreds of days or several years for water levels in the linked Surficial and Floridan Aquifers to reach an apparent steady-state condition, and that slow declines in levels continue for years after the initiation of pumping. While the 1981 'impact' model can be used for reasonably predicting short-term, wellfield
Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction
NASA Technical Reports Server (NTRS)
Lee, Seongkyu; Brentner, Kenneth S.; Farassat, Fereidoun
2007-01-01
The scattering of rotor noise is an area that has received little attention over the years, yet the limited work that has been done has shown that both the directivity and intensity of the acoustic field may be significantly modified by the presence of scattering bodies. One of the inputs needed to compute the scattered acoustic field is the acoustic pressure gradient on a scattering surface. Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. These formulations are presented in this paper. The first formulation is derived by taking the gradient of Farassat's retarded-time Formulation 1A. Although this formulation is relatively simple, it requires numerical time differentiation of the acoustic integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. The acoustic pressure gradient predicted by these new formulations is validated through comparison with the acoustic pressure gradient determined by a purely numerical approach for two model rotors. The agreement between the analytic formulations and the numerical method is excellent for both stationary and moving observer cases.
Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code
NASA Astrophysics Data System (ADS)
Paolucci, Roberto; Stupazzini, Marco
2008-07-01
Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, which is hard to predict and to cast in a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground-motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.
NASA Astrophysics Data System (ADS)
Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.
2013-12-01
The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and the unstable wave regime where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath the mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales
Numerical Prediction of Wave Forces on a Breakwater under Tsunami Loading
NASA Astrophysics Data System (ADS)
Brucker, Kyle A.; Oshnack, Mary Beth; O'Shea, Thomas T.; Cox, Dan; Dommermuth, Douglas G.
2010-11-01
Numerical Flow Analysis (NFA) predictions of wave propagation and wave-impact loading are compared to the Oregon State University (OSU) O.H. Hinsdale Wave Research Laboratory tsunami experiments (Oshnack et al. 2009). The simulations were designed to replicate the experiments such that a soliton is sent down a wave flume, runs up a small beach, and impacts with a breakwater. The soliton is 1.2m high in a water depth of 2.29m and travels over 61m before hitting the breakwater. The NFA predictions are compared to laboratory measurements of a) free-surface elevation at several locations down the flume and b) impact pressure at the base of the breakwater. The free-surface elevations as predicted by NFA are in excellent agreement with the experimental measurements. This shows that NFA can simulate the propagation of waves over long distances with minimal amplitude and dispersion errors. Pressures that are induced by the jet are important because in certain coastal areas buildings must be designed to sustain tsunami loads. The pressure predictions over the duration of breaking agree very well with laboratory measurements. The peak pressures predicted by NFA are in excellent agreement with experiments.
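As a rough plausibility check on the quoted experimental scales (standard weakly nonlinear solitary-wave theory, not an NFA result), a soliton of height H in still-water depth d travels at approximately c = √(g(d + H)):

```python
import math

# Flume parameters taken from the abstract; the celerity formula is the
# classical solitary-wave estimate, not a value reported by the experiments.
g, d, H, L = 9.81, 2.29, 1.2, 61.0   # gravity, depth [m], wave height [m], flume run [m]

c = math.sqrt(g * (d + H))           # solitary-wave celerity [m/s]
t = L / c                            # transit time to the breakwater [s]
print(c, t)
```

This gives a celerity of roughly 5.9 m/s and a transit time on the order of ten seconds, consistent with a long-distance propagation test of the kind described.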
Numerical Prediction of the Dimensioning of Tools for the Extrusion Process of Rubber Profiles
NASA Astrophysics Data System (ADS)
Müllner, Herbert W.; Wieczorek, André; Eberhardsteiner, Josef
2007-04-01
In this contribution numerical simulations of realistic extrusion tools will be presented. Generally, the geometry of the desired rubber profile cannot be used for the dimensioning of the tool. The shape of the corresponding tool under consideration of the material behavior needs to be predicted. Therefore, simulations were performed with the finite element based CFD program POLYFLOW under usage of an inverse calculation approach. The underlying material parameters will be provided by a material characterization which is based on capillary-experiments in combination with extrudate swell measurements. Thus, more realistic simulations of the extrudate swell phenomenon and its influence on the resulting profile geometry are possible. The experimental validation of the new characterization method will be done by means of numerical simulations of the capillary-experiment.
Dragna, Didier; Blanc-Benon, Philippe; Poisson, Franck
2014-03-01
Results from outdoor acoustic measurements performed in a railway site near Reims in France in May 2010 are compared to those obtained from a finite-difference time-domain solver of the linearized Euler equations. During the experiments, the ground profile and the different ground surface impedances were determined. Meteorological measurements were also performed to deduce mean vertical profiles of wind and temperature. An alarm pistol was used as a source of impulse signals and three microphones were located along a propagation path. The various measured parameters are introduced as input data into the numerical solver. In the frequency domain, the numerical results are in good agreement with the measurements up to a frequency of 2 kHz. In the time domain, apart from a time shift, the predicted waveforms closely match the measured waveforms.
Short-periodic variations and second-order numerical averaging. [for orbit prediction problems
NASA Technical Reports Server (NTRS)
Lutzky, D.; Uphoff, C.
1975-01-01
The principal disadvantage of the method of numerical averaging is that it provides only the average time history of the orbital elements and yields no information about the high-frequency or short-periodic variations that occur inside the averaging interval. This paper contains a description of a technique for recovering the short-periodic variations by minor modifications to the averaging process so as to permit the construction of a Fourier series for the osculating elements. The availability of this series permits the extension of the averaging technique to higher order and allows us to account for short-periodic coupling of the perturbations. Comparisons of the results with numerically integrated solutions are presented for three distinct orbit prediction problems.
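The idea can be illustrated with a toy "osculating element" whose rate has a slow secular part plus a short-periodic term (an illustration of numerical averaging in general, not the authors' formulation): quadrature over one period isolates the secular rate, while the Fourier coefficient of the discarded term is exactly what a short-periodic recovery would store.

```python
import numpy as np

# Toy element rate: slow secular drift eps plus a short-periodic term of
# amplitude A at frequency w. All values are illustrative.
eps, A, w = 1.0e-3, 0.05, 2.0 * np.pi

def rate(t):
    return eps + A * w * np.cos(w * t)       # d(element)/dt

# Uniform sampling over one full period: the rectangle rule is spectrally
# accurate for periodic integrands, so the averages below are essentially exact.
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
avg_rate = rate(t).mean()                    # numerically averaged (mean) rate ~ eps
c1 = 2.0 * (rate(t) * np.cos(w * t)).mean()  # first Fourier coefficient ~ A*w
print(avg_rate, c1)
```

The averaged rate reproduces only the secular drift, and the Fourier coefficient captures the short-periodic variation that plain averaging discards; storing a series of such coefficients is the essence of the recovery technique described above.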
NASA Astrophysics Data System (ADS)
Guo, Bingjie; Bitner-Gregersen, Elzbieta Maria; Sun, Hui; Block Helmers, Jens
2013-04-01
Earlier investigations have indicated that proper prediction of nonlinear loads and responses due to nonlinear waves is important for ship safety in extreme seas. However, the nonlinear loads and responses in extreme seas have not been sufficiently investigated yet, particularly when rogue waves are considered. A question remains whether the existing linear codes can predict nonlinear loads and responses with satisfactory accuracy and how large the deviations from linear predictions are. To investigate this, response statistics have been studied based on model tests carried out with an LNG tanker in the towing tank of the Technical University of Berlin (TUB), and compared with the statistics derived from numerical simulations using the DNV code WASIM, a potential-flow code for wave-ship interaction based on a 3D panel method which can perform both linear and nonlinear simulations. Numerical simulations with WASIM and model tests in extreme and rogue waves have been performed. The analysis of ship motions (heave and pitch) and bending moments, in both regular and irregular waves, is performed. The results from the linear and nonlinear simulations are compared with experimental data to indicate the impact of wave non-linearity on loads and response calculations when the code based on the Rankine Panel Method is used. The study shows that nonlinearities may have a significant effect on extreme motions and bending moments generated by strongly nonlinear waves. The effect of water depth on ship responses is also demonstrated using numerical simulations. Uncertainties related to the results are discussed, giving particular attention to sampling variability.
NASA Technical Reports Server (NTRS)
Ahmed, S.; Tannehill, J. C.
1990-01-01
A new nonequilibrium turbulence closure model has been developed for computing wall-bounded two-dimensional turbulent flows. This two-layer eddy viscosity model was motivated by the success of the Johnson-King model in separated flow regions. The influence of history effects is described by an ordinary differential equation developed from the turbulent kinetic energy equation. The performance of the present model has been evaluated by solving the flow around three airfoils using the Reynolds time-averaged Navier-Stokes equations. Excellent results were obtained for both attached and separated turbulent flows about the NACA 0012 airfoil, the RAE 2822 airfoil, and the Integrated Technology A 153W airfoil. Based on the comparison of the numerical solutions with the available experimental data, it is concluded that the new nonequilibrium turbulence model accurately captures the history effects of convection and diffusion on turbulence.
Li, Liqi; Cui, Xiang; Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi
2014-01-01
Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Amongst homology-based approaches, the accuracies of protein structural class prediction are sufficiently high for high-similarity datasets, but still far from being satisfactory for low-similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high- and low-similarity datasets. This method is based on Support Vector Machine (SVM) in conjunction with integrated features from position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors through recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low-similarity datasets.
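A minimal sketch of this pipeline with scikit-learn, using synthetic data as a stand-in for the integrated PSSM/PROFEAT/GO feature vectors (all dataset and parameter choices below are illustrative, and plain cross-validation stands in for the jackknife test):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the integrated feature vectors
X, y = make_classification(n_samples=300, n_features=60, n_informative=10,
                           random_state=0)

svm = SVC(kernel="linear", C=1.0)               # linear kernel exposes coef_ for ranking
rfe = RFE(svm, n_features_to_select=10, step=5)  # drop the 5 lowest-ranked features per pass
rfe.fit(X, y)

# Evaluate the SVM on the surviving features
acc = cross_val_score(svm, X[:, rfe.support_], y, cv=5).mean()
print(f"selected {rfe.support_.sum()} features, CV accuracy = {acc:.2f}")
```

SVM-RFE requires an estimator that exposes per-feature weights, which is why the linear kernel is used for the ranking passes even if a nonlinear kernel is used for the final classifier.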
A lateral boundary formulation for multi-level prediction models. [numerical weather forecasting
NASA Technical Reports Server (NTRS)
Davies, H. C.
1976-01-01
A method is proposed for treating the lateral boundaries of a limited-area weather prediction model. The method involves the relaxation of the interior flow in the vicinity of the boundary to the external, fully prescribed flow. Analytical and numerical results obtained with a linearized multilevel model confirm the effectiveness of this computationally efficient method. The method is shown to give an adequate representation of outgoing gravity waves with and without an ambient shear flow and to allow the substantially undistorted transmission of geostrophically balanced flow out of the interior of the limited domain.
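The relaxation idea can be sketched in one dimension (illustrative grid, ramp width and wave speed, not the paper's multilevel model): after each advection step, the solution in a narrow rim is nudged toward the prescribed external flow with a weight that ramps from 0 in the interior to 1 at the wall, so an outgoing pulse leaves the limited domain with little spurious reflection.

```python
import numpy as np

# 1D advection u_t + c u_x = 0 on a limited domain with boundary relaxation
nx, c, dx = 200, 1.0, 1.0
dt = 0.5 * dx / c                            # CFL-stable step
x = np.arange(nx) * dx
u = np.exp(-0.5 * ((x - 50.0) / 8.0) ** 2)   # Gaussian pulse in the interior
u_ext = np.zeros(nx)                         # external (host-model) flow: quiescent

alpha = np.zeros(nx)                         # relaxation weights in a 10-point rim
ramp = np.linspace(0.0, 1.0, 10)
alpha[:10], alpha[-10:] = ramp[::-1], ramp   # weight 1 at the walls, 0 in the interior

for _ in range(400):                         # pulse advects out through the right rim
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])  # first-order upwind step (c > 0)
    u = (1.0 - alpha) * u + alpha * u_ext    # relax rim toward the external flow

print(np.abs(u).max())   # residual left behind after the pulse exits
```

The gradual ramp is the essential ingredient: clamping only the outermost point would reflect the outgoing wave back into the interior, whereas the relaxation zone absorbs it over several grid points.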
NASA Technical Reports Server (NTRS)
Davies, H. C.; Turner, R. E.
1977-01-01
A dynamical relaxation technique for updating prediction models is analyzed with the help of the linear and nonlinear barotropic primitive equations. It is assumed that a complete four-dimensional time history of some prescribed subset of the meteorological variables is known. The rate of adaptation of the flow variables toward the true state is determined for a linearized f-model, and for mid-latitude and equatorial beta-plane models. The results of the analysis are corroborated by numerical experiments with the nonlinear shallow-water equations.
NASA Technical Reports Server (NTRS)
Tuccillo, J. J.
1984-01-01
Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, which is fully vectorized and requires substantially less memory than other techniques such as the Leapfrog or Adams-Bashforth schemes, is discussed. The technique presented uses the Euler-backward time marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms which use only hardware vector instructions.
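The Euler-backward (Matsuno) step referred to above is a predictor-corrector that carries only a single time level between steps, which is the source of the memory saving over leapfrog or Adams-Bashforth. A sketch on the oscillation equation dy/dt = iωy (illustrative parameters):

```python
import numpy as np

# Euler-backward (Matsuno) predictor-corrector on dy/dt = i*w*y
w, dt, nsteps = 1.0, 0.1, 200
f = lambda y: 1j * w * y

y = 1.0 + 0.0j
for _ in range(nsteps):              # only one time level stored between steps
    y_star = y + dt * f(y)           # forward (predictor) step
    y = y + dt * f(y_star)           # backward (corrector) step using the predicted state

# Per-step amplification factor |1 + i*w*dt - (w*dt)**2|: below 1, so the
# scheme is stable but weakly damping for resolved oscillations.
amp = abs(1.0 + 1j * w * dt - (w * dt) ** 2)
print(abs(y), amp ** nsteps)
```

The trade-off relative to leapfrog is visible in the amplification factor: Matsuno damps the oscillation slightly each step, but it avoids leapfrog's computational mode and its extra stored time level.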
NASA Technical Reports Server (NTRS)
Wahba, Grace; Deepak, A. (Editor)
1988-01-01
The problem of merging direct and remotely sensed (indirect) data with forecast data to get an estimate of the present state of the atmosphere for the purpose of numerical weather prediction is examined. To carry out this merging optimally, it is necessary to provide an estimate of the relative weights to be given to the observations and forecast. It is possible to do this dynamically from the information to be merged, if the correlation structure of the errors from the various sources is sufficiently different. Some new statistical approaches to doing this are described, and conditions quantified in which such estimates are likely to be good.
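In the scalar case, the optimal merge weights the forecast and the observation by the inverses of their error variances. The simulation below (illustrative numbers, not an operational assimilation scheme) verifies that the resulting analysis error variance matches the theoretical value var_f·var_o/(var_f + var_o), and that it is smaller than either input's:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 10.0
var_f, var_o = 4.0, 1.0                    # forecast and observation error variances

n = 100_000
x_f = truth + rng.normal(0.0, np.sqrt(var_f), n)   # forecasts
y = truth + rng.normal(0.0, np.sqrt(var_o), n)     # observations

K = var_f / (var_f + var_o)                # optimal weight on the observation
x_a = x_f + K * (y - x_f)                  # analysis: inverse-variance weighting

var_a = np.var(x_a - truth)
print(var_a, var_f * var_o / (var_f + var_o))
```

The catch, as the abstract notes, is that K is only optimal if the error statistics are known; estimating those relative weights dynamically from the data being merged is exactly the problem the statistical approaches described here address.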
Numerical prediction of a draft tube flow taking into account uncertain inlet conditions
NASA Astrophysics Data System (ADS)
Brugiere, O.; Balarac, G.; Corre, C.; Metais, O.; Flores, E.; Pleroy
2012-11-01
The swirling turbulent flow in a hydroturbine draft tube is computed with a non-intrusive uncertainty quantification (UQ) method coupled to Reynolds-Averaged Navier-Stokes (RANS) modelling, in order to account, in the numerical prediction, for the physical uncertainties in the inlet flow conditions. The proposed approach yields not only mean velocity fields to be compared with measured profiles, as is customary in Computational Fluid Dynamics (CFD) practice, but also the variance of these quantities, from which error bars can be deduced on the computed profiles, thus making the comparison between experiment and computation more meaningful.
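Non-intrusive UQ of this kind treats the solver as a black box evaluated at sampled inlet conditions; sample statistics then give a mean profile and an error bar. The sketch below replaces the RANS solver with a hypothetical velocity-profile function of an uncertain inlet swirl parameter s (all names and numbers are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def solver(s, r):
    """Hypothetical black-box stand-in: axial velocity profile for inlet swirl s."""
    return (1.0 - r ** 2) * (1.0 + 0.3 * s * r)

r = np.linspace(0.0, 1.0, 11)          # radial stations of the "measured" profile
samples = rng.normal(1.0, 0.1, 500)    # uncertain inlet swirl, ~10% scatter
profiles = np.array([solver(s, r) for s in samples])

mean = profiles.mean(axis=0)           # mean computed profile
std = profiles.std(axis=0)             # error bar induced by inlet uncertainty
print(mean[5], std[5])                 # mid-radius velocity with its uncertainty
```

Here plain Monte Carlo sampling stands in for the (typically cheaper) polynomial-chaos quadratures used in practice; either way, the deliverable is the same mean-plus-error-bar profile the paper compares to measurements.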
NASA Astrophysics Data System (ADS)
Coulier, P.; Lombaert, G.; Degrande, G.
2014-06-01
The numerical prediction of vibrations in buildings due to railway traffic is a complicated problem where wave propagation in the soil couples the source (railway tunnel or track) and the receiver (building). This through-soil coupling is often neglected in state-of-the-art numerical models in order to reduce the computational cost. In this paper, the effect of this simplifying assumption on the accuracy of numerical predictions is investigated. A coupled finite element-boundary element methodology is employed to analyze the interaction between a building and a railway tunnel at depth or a ballasted track at the surface of a homogeneous halfspace, respectively. Three different soil types are considered. It is demonstrated that the dynamic axle loads can be calculated with reasonable accuracy using an uncoupled strategy in which through-soil coupling is disregarded. If the transfer functions from source to receiver are considered, however, large local variations in terms of vibration insertion gain are induced by source-receiver interaction, reaching up to 10 dB and higher, although the overall wave field is only moderately affected. A global quantification of the significance of through-soil coupling is made, based on the mean vibrational energy entering a building. This approach allows assessing the common assumption in seismic engineering that source-receiver interaction can be neglected if the distance between source and receiver is sufficiently large compared to the wavelength of waves in the soil. It is observed that the interaction between a source at depth and a receiver mainly affects the power flow distribution if the distance between source and receiver is smaller than the dilatational wavelength in the soil. Interaction effects for a railway track at grade are observed if the source-receiver distance is smaller than six Rayleigh wavelengths. A similar trend is revealed if the passage of a freight train is considered. The overall influence of dynamic
Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan
2014-08-14
In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using more expensive W1-F12 and W2-F12 methods on amino acids and G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.
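The error-cancellation bookkeeping underlying CBH-type schemes is ordinary Hess's-law algebra, which can be sketched generically. All energies below are made-up illustrative numbers in arbitrary units, not the paper's data.

```python
def heat_of_formation(e_target, reactants, products):
    """Hess's-law bookkeeping behind error-cancelling reaction schemes.
    For a balanced scheme  target + reactants -> products,
    the reaction enthalpy from computed electronic energies is
        dH_rxn = sum E(products) - E(target) - sum E(reactants),
    and the target's heat of formation follows from the fragments'
    (experimentally known) heats of formation:
        Hf(target) = sum Hf(products) - sum Hf(reactants) - dH_rxn.
    Each entry in `reactants`/`products` is a (computed_E, known_Hf) pair."""
    dh_rxn = (sum(e for e, _ in products) - e_target
              - sum(e for e, _ in reactants))
    return (sum(hf for _, hf in products)
            - sum(hf for _, hf in reactants) - dh_rxn)

# illustrative scheme with one reactant fragment and one product fragment
dhf = heat_of_formation(-100.0,
                        reactants=[(-50.0, -20.0)],
                        products=[(-160.0, -90.0)])
```

The accuracy of CBH-2 over CBH-1 comes from choosing fragments that preserve more of the target's local bonding environment, so the systematic errors of modest levels of theory cancel on both sides of the reaction.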
Defect reaction network in Si-doped InAs. Numerical predictions.
Schultz, Peter A.
2015-05-01
This report characterizes the defects in the defect reaction network in silicon-doped, n-type InAs predicted with first-principles density functional theory. The reaction network is deduced by following exothermic defect reactions, starting with the initially mobile interstitial defects reacting with common displacement-damage defects in Si-doped InAs, until culminating in immobile reaction products. The defect reactions and reaction energies are tabulated, along with the properties of all the silicon-related defects in the reaction network. This report extends the results for the properties of intrinsic defects in bulk InAs, as collated in SAND2013-2477, Simple intrinsic defects in InAs: Numerical predictions, to include Si-containing simple defects likely to be present in a radiation-induced defect reaction sequence.
NASA Astrophysics Data System (ADS)
Sampath, S.; Ganesan, V.
1986-04-01
A method is offered for measuring turbulence levels in three directions in gas turbine combustion systems and high intensity industrial furnaces, using a hot wire anemometer. A detailed analysis of the turbulence in the flow is necessary to achieve optimum combustion conditions, and until now there has been no established method available for measuring turbulence in swirling and recirculating flows. The merit of the new method is the use of a single-wire probe rather than the X-probe. The method has been used to measure turbulence levels in swirling recirculating flows generated by vane swirlers. From the measured turbulence levels, the kinetic energy of turbulence has been calculated and the results are compared with a well-established numerical prediction method. Mean velocity measurements have also been made using a 3-hole Pitot probe. The agreement between the measured and predicted values is quite satisfactory.
NASA Technical Reports Server (NTRS)
Gal-Chen, T.; Schmidt, B.; Uccellini, L. W.
1985-01-01
An attempt was made to offset the limitations of GEO satellites for supplying timely initialization data for numerical weather prediction models (NWP). The NWP considered combined an isentropic representation of the free atmosphere with a sigma-coordinate model for the lower 200 mb. A flux form of the predictive equations described vertical transport interactions at the boundary of the two model domains, thereby accounting for the poor vertical temperature and wind field resolution of GEO satellite data. A variational analysis approach was employed to insert low resolution satellite-sensed temperature data at varying rates. The model vertical resolution was limited to that available from the satellite. Test simulations demonstrated that accuracy increases with the frequency of data updates, e.g., every 0.5-1 hr. The tests also showed that extensive cloud cover negates the capabilities of IR sensors and that microwave sensors will be needed for temperature estimations for 500-1000 mb levels.
Denlinger, R.P.; Iverson, R.M.
2001-01-01
Numerical solutions of the equations describing flow of variably fluidized Coulomb mixtures predict key features of dry granular avalanches and water-saturated debris flows measured in physical experiments. These features include time-dependent speeds, depths, and widths of flows as well as the geometry of resulting deposits. Three-dimensional (3-D) boundary surfaces strongly influence flow dynamics because transverse shearing and cross-stream momentum transport occur where topography obstructs or redirects motion. Consequent energy dissipation can cause local deceleration and deposition, even on steep slopes. Velocities of surge fronts and other discontinuities that develop as flows cross 3-D terrain are predicted accurately by using a Riemann solution algorithm. The algorithm employs a gravity wave speed that accounts for different intensities of lateral stress transfer in regions of extending and compressing flow and in regions with different degrees of fluidization. Field observations and experiments indicate that flows in which fluid plays a significant role typically have high-friction margins with weaker interiors partly fluidized by pore pressure. Interaction of the strong perimeter and weak interior produces relatively steep-sided, flat-topped deposits. To simulate these effects, we compute pore pressure distributions using an advection-diffusion model with enhanced diffusivity near flow margins. Although challenges remain in evaluating pore pressure distributions in diverse geophysical flows, Riemann solutions of the depth-averaged 3-D Coulomb mixture equations provide a powerful tool for interpreting and predicting flow behavior. They provide a means of modeling debris flows, rock avalanches, pyroclastic flows, and related phenomena without invoking and calibrating rheological parameters that have questionable physical significance.
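The gravity wave speed entering such a Riemann solver can be sketched as follows. The lateral-stress coefficient `kappa` and its limiting values are a simplified reading of the abstract, not the authors' exact formulation.

```python
import math

def gravity_wave_speed(h, kappa, g=9.81):
    """Gravity wave speed for a depth-averaged granular flow,
    c = sqrt(kappa * g * h), where kappa is a lateral stress
    (earth-pressure) coefficient: near 1 for a fully fluidized flow
    (hydrostatic lateral stress), larger in regions of compressing
    flow, smaller in regions of extending flow."""
    return math.sqrt(kappa * g * h)

h = 1.0                                      # flow depth, m
c_fluidized = gravity_wave_speed(h, kappa=1.0)    # ~ sqrt(g*h)
c_compress = gravity_wave_speed(h, kappa=2.5)     # stiffer lateral response
```

Using a single fluid-like wave speed everywhere would mis-time the surge fronts; letting `kappa` vary with the local deformation state is what lets the solver capture them accurately.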
NASA Technical Reports Server (NTRS)
Homicz, G. F.; Moselle, J. R.
1985-01-01
A hybrid numerical procedure is presented for the prediction of the aerodynamic and acoustic performance of advanced turboprops. A hybrid scheme is proposed which in principle leads to a consistent simultaneous prediction of both fields. In the inner flow a finite difference method, the Approximate-Factorization Alternating-Direction-Implicit (ADI) scheme, is used to solve the nonlinear Euler equations. In the outer flow the linearized acoustic equations are solved via a Boundary-Integral Equation (BIE) method. The two solutions are iteratively matched across a fictitious interface in the flow so as to maintain continuity. At convergence the resulting aerodynamic load prediction will automatically satisfy the appropriate free-field boundary conditions at the edge of the finite difference grid, while the acoustic predictions will reflect the back-reaction of the radiated field on the magnitude of the loading source terms, as well as refractive effects in the inner flow. The equations and logic needed to match the two solutions are developed and the computer program implementing the procedure is described. Unfortunately, no converged solutions were obtained, due to unexpectedly large running times. The reasons for this are discussed and several means to alleviate the situation are suggested.
NASA Astrophysics Data System (ADS)
Thaker, A. A.; Chelliah, H. K.
1997-12-01
Modelling of the structure and the limiting flow turning angles of an oblique detonation wave, established by a two-dimensional wedge, requires the implementation of detailed chemical kinetic models involving a large number of chemical species. In this paper, a method of reducing the computational effort involved in simulating such high-speed reacting flows by implementing a systematically reduced reaction mechanism is presented. For a hydrogen-air mixture, starting with an elementary mechanism having eight species in 12 reactions, three alternate four-step reduced reaction mechanisms are developed by introducing the steady-state approximation for the reaction intermediates HO2, O and OH, respectively. Additional reduction of the computational effort is achieved by introducing simplifications to the thermochemical data evaluations. The influence of the numerical grid used in predicting the induction process behind the shock is also investigated. Comparisons of the induction zone predicted by two-dimensional oblique detonation wave calculations with that of a static reactor model (with initial conditions of the gas mixture specified by those behind the nonreactive oblique shock wave) are also presented. The reasonably good agreement between the three four-step reduced mechanism predictions and the starting mechanism predictions indicates that further reduction to a two-step mechanism is feasible for the physical flow time scales (corresponding to inflow Mach numbers of 8-10) considered here, and needs to be pursued in the future.
Mayes, Janice M; Mouraviev, Vladimir; Sun, Leon; Tsivian, Matvey; Madden, John F; Polascik, Thomas J
2011-01-01
We evaluate the reliability of routine sextant prostate biopsy to detect unilateral lesions. A total of 365 men with complete records including all clinical and pathologic variables who underwent a preoperative sextant biopsy and subsequent radical prostatectomy (RP) for clinically localized prostate cancer at our medical center between January 1996 and December 2006 were identified. When the sextant biopsy detects unilateral disease, according to RP results, the NPV is high (91%) with a low false negative rate (9%). However, the sextant biopsy has a PPV of 28% with a high false positive rate (72%). Therefore, a routine sextant prostate biopsy cannot provide reliable, accurate information about the unilaterality of tumor lesion(s).
In vivo validation of numerical prediction for turbulence intensity in an aortic coarctation.
Arzani, Amirhossein; Dyverfeldt, Petter; Ebbers, Tino; Shadden, Shawn C
2012-04-01
This paper compares numerical predictions of turbulence intensity with in vivo measurement. Magnetic resonance imaging (MRI) was carried out on a 60-year-old female with a restenosed aortic coarctation. Time-resolved three-directional phase-contrast (PC) MRI data was acquired to enable turbulence intensity estimation. A contrast-enhanced MR angiography (MRA) and a time-resolved 2D PCMRI measurement were also performed to acquire data needed to perform subsequent image-based computational fluid dynamics (CFD) modeling. A 3D model of the aortic coarctation and surrounding vasculature was constructed from the MRA data, and physiologic boundary conditions were modeled to match 2D PCMRI and pressure pulse measurements. Blood flow velocity data was subsequently obtained by numerical simulation. Turbulent kinetic energy (TKE) was computed from the resulting CFD data. Results indicate relative agreement (error ≈10%) between the in vivo measurements and the CFD predictions of TKE. The discrepancies in modeled vs. measured TKE values were within expectations due to modeling and measurement errors.
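The TKE computation itself is a direct application of the Reynolds decomposition; below is a minimal sketch on synthetic velocity records (not the study's data), where each component is split into a mean and a fluctuating part.

```python
import numpy as np

def turbulent_kinetic_energy(u, v, w):
    """TKE per unit mass from velocity-component time series at a point:
    k = 0.5 * (<u'^2> + <v'^2> + <w'^2>), primes denoting fluctuations
    about the temporal mean (Reynolds decomposition)."""
    k = 0.0
    for comp in (u, v, w):
        comp = np.asarray(comp, dtype=float)
        fluct = comp - comp.mean()
        k += 0.5 * np.mean(fluct**2)
    return k

# synthetic record: 1 m/s mean axial flow with small fluctuations
t = np.linspace(0.0, 1.0, 1000)
u = 1.0 + 0.1 * np.sin(40 * t)
v = 0.05 * np.cos(40 * t)
w = np.zeros_like(t)
k = turbulent_kinetic_energy(u, v, w)   # J/kg, i.e. m^2/s^2
```

In the PC-MRI case the fluctuation statistics are encoded in the intravoxel velocity distribution rather than a resolved time series, but the quantity being compared against CFD is the same k.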
NASA Astrophysics Data System (ADS)
Yuan, K. Y.; Yuan, W.; Ju, J. W.; Yang, J. M.; Kao, W.; Carlson, L.
2013-04-01
As asphalt pavements age and deteriorate, recurring pothole repair failures and propagating alligator cracks in the asphalt pavements have become a serious issue in daily life and have resulted in high repair costs for pavements and vehicles. To solve this urgent issue, pothole repair materials with superior durability and long service life are needed. In the present work, revolutionary pothole patching materials with high toughness and high fatigue resistance, reinforced with nano-molecular resins, have been developed to enhance their resistance to traffic loads and the service life of repaired potholes. In particular, DCPD resin (dicyclopentadiene, C10H12) with a ruthenium-based catalyst is employed to develop controlled properties that are compatible with aggregates and asphalt binders. In this paper, a multi-level numerical micromechanics-based model is developed to predict the viscoelastic properties and dynamic moduli of these innovative nano-molecular resin reinforced pothole patching materials. Irregular coarse aggregates in the finite element analysis are modeled as randomly dispersed multi-layer coated particles. The effective properties of asphalt mastic, which consists of fine aggregates, tar, cured DCPD and air voids, are theoretically estimated by the homogenization technique of micromechanics in conjunction with the elastic-viscoelastic correspondence principle. Numerical predictions of homogenized viscoelastic properties and dynamic moduli are demonstrated.
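The elastic-viscoelastic correspondence principle mentioned above replaces elastic moduli with complex, frequency-dependent ones. A minimal sketch for a standard linear solid follows; the model form and parameters are illustrative, not the paper's homogenized values.

```python
import numpy as np

def dynamic_modulus(omega, e_inf, e1, tau):
    """Complex modulus of a standard linear solid at angular frequency
    omega:  E*(w) = E_inf + E1 * (i w tau) / (1 + i w tau).
    The dynamic modulus reported for asphalt materials is |E*|."""
    iwt = 1j * omega * tau
    return np.abs(e_inf + e1 * iwt / (1.0 + iwt))

# low frequency -> relaxed modulus; high frequency -> glassy modulus
e_low = dynamic_modulus(1e-9, e_inf=2.0, e1=3.0, tau=1.0)   # ~ 2.0
e_high = dynamic_modulus(1e9, e_inf=2.0, e1=3.0, tau=1.0)   # ~ 5.0
```

The correspondence principle lets an elastic homogenization scheme be reused verbatim in the frequency domain by substituting such complex moduli for the phase stiffnesses.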
NASA Astrophysics Data System (ADS)
Morgut, M.; Jošt, D.; Nobile, E.; Škerlavaj, A.
2015-12-01
The numerical predictions of cavitating flow around a marine propeller working in non-uniform inflow and an axial turbine are presented. The cavitating flow is modelled using the homogeneous (mixture) model. Time-dependent simulations are performed for the marine propeller case using OpenFOAM. Three calibrated mass transfer models are alternatively used to model the mass transfer rate due to cavitation and the two-equation SST (Shear Stress Transport) turbulence model is employed to close the system of the governing equations. The predictions of the cavitating flow in an axial turbine are carried out with ANSYS-CFX, where only the native mass transfer model with tuned parameters is used. Steady-state simulations are performed in combination with the SST turbulence model, while time-dependent results are obtained with the more advanced SAS (Scale Adaptive Simulation) SST model. The numerical results agree well with the available experimental measurements, and the simulations performed with the three different calibrated mass transfer models are close to each other for the propeller flow. Regarding the axial turbine the effect of the cavitation on the machine efficiency is well reproduced only by the time dependent simulations.
Development of numerical model for predicting heat generation and temperatures in MSW landfills.
Hanson, James L; Yeşiller, Nazli; Onnen, Michael T; Liu, Wei-Lien; Oettle, Nicolas K; Marinos, Janelle A
2013-10-01
A numerical modeling approach has been developed for predicting temperatures in municipal solid waste landfills. Model formulation and details of boundary conditions are described. Model performance was evaluated using field data from a landfill in Michigan, USA. The numerical approach was based on finite element analysis incorporating transient conductive heat transfer. Heat generation functions representing decomposition of wastes were empirically developed and incorporated to the formulation. Thermal properties of materials were determined using experimental testing, field observations, and data reported in literature. The boundary conditions consisted of seasonal temperature cycles at the ground surface and constant temperatures at the far-field boundary. Heat generation functions were developed sequentially using varying degrees of conceptual complexity in modeling. First a step-function was developed to represent initial (aerobic) and residual (anaerobic) conditions. Second, an exponential growth-decay function was established. Third, the function was scaled for temperature dependency. Finally, an energy-expended function was developed to simulate heat generation with waste age as a function of temperature. Results are presented and compared to field data for the temperature-dependent growth-decay functions. The formulations developed can be used for prediction of temperatures within various components of landfill systems (liner, waste mass, cover, and surrounding subgrade), determination of frost depths, and determination of heat gain due to decomposition of wastes.
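A growth-decay heat-generation function with a temperature-dependency factor, of the kind described above, can be sketched as follows. All functional forms and parameter values here are illustrative placeholders, not those calibrated in the study.

```python
import math

def heat_generation(t_days, temp_c, q_peak=2.0, t_peak=600.0,
                    t_opt=40.0, spread=15.0):
    """Growth-decay heat generation for decomposing waste (W/m^3):
    a gamma-type pulse in waste age that rises, peaks at t_peak, and
    decays, scaled by a Gaussian temperature-dependency factor that
    peaks near an optimal decomposition temperature t_opt."""
    age = (t_days / t_peak) * math.exp(1.0 - t_days / t_peak)  # peaks at t_peak
    f_temp = math.exp(-((temp_c - t_opt) / spread) ** 2)       # in (0, 1]
    return q_peak * age * f_temp

q_at_peak = heat_generation(600.0, 40.0)    # peak age, optimal temperature
q_cold = heat_generation(600.0, 10.0)       # same age, cold waste
```

Scaling the source term by temperature in this way is what couples the heat generation back to the transient conduction solution, as in the study's third and fourth model formulations.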
Numerical Simulation of Screech Tones from Supersonic Jets: Physics and Prediction
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Zaman, Khairul Q. (Technical Monitor)
2002-01-01
The objectives of this project are to: (1) perform a numerical simulation of the jet screech phenomenon; and (2) use the data of the simulations to obtain a better understanding of the physics of jet screech. The original grant period was for three years. This was extended at no cost for an extra year to allow the principal investigator time to publish the results. We would like to report that our research work and results (supported by this grant) have fulfilled both objectives of the grant. The following is a summary of the important accomplishments: (1) We have now demonstrated that it is possible to perform accurate numerical simulations of the jet screech phenomenon. Both the axisymmetric case and the fully three-dimensional case were carried out successfully. It is worthwhile to note that this is the first time the screech tone phenomenon has been successfully simulated numerically; (2) All four screech modes were reproduced in the simulation. The computed screech frequencies and intensities were in good agreement with the NASA Langley Research Center data; (3) The staging phenomenon was reproduced in the simulation; (4) The effects of nozzle lip thickness and jet temperature were studied. Simulated tone frequencies at various nozzle lip thickness and jet temperature were found to agree well with experiments; (5) The simulated data were used to explain, for the first time, why there are two axisymmetric screech modes and two helical/flapping screech modes; (6) The simulated data were used to show that when two tones are observed, they co-exist rather than switching from one mode to the other, back and forth, as some previous investigators have suggested; and (7) Some resources of the grant were used to support the development of new computational aeroacoustics (CAA) methodology. (Our screech tone simulations have benefited because of the availability of these improved methods.)
An efficient numerical method for predicting the performance of valveless micropump
NASA Astrophysics Data System (ADS)
Braineard Eladi, Paul; Chatterjee, Dhiman; DasGupta, Amitava
2012-11-01
Numerical characterization of valveless micropumps involves fluid-structure interaction (FSI) between a membrane and the working fluid. FSI being computationally difficult, efforts have been mainly restricted to analyzing a given micropump performance. Designing an optimum micropump involves understanding the role of different geometric parameters and this forms the focus of the present work. It is shown that membrane displacement information extracted from a two-way coupled FSI simulation at a given frequency can be reliably used to carry out fluid flow simulations over a wide range of geometrical and operating parameters. The maximum variation between this approach and FSI is within 4% while there is a drastic reduction in computational time and resource. A micropump structure suitable for MEMS technology is considered in this work. An optimum micropump geometry, having a pump chamber height of 50 μm, diffuser length of 280 μm, throat width of 100 μm and separation distance between nozzle and diffuser openings of 2.5 mm, is recommended. The numerical prediction of flowrate at 200 Hz (68 μl min-1) for this pyramidal valveless micropump matches well with the experimental data (60 μl min-1) of the micropump fabricated using MEMS-based silicon micromachining. Thus an efficient numerical method to design valveless micropumps is proposed and validated through rigorous characterization.
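The rectification mechanism of a valveless nozzle/diffuser pump admits a classical first-order estimate, sketched below. This is a Stemme-type analysis used purely as an illustration; the coefficients are hypothetical and this is not the paper's FSI-based method.

```python
import math

def net_flowrate(stroke_volume_ul, freq_hz, k_diffuser, k_nozzle):
    """Net flow of a valveless micropump from unequal pressure-loss
    coefficients in the diffuser (k_diffuser) and nozzle (k_nozzle)
    directions: each pump cycle moves 2 * V_stroke * eta of fluid,
    where eta is the flow-rectification efficiency."""
    ratio = math.sqrt(k_nozzle / k_diffuser)
    eta = (ratio - 1.0) / (ratio + 1.0)       # 0 when both losses equal
    return 2.0 * stroke_volume_ul * freq_hz * eta   # ul/s

# illustrative numbers: 0.1 ul stroke at 200 Hz, nozzle loses 2x more
q = net_flowrate(0.1, 200.0, k_diffuser=1.0, k_nozzle=2.0)
```

The estimate makes the design trade-off visible: the geometry parameters studied in the paper (diffuser length, throat width, separation distance) all act through the loss-coefficient ratio and the achievable stroke volume.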
NASA Technical Reports Server (NTRS)
Thomas, P. D.
1979-01-01
The theoretical foundation and formulation of a numerical method for predicting the viscous flowfield in and about isolated three dimensional nozzles of geometrically complex configuration are presented. High Reynolds number turbulent flows are of primary interest for any combination of subsonic, transonic, and supersonic flow conditions inside or outside the nozzle. An alternating-direction implicit (ADI) numerical technique is employed to integrate the unsteady Navier-Stokes equations until an asymptotic steady-state solution is reached. Boundary conditions are computed with an implicit technique compatible with the ADI technique employed at interior points of the flow region. The equations are formulated and solved in a boundary-conforming curvilinear coordinate system. The curvilinear coordinate system and computational grid is generated numerically as the solution to an elliptic boundary value problem. A method is developed that automatically adjusts the elliptic system so that the interior grid spacing is controlled directly by the a priori selection of the grid spacing on the boundaries of the flow region.
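The elliptic grid-generation idea can be sketched in its simplest form, solving Laplace's equation for the physical coordinates with boundary points fixed. The paper's method additionally adjusts source terms in the elliptic system to control interior spacing; those control functions are omitted here.

```python
import numpy as np

def laplace_grid(xb, yb, iters=500):
    """Simplest elliptic grid generation: solve Laplace's equation for
    the physical coordinates x(xi, eta), y(xi, eta) on a uniform
    computational grid, boundary points held fixed, by Jacobi iteration."""
    x, y = xb.copy(), yb.copy()
    for _ in range(iters):
        for a in (x, y):
            a[1:-1, 1:-1] = 0.25 * (a[2:, 1:-1] + a[:-2, 1:-1] +
                                    a[1:-1, 2:] + a[1:-1, :-2])
    return x, y

# unit-square boundary; interior x deliberately scrambled as initial guess
n = 11
xi, eta = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
x0, y0 = xi.copy(), eta.copy()
x0[1:-1, 1:-1] = 0.0
x, y = laplace_grid(x0, y0)    # interior relaxes back to a smooth grid
```

The smoothness guaranteed by the elliptic operator is what makes such grids attractive for boundary-conforming curvilinear coordinates around complex nozzle shapes.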
Numerical Prediction of the Hypersonic Boundary-Layer Over a Row of Microcavities
NASA Astrophysics Data System (ADS)
Theofilis, Vassilios
2002-09-01
This report results from tasking Nu-Modeling, Inc. as follows: the contractor will perform detailed numerical predictions of the flowfield in the neighborhood of the microcavities that are embedded in wall coatings. The key deliverable of the proposed work will be the ability to put forward an improved integral condition to replace what is used in the current theoretical approach. This will be determined numerically at each of the parameters of the problem. The numerical effort required for the solution of the problem at a single value of each of the parameters involved limits the subset of the (M, Re, m, d, d/D, d/s) parameter space that can be investigated within the available year. It is intended to approximate existing analytic results of Fedorov first, at a single set of parameters, by imposing his pressure boundary condition at the lips of the microcavities (i.e. taking D=0). The effect of nonzero values of this parameter will then be examined, keeping all other parameters in the problem constant. Subsequently, the effects of d and s will be investigated, at constant D and 2(d+s). In all D ≠ 0 cases to be studied, integral boundary conditions will be provided to the parties involved in the project. Progress of the proposed research will be monitored by means of one intermediate and one final report.
Temperature Fields in Soft Tissue during LPUS Treatment: Numerical Prediction and Experiment Results
Kujawska, Tamara; Wojcik, Janusz; Nowicki, Andrzej
2010-03-09
Recent research has shown that beneficial therapeutic effects in soft tissues can be induced by low-power ultrasound (LPUS). For example, increased cell immunity to stress (among others, thermal stress) can be obtained through the enhanced heat shock protein (Hsp) expression induced by low-intensity ultrasound. The ability to control Hsp expression enhancement in soft tissues in vivo, stimulated by ultrasound, is a potential new therapeutic approach to neurodegenerative diseases that exploits the known capacity of cells to increase their immunity to stresses through enhanced Hsp expression. Controlling the Hsp expression enhancement by adjusting the exposure to ultrasound energy would allow the efficiency of ultrasound-mediated treatment to be evaluated and optimized. Ultrasonic regimes are controlled by adjusting the pulsed ultrasound waves' intensity, frequency, duration, duty cycle and exposure time. Our objective was to develop a numerical model capable of predicting, in space and time, the temperature fields induced by a circular focused transducer generating tone bursts in multilayer nonlinear attenuating media, and to compare the numerically calculated results with experimental data in vitro. The acoustic pressure field in multilayer biological media was calculated using our original numerical solver. For prediction of temperature fields, the Pennes bio-heat transfer equation was employed. Temperature field measurements in vitro were carried out in a fresh rat liver using a transducer of 15 mm diameter, 25 mm focal length and 2 MHz central frequency, generating tone bursts with the spatial-peak temporal-average acoustic intensity varied between 0.325 and 1.95 W/cm², duration varied from 20 to 500 cycles at the same 20% duty cycle, and exposure time varied up to 20 minutes. The measurement data were compared with numerical simulation results obtained under experimental boundary conditions. Good agreement between
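The Pennes bio-heat equation used for the temperature prediction can be sketched as a 1-D explicit finite-difference update. Tissue properties and the heating term below are illustrative placeholders, not the study's values.

```python
import numpy as np

def pennes_step(T, dx, dt, k=0.5, rho_c=3.6e6, w_cb=2000.0,
                t_arterial=37.0, Q=None):
    """One explicit finite-difference step of the 1-D Pennes bio-heat
    equation:  rho*c * dT/dt = k * d2T/dx2 + w_b*c_b * (T_a - T) + Q,
    with conduction, blood-perfusion cooling toward arterial
    temperature T_a, and a volumetric acoustic heating source Q."""
    if Q is None:
        Q = np.zeros_like(T)
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    Tn = T + dt / rho_c * (k * lap + w_cb * (t_arterial - T) + Q)
    Tn[0], Tn[-1] = T[0], T[-1]       # fixed-temperature boundaries
    return Tn

# focal heating at the middle of a 2 cm tissue slab, 100 s of exposure
x = np.linspace(0.0, 0.02, 101)
T = np.full_like(x, 37.0)
Q = 5e5 * np.exp(-((x - 0.01) / 0.002) ** 2)    # W/m^3, focal deposition
for _ in range(2000):
    T = pennes_step(T, dx=x[1] - x[0], dt=0.05, Q=Q)
```

In the full model the source Q comes from the separately computed nonlinear acoustic pressure field, which is what couples the two solvers described in the abstract.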
A Hemodynamic Prediction of an Intra-Aorta Pump Application in Vitro Using Numerical Analysis
NASA Astrophysics Data System (ADS)
Gao, Bin; Chen, Ningning; Chang, Yu
The Intra-Aorta Pump is a novel LVAD that assists the native heart without percutaneous drive-lines. It is emplaced between the radix aortae and the aortic arch to draw blood from the left ventricle to the aorta. To predict how pressure drop and blood flow change with pump speed, a nonlinear model has been built based on the structure and speed of the Intra-Aorta Pump. To do this, a nonlinear electric circuit for the pump has been developed. The model includes two speed-dependent current sources and a flow-dependent resistance to capture the relationship between the pressure drop across the pump and the flow through it as the pump speed changes. The pressure drop and blood flow are derived by solving differential equations with variable coefficients. The parameters of the model are determined by experiment, and the experimental results show that these parameters change distinctly with pump speed. The accuracy of the model is tested experimentally on a test loop. Comparison of the predictions derived from the model with the experimental data shows that the error is less than 15%, indicating that the model can predict the change of pressure drop and blood flow accurately.
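A static pump characteristic of the kind such a circuit model encodes can be sketched as follows. The quadratic form and all coefficients are illustrative placeholders, not the identified model parameters.

```python
def pump_pressure_head(omega_rpm, q_lpm, a=1.2e-6, b=0.08):
    """Illustrative static characteristic of a rotary blood pump:
    pressure head (mmHg) rises with the square of impeller speed
    (speed-dependent source) and falls with the square of flow
    (flow-dependent resistance)."""
    return a * omega_rpm**2 - b * q_lpm**2

h_shutoff = pump_pressure_head(10000.0, 0.0)   # zero-flow head
h_loaded = pump_pressure_head(10000.0, 5.0)    # head at 5 L/min
```

In the paper's model the coefficients themselves vary with pump speed, which is why the governing differential equations have variable coefficients rather than the fixed ones used here.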
NASA Astrophysics Data System (ADS)
Lenarcic, M.; Bauer, Ch.; Giese, M.; Jung, A.
2016-11-01
The prediction of characteristics and flow phenomena in reversible pump-turbines becomes increasingly important, since operations under off-design conditions are required to respond to frequency fluctuations within the electrical grid as fast as possible. Fulfilling the requirements of a stable and reliable operation under continuously expanding operating ranges challenges the hydraulic design and requires ambitious developments. Beyond that, precise estimations of occurring flow phenomena, combined with a detailed understanding of their causes and mechanisms, are essential. This study aims at predicting the S-shaped characteristics of two reversible pump-turbines by using different numerical approaches. Therefore, measurements at a constant wicket-gate opening of Δγ = 10° were performed. Based on these experimental data, unsteady flow simulations are performed under steady and transient operating conditions respectively: starting from the best efficiency point in generating mode, through the runaway, along the S-curve, down to operation in reverse pump mode. The hydraulic machines are spatially discretized in model size with near-wall refinements of y+mean ≤ 5 and y+mean ≥ 30. The application of two different solvers discloses deviations in underlying methods. The turbulence modeling is basically executed by the k-ω-SST and the standard k-ɛ model. Focusing on higher-order numerics, the Explicit Algebraic Reynolds Stress Model (EARSM) is selected in the commercial code and extended with an approach for curvature correction (EARSM-CC). In the open-source software, the four-equation v2-f model assumes the role of higher-order numerics. The temporal discretization errors are observed using three different time-step sizes. As a supplement, experimental data obtained from the HydroDyna pump-turbine are used as additional validation, providing integral quantities and local pressure distributions at an operating point set on the S-curve. To sum this work up, a
NASA Astrophysics Data System (ADS)
Williams, Kevin Vaughan
Rapid growth in the use of composite materials in structural applications drives the need for a more detailed understanding of damage-tolerant and damage-resistant design. Current analytical techniques provide sufficient understanding and predictive capability for application in preliminary design, but numerical models applicable to composites are few and far between, and their development into well-tested, rigorous material models is one of the most challenging fields in composite materials. The present work focuses on the development, implementation, and verification of a plane-stress continuum damage mechanics based model for composite materials. A physical treatment of damage growth based on the extensive body of experimental literature on the subject is combined with the mathematical rigour of a continuum damage mechanics description to form the foundation of the model. The model has been implemented in the LS-DYNA3D commercial finite element hydrocode, and the results of its application are shown to be physically meaningful and accurate. Furthermore, it is demonstrated that the material characterization parameters can be extracted from the results of standard test methodologies for which a large body of published data already exists for many materials. Two case studies are undertaken to verify the model by comparison with measured experimental data. The first series of analyses demonstrates the ability of the model to predict the extent and growth of damage in T800/3900-2 carbon fibre reinforced polymer (CFRP) plates subjected to normal impacts over a range of impact energy levels. The predicted force-time and force-displacement responses of the panels compare well with experimental measurements. The damage growth and stiffness reduction properties of the T800/3900-2 CFRP are derived using published data from a variety of sources without the need for parametric studies. To further demonstrate the physical nature of the model, a IM6
Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve
2015-01-01
For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often lack up-to-date data on the health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and updated in a timely manner through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separate validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease. PMID:25938675
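The state-level regression idea above can be sketched in a few lines. This is a hedged illustration with entirely synthetic data, not the study's actual model, features, or outcomes; the feature count, train/validation split, and noise level are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 50 states x 8 socio-demographic features
# (e.g. income, age structure), and one NCD prevalence outcome.
X = rng.normal(size=(50, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.3 * rng.normal(size=50)   # synthetic "prevalence"

# Hold out 10 states as a separate validation sample, as in the study.
train, test = np.arange(40), np.arange(40, 50)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(train)), X[train]])
w, *_ = np.linalg.lstsq(A, y[train], rcond=None)

# Predict for the held-out states and measure out-of-sample correlation.
pred = np.column_stack([np.ones(len(test)), X[test]]) @ w
r = np.corrcoef(pred, y[test])[0, 1]
print(r > 0.9)
```

The out-of-sample correlation is high here only because the synthetic outcome is nearly linear in the features; real prevalence data would behave less cleanly.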
A Maximal Graded Exercise Test to Accurately Predict VO2max in 18-65-Year-Old Adults
ERIC Educational Resources Information Center
George, James D.; Bradshaw, Danielle I.; Hyde, Annette; Vehrs, Pat R.; Hager, Ronald L.; Yanowitz, Frank G.
2007-01-01
The purpose of this study was to develop an age-generalized regression model to predict maximal oxygen uptake (VO2max) based on a maximal treadmill graded exercise test (GXT; George, 1996). Participants (N = 100), ages 18-65 years, reached a maximal level of exertion (mean plus or minus standard deviation [SD]; maximal heart rate [HR…
Karwath, Andreas; Clare, Amanda; Dehaspe, Luc
2000-01-01
The analysis of genomics data needs to become as automated as its generation. Here we present a novel data-mining approach to predicting protein functional class from sequence. This method is based on a combination of inductive logic programming clustering and rule learning. We demonstrate the effectiveness of this approach on the M. tuberculosis and E. coli genomes, and identify biologically interpretable rules which predict protein functional class from information only available from the sequence. These rules predict 65% of the ORFs with no assigned function in M. tuberculosis and 24% of those in E. coli, with an estimated accuracy of 60–80% (depending on the level of functional assignment). The rules are founded on a combination of detection of remote homology, convergent evolution and horizontal gene transfer. We identify rules that predict protein functional class even in the absence of detectable sequence or structural homology. These rules give insight into the evolutionary history of M. tuberculosis and E. coli. PMID:11119305
Danner, Holger; Desurmont, Gaylord A; Cristescu, Simona M; van Dam, Nicole M
2017-01-30
Herbivore-induced plant volatiles (HIPVs) serve as specific cues to higher trophic levels. Novel, exotic herbivores entering native foodwebs may disrupt the infochemical network as a result of changes in HIPV profiles. Here, we analysed HIPV blends of native Brassica rapa plants infested with one of 10 herbivore species with different coexistence histories, diet breadths and feeding modes. Partial least squares (PLS) models were fitted to assess whether HIPV blends emitted by Dutch B. rapa differ between native and exotic herbivores, between specialists and generalists, and between piercing-sucking and chewing herbivores. These models were used to predict the status of two additional herbivores. We found that HIPV blends predicted the evolutionary history, diet breadth and feeding mode of the herbivore with an accuracy of 80% or higher. Based on the HIPVs, the PLS models reliably predicted that Trichoplusia ni and Spodoptera exigua are perceived as exotic, leaf-chewing generalists by Dutch B. rapa plants. These results indicate that there are consistent and predictable differences in HIPV blends depending on global herbivore characteristics, including coexistence history. Consequently, native organisms may be able to rapidly adapt to potentially disruptive effects of exotic herbivores on the infochemical network.
Gusenleitner, Daniel; Auerbach, Scott S.; Melia, Tisha; Gómez, Harold F.; Sherr, David H.; Monti, Stefano
2014-01-01
Background Despite an overall decrease in incidence of and mortality from cancer, about 40% of Americans will be diagnosed with the disease in their lifetime, and around 20% will die of it. Current approaches to test carcinogenic chemicals adopt the 2-year rodent bioassay, which is costly and time-consuming. As a result, fewer than 2% of the chemicals on the market have actually been tested. However, evidence accumulated to date suggests that gene expression profiles from model organisms exposed to chemical compounds reflect underlying mechanisms of action, and that these toxicogenomic models could be used in the prediction of chemical carcinogenicity. Results In this study, we used a rat-based microarray dataset from the NTP DrugMatrix Database to test the ability of toxicogenomics to model carcinogenicity. We analyzed 1,221 gene-expression profiles obtained from rats treated with 127 well-characterized compounds, including genotoxic and non-genotoxic carcinogens. We built a classifier that predicts a chemical's carcinogenic potential with an AUC of 0.78, and validated it on an independent dataset from the Japanese Toxicogenomics Project consisting of 2,065 profiles from 72 compounds. Finally, we identified differentially expressed genes associated with chemical carcinogenesis, and developed novel data-driven approaches for the molecular characterization of the response to chemical stressors. Conclusion Here, we validate a toxicogenomic approach to predict carcinogenicity and provide strong evidence that, with a larger set of compounds, we should be able to improve the sensitivity and specificity of the predictions. We found that the prediction of carcinogenicity is tissue-dependent and that the results also confirm and expand upon previous studies implicating DNA damage, the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and regenerative pathology in the response to carcinogen exposure. PMID:25058030
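Classifier quality in toxicogenomic studies like the one above is summarized by the area under the ROC curve (AUC). A minimal sketch of how an AUC of the kind reported (0.78) is computed from classifier scores and labels, using the rank-sum (Mann-Whitney) formulation; the toy data are illustrative, not the study's:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores, kind="mergesort")
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # ranks start at 1
    n_pos, n_neg = labels.sum(), (~labels).sum()
    # Sum of positive ranks, minus the minimum possible sum, over all pairs.
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy check: carcinogen (1) vs non-carcinogen (0) with classifier scores;
# exactly one positive/negative pair is misranked, so AUC = 8/9.
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(y, s))  # → 0.888...
```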
A Case Study of the Impact of AIRS Temperature Retrievals on Numerical Weather Prediction
NASA Technical Reports Server (NTRS)
Reale, O.; Atlas, R.; Jusem, J. C.
2004-01-01
Large errors in numerical weather prediction are often associated with explosive cyclogenesis. Most studies focus on the under-forecasting error, i.e. cases of rapidly developing cyclones which are poorly predicted in numerical models. However, the over-forecasting error (i.e., predicting an explosively developing cyclone which does not occur in reality) is a very common error that severely impacts the forecasting skill of all models and may also carry economic costs if associated with operational forecasting. Unnecessary precautions taken by marine activities can result in severe economic loss. Moreover, frequent over-forecasting can undermine confidence in operational weather forecasting. Therefore, it is important to understand and reduce predictions of extreme weather associated with explosive cyclones which do not actually develop. In this study we choose a very prominent case of over-forecasting error in the northwestern Pacific. A 960 hPa cyclone develops in less than 24 hours in the 5-day forecast, with a deepening rate of about 30 hPa in one day. The cyclone is not present in the analyses and is thus a case of severe over-forecasting. By assimilating AIRS data, the error is largely eliminated. By following the propagation of the anomaly that generates the spurious cyclone, it is found that a small mid-tropospheric geopotential height negative anomaly over the northern part of the Indian subcontinent in the initial conditions propagates westward, is amplified by orography, and generates a very intense jet streak in the subtropical jet stream, with consequent explosive cyclogenesis over the Pacific. The AIRS assimilation eliminates this anomaly, which may have been caused by erroneous upper-air data, and represents the jet stream more correctly. The energy associated with the jet is distributed over a much broader area, and as a consequence a multiple, but much more moderate, cyclogenesis is observed.
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Chou, Shih-Hung; Jedlovec, Gary
2012-01-01
Improvements to global and regional numerical weather prediction (NWP) have been demonstrated through assimilation of data from NASA's Atmospheric Infrared Sounder (AIRS). Current operational data assimilation systems use AIRS radiances, but impact on regional forecasts has been much smaller than for global forecasts. Retrieved profiles from AIRS contain much of the information that is contained in the radiances and may be able to reveal reasons for this reduced impact. Assimilating AIRS retrieved profiles in an identical analysis configuration to the radiances, tracking the quantity and quality of the assimilated data in each technique, and examining analysis increments and forecast impact from each data type can yield clues as to the reasons for the reduced impact. By doing this with regional-scale models, individual synoptic features (and the impact of AIRS on these features) can be more easily tracked. This project examines the assimilation of hyperspectral sounder data used in operational numerical weather prediction by comparing operational techniques used for AIRS radiances and research techniques used for AIRS retrieved profiles. Parallel versions of a configuration of the Weather Research and Forecasting (WRF) model with Gridpoint Statistical Interpolation (GSI) that mimics the analysis methodology, domain, and observational datasets for the regional North American Mesoscale (NAM) model run at the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC) are run to examine the impact of each type of AIRS data set. The first configuration will assimilate the AIRS radiance data along with other conventional and satellite data using techniques implemented within the operational system; the second configuration will assimilate AIRS retrieved profiles instead of AIRS radiances in the same manner. Preliminary results of this study will be presented and focus on the analysis impact of the radiances and profiles for selected cases.
NASA Astrophysics Data System (ADS)
Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning
2016-02-01
Topography and biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables, and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
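The POD-based mapping described above can be sketched with plain linear algebra: build POD bases from paired coarse/fine training snapshots, fit a linear map between their coefficients, then downscale a new coarse solution through that map. Everything below is synthetic, and the snapshot counts, grid sizes, and mode count are arbitrary assumptions; this is not the PODMM implementation itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired training snapshots (rows = runs): a coarse field
# (50 cells) and its matching fine field (500 cells), generated from a
# shared low-dimensional latent state so the two are consistent.
n_snap, n_coarse, n_fine, r_true = 30, 50, 500, 8
latent = rng.normal(size=(n_snap, r_true))
Gc = rng.normal(size=(r_true, n_coarse))
Gf = rng.normal(size=(r_true, n_fine))
coarse, fine = latent @ Gc, latent @ Gf

# POD bases from the training snapshots via SVD.
k = 10                                   # retained modes (assumption)
_, _, Vc = np.linalg.svd(coarse, full_matrices=False)
_, _, Vf = np.linalg.svd(fine, full_matrices=False)
Pc, Pf = Vc[:k].T, Vf[:k].T              # coarse / fine POD bases

# The "mapping" step: a linear map between coarse and fine POD
# coefficients, fitted by least squares on the training pairs.
ac, af = coarse @ Pc, fine @ Pf
M, *_ = np.linalg.lstsq(ac, af, rcond=None)

# Downscale a new coarse solution and check the reconstruction error.
z = rng.normal(size=r_true)
c_new, f_true = z @ Gc, z @ Gf
f_approx = (c_new @ Pc) @ M @ Pf.T
err = np.linalg.norm(f_approx - f_true) / np.linalg.norm(f_true)
print(err < 1e-6)  # → True: the latent dynamics are fully captured
```

The error is tiny here only because the synthetic fine field is exactly linear in the latent state; real watershed fields would leave a residual that the paper's error estimator is designed to quantify.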
2014-01-01
Background Locating the protein-coding genes in novel genomes is essential to understanding and exploiting the genomic information but it is still difficult to accurately predict all the genes. The recent availability of detailed information about transcript structure from high-throughput sequencing of messenger RNA (RNA-Seq) delineates many expressed genes and promises increased accuracy in gene prediction. Computational gene predictors have been intensively developed for and tested in well-studied animal genomes. Hundreds of fungal genomes are now or will soon be sequenced. The differences of fungal genomes from animal genomes and the phylogenetic sparsity of well-studied fungi call for gene-prediction tools tailored to them. Results SnowyOwl is a new gene prediction pipeline that uses RNA-Seq data to train and provide hints for the generation of Hidden Markov Model (HMM)-based gene predictions and to evaluate the resulting models. The pipeline has been developed and streamlined by comparing its predictions to manually curated gene models in three fungal genomes and validated against the high-quality gene annotation of Neurospora crassa; SnowyOwl predicted N. crassa genes with 83% sensitivity and 65% specificity. SnowyOwl gains sensitivity by repeatedly running the HMM gene predictor Augustus with varied input parameters and selectivity by choosing the models with best homology to known proteins and best agreement with the RNA-Seq data. Conclusions SnowyOwl efficiently uses RNA-Seq data to produce accurate gene models in both well-studied and novel fungal genomes. The source code for the SnowyOwl pipeline (in Python) and a web interface (in PHP) is freely available from http://sourceforge.net/projects/snowyowl/. PMID:24980894
Barone, Veronica; Hod, Oded; Peralta, Juan E; Scuseria, Gustavo E
2011-04-19
Over the last several years, low-dimensional graphene derivatives, such as carbon nanotubes and graphene nanoribbons, have played a central role in the pursuit of a plausible carbon-based nanotechnology. Their electronic properties can be either metallic or semiconducting depending purely on morphology, but predicting their electronic behavior has proven challenging. The combination of experimental efforts with modeling of these nanometer-scale structures has been instrumental in gaining insight into their physical and chemical properties and the processes involved at these scales. Particularly, approximations based on density functional theory have emerged as a successful computational tool for predicting the electronic structure of these materials. In this Account, we review our efforts in modeling graphitic nanostructures from first principles with hybrid density functionals, namely the Heyd-Scuseria-Ernzerhof (HSE) screened exchange hybrid and the hybrid meta-generalized functional of Tao, Perdew, Staroverov, and Scuseria (TPSSh). These functionals provide a powerful tool for quantitatively studying structure-property relations and the effects of external perturbations such as chemical substitutions, electric and magnetic fields, and mechanical deformations on the electronic and magnetic properties of these low-dimensional carbon materials. We show how HSE and TPSSh successfully predict the electronic properties of these materials, providing a good description of their band structure and density of states, their work function, and their magnetic ordering in the cases in which magnetism arises. Moreover, these approximations are capable of successfully predicting optical transitions (first and higher order) in both metallic and semiconducting single-walled carbon nanotubes of various chiralities and diameters with impressive accuracy. This versatility includes the correct prediction of the trigonal warping splitting in metallic nanotubes. The results predicted
NASA Astrophysics Data System (ADS)
Peng, Xindong; Che, Yuzhang; Chang, Jun
2013-08-01
Using the concept of anomaly integration and historical climate data, we have developed a novel operational framework to implement deterministic numerical weather prediction within 15 days. Real-case validation shows pronounced improvements in the forecasts of global geopotential heights in 20 out of 30 cases with the Community Atmosphere Model version 3.0. Seven other cases are marginally improved, and only three deteriorate; even these are ameliorated within the first-week period. The average of the 30 cases shows an obvious increase in anomaly correlation coefficient (ACC) and a decrease in root mean square error (RMSE) of the geopotential height over global, hemispherical, and tropical zones. Significant amelioration of the tropical circulation is displayed within the first-week prediction. The forecasting skill is extended by 0.6 day in terms of the number of days with ACC greater than 0.6 for the 30-case-averaged 500 hPa geopotential height on the global scale. The 30-case mean ACC and RMSE of 500 hPa temperature show increments of 0.2 and -1.6 K, respectively, in the first-week prediction. In the case of January 2008, a much more reasonable horizontal distribution and vertical structure are achieved in the bias-corrected model geopotential height, temperature, relative humidity, and horizontal wind components in comparison to reanalysis data. In spite of the need for additional storage of historical modeling data, the new method does not increase computational costs and is therefore suitable for routine application.
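The anomaly idea above can be illustrated with a toy field: carry the model's predicted anomaly, but reference it to the observed climatology rather than the model's own biased mean state, which removes the systematic bias while preserving the predicted weather signal. All fields below are synthetic and purely illustrative of this one step, not of the full 15-day framework:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 500 hPa geopotential height fields on a 45x90 grid:
# observed climatology, a systematically biased model climatology,
# and a raw model forecast.
nlat, nlon = 45, 90
obs_clim = 5500.0 + 50.0 * rng.normal(size=(nlat, nlon))
model_bias = 20.0 * rng.normal(size=(nlat, nlon))
model_clim = obs_clim + model_bias

signal = 30.0 * rng.normal(size=(nlat, nlon))   # the "weather" anomaly
raw_forecast = model_clim + signal              # model carries its bias
truth = obs_clim + signal

# Anomaly correction: model anomaly + observed climatology.
corrected = obs_clim + (raw_forecast - model_clim)

rmse_raw = np.sqrt(np.mean((raw_forecast - truth) ** 2))
rmse_cor = np.sqrt(np.mean((corrected - truth) ** 2))
print(rmse_cor < rmse_raw)  # → True: bias removed, anomaly preserved
```

In this idealized setup the correction is exact because the bias is purely additive and stationary; in practice the bias is flow-dependent, which is why the real framework integrates anomalies rather than post-correcting fields.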
Numerical prediction of transition of the F-16 wing at supersonic speeds
NASA Technical Reports Server (NTRS)
Cummings, Russell M.; Garcia, Joseph A.
1993-01-01
A parametric study is being conducted in an effort to numerically predict the extent of natural laminar flow (NLF) on finite swept wings at supersonic speeds. This study is one aspect of a High Speed Research Program (HSRP) to gain an understanding of the technical requirements for high-speed aircraft flight. The parameters addressed in this study are Reynolds number, angle of attack, and leading-edge wing sweep. These parameters were analyzed through the use of an advanced Computational Fluid Dynamics (CFD) flow solver, specifically the ARC 3-D Compressible Navier-Stokes (CNS) flow solver. From the CNS code, pressure coefficients (Cp) are obtained for the various cases. These Cp values are then used to compute the boundary-layer profiles with the Kaups and Cebeci compressible 2-D boundary-layer code. Finally, the boundary-layer parameters are processed by a 3-D compressible boundary-layer stability code (COSAL) to predict transition. The parametric study consisted of four geometries, which addressed the effects of sweep, at three angles of attack from zero to ten degrees, yielding a total of 12 cases. The above process was substantially automated through a procedure developed in the work conducted under this study. This automation procedure yields a 3-D graphical measure of the extent of laminar flow by predicting the location of transition from laminar to turbulent flow.
Development of a 3D numerical methodology for fast prediction of gun blast induced loading
NASA Astrophysics Data System (ADS)
Costa, E.; Lagasco, F.
2014-05-01
In this paper, the development of a methodology based on semi-empirical models from the literature to carry out 3D prediction of pressure loading on surfaces adjacent to a weapon system during firing is presented. This loading results from the impact of the blast wave generated by the projectile exiting the muzzle bore. When a pressure threshold level is exceeded, the loading can induce unwanted damage to nearby hard structures as well as to frangible panels or electronic equipment. The implemented model is able to quickly predict the distribution of blast wave parameters over three-dimensional complex geometry surfaces when the weapon design and emplacement data as well as the propellant and projectile characteristics are available. Considering these capabilities, the proposed methodology is envisaged for use in the preliminary design phase of the combat system to predict adverse effects and to identify the most appropriate countermeasures. By providing a preliminary but sensitive estimate of the operative environmental loading, this numerical tool represents a good alternative to more powerful, but time-consuming, advanced computational fluid dynamics tools, whose use can thus be limited to the final phase of the design.
Coyle, Whitney L; Guillemain, Philippe; Kergomard, Jean; Dalmont, Jean-Pierre
2015-11-01
When designing a wind instrument such as a clarinet, it can be useful to be able to predict the playing frequencies. This paper presents an analytical method to deduce these playing frequencies using the input impedance curve. Specifically there are two control parameters that have a significant influence on the playing frequency, the blowing pressure and reed opening. Four effects are known to alter the playing frequency and are examined separately: the flow rate due to the reed motion, the reed dynamics, the inharmonicity of the resonator, and the temperature gradient within the clarinet. The resulting playing frequencies for the first register of a particular professional level clarinet are found using the analytical formulas presented in this paper. The analytical predictions are then compared to numerically simulated results to validate the prediction accuracy. The main conclusion is that in general the playing frequency decreases above the oscillation threshold because of inharmonicity, then increases above the beating reed regime threshold because of the decrease of the flow rate effect.
2013-08-01
[Abstract text garbled by extraction; recoverable fragments: Figure 20 shows an ABQ experiment with five volunteers located 1.0 m from the source (upper-left panel); a study (Royster et al., 1996) in which users self-fit hearing protectors (ANSI S12.6-2008 method B: user fit) with no experimenter instruction; Figure 22 (upper panel) shows the simulator prediction and simulator fits for the intact and modified muffs.]
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2015-06-01
Prediction of the bound configuration of small-molecule ligands that differ substantially from the cognate ligand of a protein co-crystal structure is much more challenging than re-docking the cognate ligand. Success rates for cross-docking in the range of 20-30 % are common. We present an approach that uses structural information known prior to a particular cutoff-date to make predictions on ligands whose bound structures were determined later. The knowledge-guided docking protocol was tested on a set of ten protein targets using a total of 949 ligands. The benchmark data set, called PINC ("PINC Is Not Cognate"), is publicly available. Protein pocket similarity was used to choose representative structures for ensemble-docking. The docking protocol made use of known ligand poses prior to the cutoff-date, both to help guide the configurational search and to adjust the rank of predicted poses. Overall, the top-scoring pose family was correct over 60 % of the time, with the top-two pose families approaching a 75 % success rate. Correct poses among all those predicted were identified nearly 90 % of the time. The largest improvements came from the use of molecular similarity to improve ligand pose rankings and the strategy for identifying representative protein structures. With the exception of a single outlier target, the knowledge-guided docking protocol produced results matching the quality of cognate-ligand re-docking, but it did so on a very challenging temporally-segregated cross-docking benchmark.
Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong
2016-01-01
Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18 and 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations in chromosome Y measurement and high sensitivity to randomly mapped reads, may result in higher false negative rates (FNR) and false positive rates (FPR) in fetal sex prediction as well as in SCA detection. Here, we developed an optimized method that improves the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively, and works robustly at low fetal DNA concentrations (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large-scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing. PMID:27441628
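The filtering step described above reduces, in essence, to discarding chrY reads whose mapping positions fall inside a fixed set of intervals. A minimal sketch of such interval filtering; the coordinates below are made up for illustration, since the six real regions are not given here:

```python
from bisect import bisect_right

# Hypothetical exclusion regions on chromosome Y as sorted (start, end)
# half-open intervals; stand-ins for the paper's six specific regions.
REGIONS = [(100_000, 150_000), (300_000, 420_000), (900_000, 950_000)]
starts = [s for s, _ in REGIONS]

def in_filtered_region(pos):
    """True if a read's mapping position falls in an excluded region."""
    i = bisect_right(starts, pos) - 1       # rightmost region starting <= pos
    return i >= 0 and pos < REGIONS[i][1]

# Keep only chrY reads that fall outside the excluded regions;
# reads on other chromosomes pass through untouched.
reads = [("chrY", 120_000), ("chrY", 200_000), ("chr21", 5_000), ("chrY", 910_000)]
kept = [(c, p) for c, p in reads if c != "chrY" or not in_filtered_region(p)]
print(kept)  # → [('chrY', 200000), ('chr21', 5000)]
```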
Baldassi, Carlo; Zamparo, Marco; Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea
2014-01-01
In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to that achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.
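The Gaussian variant is attractive precisely because inference then reduces to inverting an empirical covariance matrix: off-diagonal entries of the resulting precision matrix score direct couplings between alignment positions. A toy sketch of that core step with continuous stand-in variables and one planted coupling; this is not the authors' code, and the ridge term and data sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Continuous Gaussian stand-ins for encoded alignment columns, as in the
# multivariate Gaussian variant: rows are sequences, columns are positions.
n_seq, n_pos = 2000, 12
data = rng.normal(size=(n_seq, n_pos))
data[:, 3] += 0.9 * data[:, 7]          # plant one direct coupling (3 <-> 7)

# Empirical covariance (small ridge for invertibility), then the
# precision matrix, whose off-diagonal magnitudes score direct couplings.
cov = np.cov(data, rowvar=False) + 0.01 * np.eye(n_pos)
precision = np.linalg.inv(cov)

scores = np.abs(precision)
np.fill_diagonal(scores, 0.0)           # ignore self-couplings
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print(sorted((i, j)))                   # the planted pair scores highest
```

Unlike raw covariance, the precision matrix distinguishes direct couplings from correlations mediated through third positions, which is the central point of direct-coupling analysis.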
Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming
2015-12-01
Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets.
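The ACC transformation above turns a variable-length PSSM into a fixed-length feature vector by correlating mean-centred profile columns at fixed sequence separations. A minimal sketch follows, with the maximum lag as an illustrative parameter rather than the value used in the study.

```python
import numpy as np

def acc_features(pssm, max_lag=2):
    """Auto cross covariance (ACC) features of a PSSM profile.
    `pssm` is an (L, 20) array, one row per residue. For each lag g and
    each column pair (j1, j2), the feature is the average product of
    mean-centred scores at positions i and i+g."""
    L, D = pssm.shape
    mean = pssm.mean(axis=0)
    feats = []
    for g in range(1, max_lag + 1):
        A = pssm[:L - g] - mean   # scores at positions i
        B = pssm[g:] - mean       # scores at positions i + g
        # (L-g, D, 1) * (L-g, 1, D) -> averaged (D, D) covariance block per lag
        feats.append((A[:, :, None] * B[:, None, :]).mean(axis=0).ravel())
    return np.concatenate(feats)  # length: max_lag * D * D
```

The resulting fixed-length vector is what would then be ranked by SVM-RFE and fed to the SVM classifier described above.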
Eguchi, K; Hoshide, S; Shimada, K; Kario, K
2014-12-01
We tested the hypothesis that multiple clinic blood pressure (BP) readings over an extended baseline period would be as predictive as ambulatory BP (ABP) for cardiovascular disease (CVD). Clinic and ABP monitoring were performed in 457 hypertensive patients at baseline. Clinic BP was measured monthly, and the means of the first 3, 5 and 10 clinic BP readings were taken as the multiple clinic BP readings. The subjects were followed up, and stroke, HARD CVD, and ALL CVD events were determined as outcomes. In multivariate Cox regression analyses, ambulatory systolic BP (SBP) best predicted all three outcomes independently of baseline and multiple clinic SBP readings. The mean of 10 clinic SBP readings predicted stroke (hazard ratio (HR)=1.39, 95% confidence interval (CI)=1.02-1.90, P=0.04) and ALL CVD (HR=1.41, 95% CI=1.13-1.74, P=0.002) independently of baseline clinic SBP. Clinic SBPs by three and five readings were not associated with any CVD events, except that clinic SBP by three readings was associated with ALL CVD (P=0.015). Besides ABP values, the mean of the first 10 clinic SBP values was a significant predictor of stroke and ALL CVD events. It is important to take multiple clinic BP readings soon after the baseline period for the risk stratification of future CVD events.
Pearce, K L; Ferguson, M; Gardner, G; Smith, N; Greef, J; Pethick, D W
2009-01-01
Fifty merino wethers (liveweight range 44 to 81 kg, average 58.6 kg) were lot fed for 42 d and scanned by dual X-ray absorptiometry (DXA) both as live animals and as whole carcasses (carcass weight range 15 to 32 kg, average 22.9 kg), producing measures of total tissue, lean, fat and bone content. The carcasses were subsequently boned out into saleable cuts and the weights and yield of boned out muscle, fat and bone recorded. Chemical lean (protein+water) was highly correlated with DXA carcass lean (r(2)=0.90, RSD=0.674 kg) and moderately with DXA live lean (r(2)=0.72, RSD=1.05 kg). Chemical fat was moderately correlated with DXA carcass fat (r(2)=0.86, RSD=0.42 kg) and DXA live fat (r(2)=0.70, RSD=0.71 kg). DXA carcass and live animal bone were not well correlated with chemical ash (both r(2)=0.38, RSD=0.3). DXA carcass lean was moderately well predicted from DXA live lean with the inclusion of bodyweight in the regression (r(2)=0.82, RSD=0.87 kg). DXA carcass fat was well predicted from DXA live fat (r(2)=0.86, RSD=0.54 kg). DXA carcass lean and DXA carcass fat, with the inclusion of carcass weight in the regression, significantly predicted boned out muscle (r(2)=0.97, RSD=0.32 kg) and fat weight, respectively (r(2)=0.92, RSD=0.34 kg). Prediction of boned out muscle (r(2)=0.83, RSD=0.75 kg) and fat (r(2)=0.86, RSD=0.46 kg) weight from DXA live lean and DXA live fat with the inclusion of bodyweight was moderate. Prediction of boned out muscle and fat yield from DXA carcass and live measures was not as strong as for weight. The future of DXA will lie in the determination of body composition in live animals and carcasses in research experiments, but there is also potential for DXA to be used as an online carcass grading system.
Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond
2015-01-01
activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.
Numerical Weather Prediction Models on Linux Boxes as tools in meteorological education in Hungary
NASA Astrophysics Data System (ADS)
Gyongyosi, A. Z.; Andre, K.; Salavec, P.; Horanyi, A.; Szepszo, G.; Mille, M.; Tasnadi, P.; Weidiger, T.
2012-04-01
Education of Meteorologists in Hungary - according to the Bologna Process - has three stages: BSc, MSc and PhD, and students graduating at each stage receive the respective degree. The three-year BSc course in Meteorology can be chosen by undergraduate students in the fields of Geosciences, Environmental Sciences and Physics. Fundamentals in Mathematics (Calculus), (General and Theoretical) Physics and Informatics are emphasized during their elementary education. The two-year MSc course - to which about 15 to 25 students are admitted each year - can be studied only at the Eötvös Loránd University in our country. Our aim is to give a basic education in all fields of Meteorology: Climatology, Atmospheric Physics, Atmospheric Chemistry, Dynamic and Synoptic Meteorology, Numerical Weather Prediction, Modeling of Surface-atmosphere Interactions and Climate Change. Education is performed in two branches: Climate Researcher and Forecaster.
Experimental and numerical life prediction of thermally cycled thermal barrier coatings
NASA Astrophysics Data System (ADS)
Liu, Y.; Persson, C.; Wigren, J.
2004-09-01
This article addresses the predominant degradation modes and life prediction of a plasma-sprayed thermal barrier coating (TBC). The studied TBC system consists of an air-plasma-sprayed bond coat and an air-plasma-sprayed, yttria partially stabilized zirconia top layer on a conventional Hastelloy X substrate. Thermal shock tests of as-sprayed TBC and pre-oxidized TBC specimens were conducted under different burner flame conditions at Volvo Aero Corporation (Trollhättan, Sweden). Finite element models were used to simulate the thermal shock tests. Transient temperature distributions and thermal mismatch stresses in different layers of the coatings during thermal cycling were calculated. The roughness of the interface between the ceramic top coat and the bond coat was modeled through an ideally sinusoidal wavy surface. Bond coat oxidation was simulated through adding an aluminum oxide layer between the ceramic top coat and the bond coat. The calculated stresses indicated that interfacial delamination cracks, initiated in the ceramic top coat at the peak of the asperity of the interface, together with surface cracking, are the main reasons for coating failure. A phenomenological life prediction model for the coating was proposed. This model is accurate within a factor of 3.
Zhu, Hongjun; Feng, Guang; Wang, Qijun
2014-01-01
Accurate prediction of erosion thickness is essential for pipe engineering. The objective of the present paper is to study the temperature distribution in an eroded bend pipe and to find a new method to predict the erosion-reduced thickness. Computational fluid dynamics (CFD) simulations with FLUENT software are carried out to investigate the temperature field, and the effects of oil inlet rate, oil inlet temperature, and erosion-reduced thickness are examined. The presence of an erosion pit brings about an obvious fluctuation of the temperature drop along the extrados of the bend, and the minimum temperature drop occurs at the most severe erosion point. A low inlet temperature or a large inlet velocity leads to a small temperature drop, while a shallow erosion pit causes a great temperature drop. The dimensionless minimum temperature drop is analyzed and a fitting formula is obtained. Using this formula, the erosion-reduced thickness can be calculated by monitoring only the outer surface temperature of the bend pipe. This new method can provide useful guidance for pipeline monitoring and replacement.
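The final step, recovering erosion depth by inverting a fitted relation between the dimensionless minimum temperature drop and the reduced thickness, can be sketched as follows. The linear model form here is an assumption standing in for the paper's actual fitting formula.

```python
import numpy as np

def fit_drop_model(thickness, theta_min):
    """Least-squares fit of an assumed linear model
    theta_min = a + b * delta, relating the dimensionless minimum
    temperature drop to erosion depth delta. The linear form is an
    illustrative stand-in for the paper's fitted formula."""
    A = np.vstack([np.ones_like(thickness), thickness]).T
    (a, b), *_ = np.linalg.lstsq(A, theta_min, rcond=None)
    return a, b

def predict_thickness(theta_measured, a, b):
    """Invert the fitted model: erosion depth from a monitored
    outer-wall temperature drop."""
    return (theta_measured - a) / b
```

The practical appeal, as the abstract notes, is that only the outer surface temperature of the bend needs to be monitored once the calibration is in hand.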
NASA Astrophysics Data System (ADS)
Yusman, W.; Viridi, S.; Rachmat, S.
2016-01-01
Non-discharging geothermal wells are a major problem in geothermal development, and well discharge stimulation is required to initiate a flow. Air compression stimulation is one method to trigger fluid flow from the geothermal reservoir. The outcome of this process can be predicted using the Af/Ac method, but this method sometimes gives uncertain results for several geothermal wells, and it does not take into account the time needed for the geothermal fluid to discharge after opening the well head. This paper presents a simulation of a non-discharging well under air compression stimulation to predict well behavior and the time required. The model is built from geothermal well data recorded during the heating-up process, such as pressure, temperature and mass flow in the water column and the main feed zone level. The one-dimensional transient numerical model is run based on the Single Fluid Volume Element (SFVE) method. According to the simulation results, the prediction of well behavior after air compression stimulation is valid under two specific circumstances: single-phase fluid densities between 1 and 28 kg/m3, and above 28.5 kg/m3. The first condition corresponds to successful well discharge and the second to failed well discharge after stimulation (based on data from only two wells). The comparison of pf values between simulation and field observation shows different results for the successfully discharged wells, and the time required for flow to occur at the well head predicted by the SFVE method also differs from the actual field conditions. The model needs to be improved by incorporating more geothermal well data and a modified fluid-phase condition inside the wellbore.
Yuan, Xuye; Chen, Jiajia; Lin, Yuxin; Li, Yin; Xu, Lihua; Chen, Luonan; Hua, Haiying; Shen, Bairong
2017-01-01
Leukemia is a leading cause of cancer deaths in the developed countries. Great efforts have been undertaken in search of diagnostic biomarkers of leukemia. However, leukemia is highly complex and heterogeneous, involving interaction among multiple molecular components, and individual molecules are not necessarily sensitive diagnostic indicators. Network biomarkers are considered to outperform individual molecules in disease characterization. We applied an integrative approach that identifies active network modules as putative biomarkers for leukemia diagnosis. We first reconstructed the leukemia-specific PPI network using protein-protein interactions from the Protein Interaction Network Analysis (PINA) platform and protein annotations from GeneGo. The network was further integrated with gene expression profiles to identify active modules with leukemia relevance. Finally, the candidate network-based biomarker was evaluated for diagnostic performance. A network of 97 genes and 400 interactions was identified for accurate diagnosis of leukemia. Functional enrichment analysis revealed that the network biomarkers were enriched in pathways in cancer. The network biomarkers could discriminate leukemia samples from the normal controls more effectively than the known biomarkers. The network biomarkers provide a useful tool to diagnose leukemia and also aid in further understanding the molecular basis of leukemia. PMID:28243332
Fujii, Masafumi; Freude, Wolfgang; Leuthold, Juerg
2008-12-08
Sub-diffraction-limit imaging by the surface plasmon polariton (SPP) induced in thin metal film lenses has been analyzed numerically. The SPP images are deteriorated by interference of plasmon fields in layered metal-dielectric structures. To obtain a clear imaging capability, the reflection and transmission properties of evanescent waves in the layered structures have been investigated by the finite-difference time-domain (FDTD) method. For verification, a full 3-dimensional analysis of large-scale layered structures demonstrated sub-wavelength images similar to those obtained in the recently reported experiments. The analysis has been extended further to lithography of nano-scale images to predict the minimum possible size of the images resolved by the silver thin film lenses.
Defect reaction network in Si-doped InP : numerical predictions.
Schultz, Peter Andrew
2013-10-01
This Report characterizes the defects in the defect reaction network in silicon-doped, n-type InP deduced from first principles density functional theory. The reaction network is deduced by following exothermic defect reactions starting with the initially mobile interstitial defects reacting with common displacement damage defects in Si-doped InP until culminating in immobile reaction products. The defect reactions and reaction energies are tabulated, along with the properties of all the silicon-related defects in the reaction network. This Report serves to extend the results for intrinsic defects in SAND 2012-3313, "Simple intrinsic defects in InP: Numerical predictions", to include Si-containing simple defects likely to be present in a radiation-induced defect reaction sequence.
NASA Astrophysics Data System (ADS)
Hales, Joel M.; Khachatrian, Ani; Roche, Nicolas J.; Buchner, Stephen; Warner, Jeffrey; McMorrow, Dale
2016-05-01
Two numerical approaches for determining the charge generated in semiconductors via two-photon absorption (2PA) under conditions relevant for laser-based single-event effects (SEE) experiments are presented. The first approach uses a simple analytical expression incorporating a small number of experimental/material parameters while the second approach employs a comprehensive beam propagation method that accounts for all the complex nonlinear optical (NLO) interactions present. The impact of the excitation conditions, device geometry, and specific NLO interactions on the resulting collected charge in silicon devices is also discussed. These approaches can provide value to the radiation-effects community by predicting the impacts that varying experimental parameters will have on 2PA SEE measurements.
A numerical tool for reproducing driver behaviour: experiments and predictive simulations.
Casucci, M; Marchitto, M; Cacciabue, P C
2010-03-01
This paper presents the simulation tool called SDDRIVE (Simple Simulation of Driver performance), which is the numerical computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions, contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a Virtual Reality full scale simulator for validating the simulation. Then the predictive potentiality of the tool is shown by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions.
Numerical prediction of marine propeller noise in non-uniform inflow
NASA Astrophysics Data System (ADS)
Pan, Yu-cun; Zhang, Huai-xin
2013-03-01
A numerical study on the acoustic radiation of a propeller interacting with non-uniform inflow has been conducted. The real geometry of the marine propeller DTMB 4118 is used in the calculation, and a sliding mesh technique is adopted to deal with the rotational motion of the propeller. The performance of the DES (Detached Eddy Simulation) approach at capturing the unsteady forces and moments on the propeller is compared with experiment. Far-field sound radiation is predicted by formulation 1A developed by Farassat, an integral solution of the FW-H (Ffowcs Williams-Hawkings) equation in the time domain. The sound pressure and directivity patterns of the propeller operating in two specific velocity distributions are discussed.
Bonfiglio, Paolo; Pompoli, Francesco
2013-07-01
This paper presents a description of the use of simplified numerical methodologies for the optimization of the low cut-off frequency of anechoic and hemi-anechoic chambers. The anechoic chamber is modeled as a cavity with proper surface impedance boundary conditions. First, the shape of the wedges is optimized by means of a minimization-based procedure of a finite element model of such elements in a "virtual" impedance tube for a plane wave field. An equivalent surface impedance of the wedges is determined from those data. An analytical procedure is then used to determine the complex reflection coefficient for spherical waves at oblique incidence. Finally, a complex image source approach is used to predict the sound field within the chamber. The methodology is applied to two anechoic chambers and the results are compared in terms of sound decay along fixed directions and surface pressure distributions.
NASA Technical Reports Server (NTRS)
Cohn, S. E.
1982-01-01
Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; the optimal combined data assimilation-initialization method is a modified version of the KB filter.
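For a linear stochastic model, one forecast/analysis cycle of the Kalman-filter assimilation described above can be sketched in discrete form as follows; the notation is generic and not tied to any particular NWP system.

```python
import numpy as np

def kalman_step(x, P, M, Q, y, H, R):
    """One forecast/analysis cycle of the discrete Kalman filter, the
    optimal sequential assimilation method for a linear stochastic model.
    x, P: current state estimate and its error covariance;
    M, Q: linear model operator and model-error covariance;
    y, H, R: observations, observation operator, observation-error covariance."""
    # forecast step: propagate the state and inflate uncertainty by model error
    xf = M @ x
    Pf = M @ P @ M.T + Q
    # analysis step: the Kalman gain blends forecast and observations
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    xa = xf + K @ (y - H @ xf)
    Pa = (np.eye(len(x)) - K @ H) @ Pf
    return xa, Pa
```

The analysis covariance is never larger than the forecast covariance, which is the sense in which the filter optimally exploits both the model and the incomplete observations.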
Numerical prediction of pressure fluctuations in a prototype pump turbine based on PANS methods
NASA Astrophysics Data System (ADS)
Liu, J. T.; Li, Y.; Gao, Y.; Hu, Q.; Wu, Y. L.
2016-05-01
Unsteady flow and pressure fluctuations within a prototype pump turbine are numerically studied using a nonlinear Partially Averaged Navier-Stokes (PANS) model. The pump turbine operating at different conditions with a guide vane opening angle of 6° is simulated. Results reveal that the predictions of performance and relative peak-to-peak amplitude by the PANS approach agree well with the experimental data. The amplitude of the pressure fluctuation in the vaneless space at turbine mode on the “S” curve increases with decreasing flow rate, reaching its maximum close to the runaway line at turbine braking mode. The amplitude of the pressure fluctuation in the vaneless space at turbine braking mode on the “S” curve decreases with decreasing flow rate. These high pressure fluctuations should be avoided during the design of pump turbines, especially those operating at high-head conditions.
Walsh, Susan; Liu, Fan; Ballantyne, Kaye N; van Oven, Mannis; Lao, Oscar; Kayser, Manfred
2011-06-01
A new era of 'DNA intelligence' is arriving in forensic biology, due to the impending ability to predict externally visible characteristics (EVCs) from biological material such as that found at crime scenes. EVC prediction from forensic samples, or from body parts, is expected to help concentrate police investigations towards finding unknown individuals, at times when conventional DNA profiling fails to provide informative leads. Here we present a robust and sensitive tool, termed IrisPlex, for the accurate prediction of blue and brown eye colour from DNA in future forensic applications. We used the six currently most eye colour-informative single nucleotide polymorphisms (SNPs) that previously revealed prevalence-adjusted prediction accuracies of over 90% for blue and brown eye colour in 6168 Dutch Europeans. The single multiplex assay, based on SNaPshot chemistry and capillary electrophoresis, both widely used in forensic laboratories, displays high levels of genotyping sensitivity, with complete profiles generated from as little as 31 pg of DNA, approximately six human diploid cell equivalents. We also present a prediction model to correctly classify an individual's eye colour, via probability estimation solely based on DNA data, and illustrate the accuracy of the developed prediction test on 40 individuals from various geographic origins. Moreover, we obtained insights into the worldwide allele distribution of these six SNPs using the HGDP-CEPH samples of 51 populations. Eye colour prediction analyses from HGDP-CEPH samples provide evidence that the test and model presented here perform reliably without prior ancestry information, although future worldwide genotype and phenotype data shall confirm this notion. As our IrisPlex eye colour prediction test is capable of immediate implementation in forensic casework, it represents one of the first steps forward in the creation of a fully individualised EVC prediction system for future use in forensic DNA intelligence.
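Probability estimation of this kind is typically a multinomial logistic model over SNP minor-allele counts. A sketch of the idea follows with placeholder coefficients; the published IrisPlex model parameters are not reproduced here, so the numbers below are purely illustrative.

```python
import math

# Hypothetical coefficients for a 3-category multinomial logistic model
# (blue / intermediate / brown as reference) over six SNP minor-allele
# counts. The real IrisPlex parameters are published elsewhere; these
# values are placeholders for illustration only.
ALPHA = {"blue": 3.0, "intermediate": 0.5}
BETA = {"blue": [-1.8, -0.5, -0.2, 0.3, -0.4, -0.1],
        "intermediate": [-0.9, -0.2, -0.1, 0.1, -0.2, 0.0]}

def eye_colour_probs(genotypes):
    """genotypes: minor-allele counts (0/1/2) for the six SNPs.
    Returns category probabilities for blue, intermediate and brown
    (brown is the reference category of the logistic model)."""
    scores = {c: math.exp(ALPHA[c] + sum(b * g for b, g in zip(BETA[c], genotypes)))
              for c in ("blue", "intermediate")}
    denom = 1.0 + sum(scores.values())
    probs = {c: s / denom for c, s in scores.items()}
    probs["brown"] = 1.0 / denom
    return probs
```

An eye-colour call would then be made by taking the most probable category, possibly subject to a minimum-probability threshold as is common in forensic reporting.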
Numerical prediction of ion current from a small methane jet flame
Yamashita, Kiyotaka; Karnani, Sunny; Dunn-Rankin, Derek
2009-06-15
This paper compares numerical simulations with experiments to describe the underlying mechanisms responsible for the voltage-current characteristic (VCC) response of a capillary-fed methane diffusion flame in an electric field. The numerical simulations, which include both combustion and electric phenomena, show good agreement with previously published experimental results, though computed flame temperatures were higher than those experimentally measured. Sub-saturated, saturated and super-saturated ion current regions are shown in the range of applied potentials: 0-2.5 kV, 2.5-3.4 kV and over 3.4 kV, respectively. The transition between the sub-saturated and saturated region is explored by predicting an evolution of the H3O+ ion profile. Furthermore, the transition between the saturated and super-saturated current region is considered by following the rate of ion production. The simulations show an enhancement of ion production at high voltages, suggesting that the main factor behind the increased ion production is the entrainment of air by the ion-driven wind into the fuel jet before the reaction zone, producing a partially premixed flame. Additionally, the change in chemical reaction pathways as a result of air entrainment is discussed.
Verification of Numerical Weather Prediction Model Results for Energy Applications in Latvia
NASA Astrophysics Data System (ADS)
Sīle, Tija; Cepite-Frisfelde, Daiga; Sennikovs, Juris; Bethers, Uldis
2014-05-01
A resolution to increase the production and consumption of renewable energy has been made by EU governments. Most of the renewable energy in Latvia is produced by Hydroelectric Power Plants (HPP), followed by bio-gas, wind power and bio-mass energy production. Wind and HPP power production is sensitive to meteorological conditions. Currently the basis of weather forecasting is Numerical Weather Prediction (NWP) models. There are numerous methodologies concerning the evaluation of quality of NWP results (Wilks 2011) and their application can be conditional on the forecast end user. The goal of this study is to evaluate the performance of Weather Research and Forecast model (Skamarock 2008) implementation over the territory of Latvia, focusing on forecasting of wind speed and quantitative precipitation forecasts. The target spatial resolution is 3 km. Observational data from Latvian Environment, Geology and Meteorology Centre are used. A number of standard verification metrics are calculated. The sensitivity to the model output interpretation (output spatial interpolation versus nearest gridpoint) is investigated. For the precipitation verification the dichotomous verification metrics are used. Sensitivity to different precipitation accumulation intervals is examined. Skamarock, William C. and Klemp, Joseph B. A time-split nonhydrostatic atmospheric model for weather research and forecasting applications. Journal of Computational Physics. 227, 2008, pp. 3465-3485. Wilks, Daniel S. Statistical Methods in the Atmospheric Sciences. Third Edition. Academic Press, 2011.
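The dichotomous precipitation verification mentioned above reduces each forecast-observation pair to an event/non-event contingency table, from which standard scores follow. A sketch is given below; the accumulation threshold is an illustrative choice, not one taken from the study.

```python
def dichotomous_scores(forecast, observed, threshold=0.1):
    """Contingency-table verification for precipitation occurrence.
    `forecast` and `observed` are paired accumulation values; an event
    is an accumulation >= threshold (the threshold value is illustrative).
    Returns POD (probability of detection), FAR (false alarm ratio)
    and CSI (critical success index)."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        fe, oe = f >= threshold, o >= threshold
        if fe and oe:
            hits += 1
        elif not fe and oe:
            misses += 1
        elif fe and not oe:
            false_alarms += 1
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    return pod, far, csi
```

Sensitivity to the accumulation interval, examined in the study, amounts to recomputing these scores after re-summing the forecast and observed series over different windows.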
NASA Astrophysics Data System (ADS)
Remy, Samuel; Benedetti, Angela; Jones, Luke; Razinger, Miha; Haiden, Thomas
2014-05-01
The WMO-sponsored Working Group on Numerical Experimentation (WGNE) set up a project aimed at understanding the importance of aerosols for numerical weather prediction (NWP). Three cases are being investigated by several NWP centres with aerosol capabilities: a severe dust case that affected Southern Europe in April 2012, a biomass burning case in South America in September 2012, and an extreme pollution event in Beijing (China) which took place in January 2013. At ECMWF these cases are being studied using the MACC-II system with radiatively interactive aerosols. Some preliminary results related to the dust and the fire event will be presented here. A preliminary verification of the impact of the aerosol-radiation direct interaction on surface meteorological parameters such as 2m Temperature and surface winds over the region of interest will be presented. Aerosol optical depth (AOD) verification using AERONET data will also be discussed. For the biomass burning case, the impact of using injection heights estimated by a Plume Rise Model (PRM) for the biomass burning emissions will be presented.
Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin
2013-12-01
Previous research has found a relationship between individual differences in children's precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the current study, we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of 2 years. In addition, at the final time point, we tested children's informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3). We found that children's numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned nonsymbolic system of quantity representation and the system of mathematics reasoning that children come to master through instruction.
McCoy, Rajiv C.; Garud, Nandita R.; Kelley, Joanna L.; Boggs, Carol L.; Petrov, Dmitri A.
2015-01-01
The analysis of molecular data from natural populations has allowed researchers to answer diverse ecological questions that were previously intractable. In particular, ecologists are often interested in the demographic history of populations, information that is rarely available from historical records. Methods have been developed to infer demographic parameters from genomic data, but it is not well understood how inferred parameters compare to true population history or depend on aspects of experimental design. Here we present and evaluate a method of SNP discovery using RNA-sequencing and demographic inference using the program δaδi, which uses a diffusion approximation to the allele frequency spectrum to fit demographic models. We test these methods in a population of the checkerspot butterfly Euphydryas gillettii. This population was intentionally introduced to Gothic, Colorado in 1977 and has since experienced extreme fluctuations including bottlenecks of fewer than 25 adults, as documented by nearly annual field surveys. Using RNA-sequencing of eight individuals from Colorado and eight individuals from a native population in Wyoming, we generate the first genomic resources for this system. While demographic inference is commonly used to examine ancient demography, our study demonstrates that our inexpensive, all-in-one approach to marker discovery and genotyping provides sufficient data to accurately infer the timing of a recent bottleneck. This demographic scenario is relevant for many species of conservation concern, few of which have sequenced genomes. Our results are remarkably insensitive to sample size or number of genomic markers, which has important implications for applying this method to other non-model systems. PMID:24237665
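Inference with δaδi operates on the allele frequency spectrum (AFS) computed from the SNP data. Building the spectrum from per-SNP derived-allele counts, and folding it when the ancestral state is unknown, can be sketched as follows (a simplified stand-in for the library's own spectrum construction):

```python
def allele_frequency_spectrum(derived_counts, n_chrom):
    """Unfolded allele frequency spectrum of the kind fitted by
    dadi-style demographic inference: entry k counts the SNPs whose
    derived allele appears on k of the n_chrom sampled chromosomes.
    `derived_counts` holds one derived-allele count per SNP."""
    sfs = [0] * (n_chrom + 1)
    for k in derived_counts:
        sfs[k] += 1
    return sfs

def fold(sfs):
    """Fold the spectrum onto minor-allele counts, used when the
    ancestral allele cannot be determined."""
    n = len(sfs) - 1
    folded = [0] * (n // 2 + 1)
    for k, count in enumerate(sfs):
        folded[min(k, n - k)] += count
    return folded
```

Demographic parameters such as the timing and severity of the documented bottleneck are then obtained by maximizing the likelihood of a model-predicted spectrum against this observed one.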
NASA Astrophysics Data System (ADS)
Ostoich, Christopher Mark
due to a dome-induced horseshoe vortex scouring the panel's surface. Comparisons with reduced-order models of heat transfer indicate that they perform with varying levels of accuracy around some portions of the geometry while completely failing to predict significant heat loads in regions where the dome-influenced flow impacts the ceramic panel. Cumulative effects of flow-thermal coupling at later simulation times on the reduction of panel drag and surface heat transfer are quantified. The second fluid-structure study investigates the interaction between a thin metallic panel and a Mach 2.25 turbulent boundary layer with an initial momentum thickness Reynolds number of 1200. A transient, non-linear, large-deformation, 3D finite element solver is developed to compute the dynamic response of the panel. The solver is coupled at the fluid-structure interface with the compressible Navier-Stokes solver, the latter of which is used for a direct numerical simulation of the turbulent boundary layer. In this approach, no simplifying assumptions regarding the structural solution or turbulence modeling are made, in order to obtain detailed solution data. It is found that the thin panel state evolves into a flutter-type response characterized by high-amplitude, high-frequency oscillations into the flow. The oscillating panel disturbs the supersonic flow by introducing compression waves, modifying the turbulence, and generating fluctuations in the power exiting the top of the flow domain. The work in this thesis serves as a step forward in structural response prediction in high-speed flows. The results demonstrate the ability of high-fidelity numerical approaches to serve as a guide for reduced-order model improvement as well as to provide accurate and detailed solution data in scenarios where experimental approaches are difficult or impossible.
Samudrala, Ram; Heffron, Fred; McDermott, Jason E.
2009-04-24
The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
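As a rough illustration of the sequence-based features described above, the sketch below computes the amino-acid composition of the N-terminal 30 residues and classifies with a nearest-centroid rule. This is a hedged stand-in, not the paper's actual machine-learning model, and the training sequences in the demo are synthetic.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def nterm_composition(seq, n=30):
    # Fraction of each amino acid within the N-terminal n residues,
    # mirroring the N-terminal sequence feature discussed in the abstract
    window = seq[:n].upper()
    return [window.count(a) / len(window) for a in AMINO_ACIDS]

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(seq, pos_centroid, neg_centroid):
    # Nearest-centroid rule as a simple stand-in for the trained classifier
    v = nterm_composition(seq)
    dist = lambda c: sum((x - y) ** 2 for x, y in zip(v, c))
    return "effector" if dist(pos_centroid) < dist(neg_centroid) else "non-effector"

# Toy demo: Ser/Thr-rich "effector-like" N-termini vs Leu-rich negatives
pos_centroid = centroid([nterm_composition(s)
                         for s in ("S" * 18 + "T" * 12, "S" * 20 + "P" * 10)])
neg_centroid = centroid([nterm_composition(s)
                         for s in ("L" * 18 + "I" * 12, "L" * 20 + "V" * 10)])
```

A real model would combine these composition features with the evolutionary and G+C features the paper describes.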
Deleebeeck, Nele M E; De Laender, Frederik; Chepurnov, Victor A; Vyverman, Wim; Janssen, Colin R; De Schamphelaere, Karel A C
2009-04-01
The major research questions addressed in this study were (i) whether green microalgae living in soft water (operationally defined water hardness < 10 mg CaCO3/L) are intrinsically more sensitive to Ni than green microalgae living in hard water (operationally defined water hardness > 25 mg CaCO3/L), and (ii) whether a single bioavailability model can be used to predict the effect of water hardness on the toxicity of Ni to green microalgae in both soft and hard water. Algal growth inhibition tests were conducted with clones of 10 different species collected in soft and hard water lakes in Sweden. Soft water algae were tested in a 'soft' and a 'moderately hard' test medium (nominal water hardness = 6.25 and 16.3 mg CaCO3/L, respectively), whereas hard water algae were tested in a 'moderately hard' and a 'hard' test medium (nominal water hardness = 16.3 and 43.4 mg CaCO3/L, respectively). The results from the growth inhibition tests in the 'moderately hard' test medium revealed no significant sensitivity differences between the soft and the hard water algae used in this study. Increasing water hardness significantly reduced Ni toxicity to both soft and hard water algae. Because it has previously been demonstrated that Ca does not significantly protect the unicellular green alga Pseudokirchneriella subcapitata against Ni toxicity, it was assumed that the protective effect of water hardness can be ascribed to Mg alone. The log K(MgBL) (= 5.5) was calculated to be identical for the soft and the hard water algae used in this study. A single bioavailability model can therefore be used to predict Ni toxicity to green microalgae in soft and hard surface waters as a function of water hardness.
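The finding that a single log K(MgBL) of 5.5 describes Mg competition implies that the Ni EC50 rises linearly with Mg2+ activity. A minimal biotic-ligand-model sketch follows, in which log K(NiBL) and the 50%-effect ligand occupancy f50 are hypothetical placeholders, not values from the study:

```python
def ec50_ni(mg_activity, log_k_mg=5.5, log_k_ni=4.0, f50=0.5):
    # Biotic ligand model sketch: Mg2+ competes with Ni2+ for the biotic
    # ligand, so the Ni2+ activity needed to reach the 50%-effect occupancy
    # f50 grows linearly with Mg2+ activity.
    # log_k_mg = 5.5 is the constant reported in the abstract;
    # log_k_ni and f50 are illustrative assumptions only.
    k_mg, k_ni = 10 ** log_k_mg, 10 ** log_k_ni
    return (f50 / (1.0 - f50)) * (1.0 + k_mg * mg_activity) / k_ni
```

The hardness effect in the abstract corresponds to the monotone increase of this EC50 with Mg2+ activity.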
Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz
2014-01-01
Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412
Numerical prediction of flow induced fibers orientation in injection molded polymer composites
NASA Astrophysics Data System (ADS)
Oumer, A. N.; Hamidi, N. M.; Mat Sahat, I.
2015-12-01
Since the filling stage of the injection molding process has an important effect on the determination of the orientation state of the fibers, accurate analysis of the flow field for the mold filling stage becomes a necessity. The aim of this paper is to characterize the flow-induced orientation state of short fibers in injection molding cavities. A dog-bone-shaped model is considered for the simulation and experiment. The numerical model for determination of the fiber orientation during the mold-filling stage of the injection molding process was solved using the Computational Fluid Dynamics (CFD) software MoldFlow. Both the simulation and experimental results showed that two different regions (or three layers of orientation structures) across the thickness of the specimen could be found: a shell region near the mold cavity wall, and a core region at the middle of the cross-section. The simulation results support the experimental observations that for thin plates the probability of fiber alignment with the flow direction is high near the mold cavity walls but low in the core region. It is apparent that the results of this study could assist in decisions regarding short-fiber-reinforced polymer composites.
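The shell/core layering described above is commonly modelled by evolving a second-order fiber orientation tensor. Below is a sketch of the Folgar-Tucker equation in 2-D simple shear with a quadratic closure; all parameter values are illustrative and this is not MoldFlow's implementation:

```python
import numpy as np

def folgar_tucker_steady(gamma_dot=1.0, ci=0.01, xi=1.0, dt=1e-3, steps=20000):
    # Evolve the 2-D orientation tensor A under simple shear with the
    # Folgar-Tucker model, using the quadratic closure A4:D ~ A (A:D).
    # ci is the fiber-interaction coefficient (illustrative value).
    L = np.array([[0.0, gamma_dot], [0.0, 0.0]])   # velocity gradient
    D, W = 0.5 * (L + L.T), 0.5 * (L - L.T)        # strain rate and vorticity
    A = 0.5 * np.eye(2)                            # isotropic (core-like) start
    for _ in range(steps):
        closure = A * np.tensordot(A, D)           # quadratic closure for A4:D
        dA = (W @ A - A @ W
              + xi * (D @ A + A @ D - 2.0 * closure)
              + 2.0 * ci * abs(gamma_dot) * (0.5 * np.eye(2) - A))
        A = A + dt * dA                            # explicit Euler step
    return A
```

At steady state A[0, 0] approaches 1, reproducing the strong flow-direction alignment of the shear-dominated shell layer; the isotropic initial state plays the role of the core.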
Real Time Numerical Weather Prediction by The Florida State University Superensemble
NASA Astrophysics Data System (ADS)
Ross, R. S.; Krishnamurti, T. N.
2005-05-01
The Florida State University (FSU) Superensemble technique as applied to real-time numerical weather prediction will be described. An evaluation of the skill of the Superensemble forecasts will be presented in comparison to the skills of the seven global numerical weather prediction models that comprise the Superensemble. Forecast variables that will be examined include lower and upper tropospheric wind fields, mean sea level pressure, mid-tropospheric geopotential height, and precipitation. Forecast skill will be evaluated globally, as well as for a number of sub-regions, such as the Indian monsoon region, North and South America, and the tropical North Atlantic Ocean. Statistical measures of forecast skill will include root mean square error, anomaly correlation, and systematic error for most variables. Forecast precipitation will also be evaluated by use of correlation, bias, and equitable threat scores. The skill scores will be presented for the years 2000, 2001, and 2004. The FSU Superensemble technique uses multiple linear regression to derive coefficients from a comparison of member model forecasts to a benchmark analysis during a training period of 120 days. This procedure removes the bias of each individual forecast model and allows for an optimal linear combination of the individual model forecasts, which takes into account the relative skill of each model. The result is a forecast that has greater skill than the individual model forecasts and the ensemble mean forecast. The real-time FSU Superensemble forecasts are available on a website that shows the forecasts for the entire globe, as well as for ten sub-regions of the world. The website has links to the skill scores that are routinely updated, as well as to a number of journal articles that describe the FSU Superensemble technique in detail. Overall, the FSU Superensemble has been shown to be a valuable tool for significantly improving upon the numerical model forecasts emanating from the world
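The regression step described above can be sketched as follows: during training, each member's anomaly (its departure from its own training mean, which removes that member's bias) is regressed against the analysis, and the resulting coefficients weight the members for new forecasts. This is a minimal sketch with synthetic data, not the FSU code:

```python
import numpy as np

def train_superensemble(member_forecasts, analysis):
    # member_forecasts: (n_times, n_models) training-period forecasts
    # analysis: (n_times,) benchmark analysis values
    # Regressing anomalies removes each model's bias, as in the FSU scheme
    f_mean = member_forecasts.mean(axis=0)
    a_mean = analysis.mean()
    coeffs, *_ = np.linalg.lstsq(member_forecasts - f_mean,
                                 analysis - a_mean, rcond=None)
    return coeffs, f_mean, a_mean

def superensemble_forecast(new_forecasts, coeffs, f_mean, a_mean):
    # Optimal linear combination of bias-corrected member forecasts
    return a_mean + (new_forecasts - f_mean) @ coeffs
```

With biased, rescaled synthetic members, the combined forecast beats the plain ensemble mean, which inherits the members' shared bias.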
NASA Astrophysics Data System (ADS)
Hoshiba, M.; Ogiso, M.
2015-12-01
In many present earthquake early warning (EEW) systems, the hypocenter and magnitude are determined quickly, and the strength of ground motion is then predicted from the hypocentral distance and magnitude using a ground motion prediction equation (GMPE), which usually leads to the prediction of a concentric distribution. However, actual ground shaking is not always concentric, even when site amplification is corrected. At a common site, the strength of shaking may differ considerably among earthquakes even when their hypocentral distances and magnitudes are almost the same. In some cases, PGA differs by more than a factor of 10, which leads to imprecise prediction in EEW. Recently, the Numerical Shake Prediction method was proposed (Hoshiba and Aoki, 2015), in which the present, ongoing wavefield of ground shaking is estimated using a data assimilation technique, and the future wavefield is then predicted based on the physics of wave propagation. Information on hypocentral location and magnitude is not required in this method. Because the future is predicted from the present condition, it is possible to address the issue of the non-concentric distribution: once a deviated distribution is actually observed in the ongoing wavefield, the future distribution is predicted accordingly to be non-concentric. We will present examples of M6-class earthquakes that occurred in central Japan, in which the strength of shaking was observed to be distributed non-concentrically, and show their predictions using the Numerical Shake Prediction method. The deviated distribution may be explained by an inhomogeneous distribution of attenuation. Even without an attenuation structure, the issue of the non-concentric distribution can be addressed to some extent once the deviated distribution is actually observed in the ongoing wavefield; if an attenuation structure is introduced, the deviation can be predicted before actual observation. The information on attenuation structure thus leads to more precise and rapid prediction in the Numerical Shake Prediction method for EEW.
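A toy one-dimensional stand-in conveys the idea of forecasting the future wavefield from the assimilated present one: march the current amplitude field forward by propagation plus attenuation, with no hypocenter or magnitude involved. All parameter values here are illustrative; the actual method solves the full physics of wave propagation:

```python
import numpy as np

def predict_shaking(current, v=3.5, alpha=0.05, dx=1.0, dt=0.2, steps=10):
    # 1-D sketch of "numerical shake prediction": take the assimilated
    # present wavefield `current` (amplitude vs. distance) and advance it
    # by advection at wave speed v with exponential attenuation alpha
    u = np.asarray(current, dtype=float)
    x = np.arange(u.size) * dx
    for _ in range(steps):
        # semi-Lagrangian step: the amplitude now at x came from x - v*dt
        u = np.interp(x - v * dt, x, u, left=0.0) * np.exp(-alpha * v * dt)
    return u
```

Because the forecast starts from the observed field, any non-concentric pattern already present in `current` is carried forward automatically.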
Numerical Prediction of the Thermodynamic Properties of Ternary Al-Ni-Pd Alloys
NASA Astrophysics Data System (ADS)
Zagula-Yavorska, Maryana; Romanowska, Jolanta; Kotowski, Sławomir; Sieniawski, Jan
2016-01-01
Thermodynamic properties of the ternary Al-Ni-Pd system, such as exG(AlNiPd), µAl(AlNiPd), µNi(AlNiPd) and µPd(AlNiPd) at 1,373 K, were predicted on the basis of the thermodynamic properties of the binary systems included in the investigated ternary system. Predicting exG(AlNiPd) values was treated as calculating values of the exG function inside a certain area (the Gibbs triangle) when all boundary conditions, that is, the values of exG on all legs of the triangle (exG(AlNi), exG(AlPd), exG(NiPd)), are known. This approach is contrary to finding a function value outside a certain area when the function value inside this area is known. The exG and L(Al,Ni,Pd) ternary interaction parameters in the Muggianu extension of the Redlich-Kister formalism were calculated numerically using Excel and its Solver add-in. The accepted values of the third-component mole fraction xx ranged from 0.01 to 0.1. Values of the L(AlNiPd) parameters in the Redlich-Kister formula are different for different xx values, but the values of the thermodynamic functions exG(AlNiPd), µAl(AlNiPd), µNi(AlNiPd) and µPd(AlNiPd) do not differ significantly for different xx values. The choice of xx value therefore does not influence the accuracy of the calculations.
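The Redlich-Kister-Muggianu construction referred to above estimates the ternary excess Gibbs energy from the binary interaction parameters plus an optional ternary term. A minimal sketch follows; the parameter values used in the usage check are hypothetical, not the fitted Al-Ni-Pd set:

```python
def excess_g_ternary(x, binary_L, L_ternary=0.0):
    # Redlich-Kister-Muggianu estimate of the ternary excess Gibbs energy.
    # x: mole fractions (x0, x1, x2); binary_L maps a pair (i, j) to its
    # Redlich-Kister coefficients [L0, L1, ...] (J/mol, hypothetical here).
    # Each binary term: x_i x_j * sum_k L_k (x_i - x_j)^k
    g = 0.0
    for (i, j), Ls in binary_L.items():
        g += x[i] * x[j] * sum(L * (x[i] - x[j]) ** k for k, L in enumerate(Ls))
    return g + x[0] * x[1] * x[2] * L_ternary
```

On a leg of the Gibbs triangle (one mole fraction zero) the expression reduces to the corresponding binary excess energy, which is exactly the boundary-condition property the abstract exploits.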
NASA Astrophysics Data System (ADS)
Tan, Samuel; Barrera Acevedo, Santiago; Izgorodina, Ekaterina I.
2017-02-01
The accurate calculation of intermolecular interactions is important to our understanding of properties in large molecular systems. The high computational cost of the current "gold standard" method, coupled cluster with singles and doubles and perturbative triples (CCSD(T)), limits its application to small- to medium-sized systems. Second-order Møller-Plesset perturbation (MP2) theory is a cheaper alternative for larger systems, although at the expense of decreased accuracy, especially when treating van der Waals complexes. In this study, a new modification of the spin-component-scaled MP2 method was proposed for a wide range of intermolecular complexes including two well-known datasets, S22 and S66, and a large dataset of ionic liquids consisting of 174 single ion pairs, IL174. It was found that the spin ratio, ε_Δs = E_OS^INT / E_SS^INT, calculated as the ratio of the opposite-spin component to the same-spin component of the interaction correlation energy, fell in the range of 0.1 to 1.6, in contrast to the range of 3-4 usually observed for the ratio of the absolute correlation energy, ε_s = E_OS / E_SS, in individual molecules. Scaled coefficients were found to become negative when the spin ratio fell in close proximity to 1.0, and therefore the studied intermolecular complexes were divided into two groups: (1) complexes with ε_Δs < 1 and (2) complexes with ε_Δs ≥ 1. A separate set of coefficients was obtained for each group. Exclusion of the counterpoise correction during scaling was found to produce superior results due to decreased error. Among a series of Dunning's basis sets, cc-pVTZ and cc-pVQZ were found to be the best performing, with a mean absolute error of 1.4 kJ mol^-1 and maximum errors below 6.2 kJ mol^-1. The new modification, spin-ratio-scaled second-order Møller-Plesset perturbation, treats both dispersion-driven and hydrogen-bonded complexes equally well, thus validating its robustness with respect to the interaction type ranging from ionic
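The scaling scheme can be sketched as follows: compute the spin ratio of the interaction correlation energy and apply the coefficient pair fitted for the ε_Δs < 1 or ε_Δs ≥ 1 group. The coefficient values below are placeholders for illustration, not the fitted ones from the study:

```python
def scs_mp2_interaction(e_os, e_ss, groups=None):
    # Spin-component scaling of the MP2 interaction correlation energy:
    # E_scaled = c_os * E_os + c_ss * E_ss, with the coefficient pair
    # selected by the spin ratio eps = E_os / E_ss (< 1 vs >= 1).
    # The numerical coefficients here are hypothetical placeholders.
    coeffs = groups or {"low": (1.2, 0.6), "high": (0.9, 1.1)}
    eps = e_os / e_ss
    c_os, c_ss = coeffs["low"] if eps < 1.0 else coeffs["high"]
    return c_os * e_os + c_ss * e_ss
```

Classic SCS-MP2 uses a single global pair (6/5, 1/3); the paper's point is that interaction energies need group-dependent pairs keyed to ε_Δs.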
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.; Conrad, Joy
1996-01-01
The geomagnetic spatial power spectrum R_n(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum ⟨R_nc(c)⟩ is inversely proportional to (2n + 1) for 1 < n ≤ N_E. McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R_n for 3 ≤ n ≤ 12. Extrapolation to the degree 1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 x 10^22 A m^2 rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 µT rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution χ^2 with 2n + 1 degrees of freedom is assigned to (2n + 1)R_nc/⟨R_nc⟩. Extending this to the dipole implies that an exceptionally weak absolute dipole moment (≤ 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration for such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field
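The 2.5%-of-geologic-time figure follows directly from assigning χ^2 with 3 degrees of freedom (2n + 1 for n = 1) to the normalized dipole power: a moment of 20% of the 1980 value corresponds to (0.20/0.745)^2 of the expected dipole power. A short check, using the closed-form 3-degree-of-freedom χ^2 CDF:

```python
import math

def chi2_cdf_3dof(x):
    # Closed-form CDF of the chi-squared distribution with 3 degrees of freedom
    return math.erf(math.sqrt(x / 2.0)) - math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

def weak_dipole_fraction(moment_frac_of_1980=0.20, expected_frac=0.745):
    # Dipole power scales as the moment squared; 3 * R1 / <R1> ~ chi2(3),
    # so the fraction of time below a moment threshold is a chi2(3) tail
    x = 3.0 * (moment_frac_of_1980 / expected_frac) ** 2
    return chi2_cdf_3dof(x)
```

Evaluating `weak_dipole_fraction()` reproduces the abstract's ~2.5% of geologic time.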
Magozzi, Sarah; Calosi, Piero
2015-01-01
Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing
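The Q10 for metabolism mentioned above follows the standard van't Hoff form, (R2/R1)^(10/(T2 − T1)); a minimal sketch using the study's incubation-temperature range:

```python
def q10(rate1, t1, rate2, t2):
    # Van't Hoff Q10: factor by which metabolic rate increases per
    # 10 degree C rise, from rates measured at two temperatures
    # (e.g. within the 10-25 C incubation range used in the study)
    return (rate2 / rate1) ** (10.0 / (t2 - t1))
```

A Q10 near 2-3 is typical for ectotherm metabolism; higher values signal steeper metabolic costs of warming, the kind of cost implicated in chronic vulnerability above.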
ERIC Educational Resources Information Center
Powell, Erica Dion
2013-01-01
This study presents a survey developed to measure the skills of entering college freshmen in the areas of responsibility, motivation, study habits, literacy, and stress management, and explores the predictive power of this survey as a measure of academic performance during the first semester of college. The survey was completed by 334 incoming…
Vlaming, M L H; van Duijn, E; Dillingh, M R; Brands, R; Windhorst, A D; Hendrikse, N H; Bosgra, S; Burggraaf, J; de Koning, M C; Fidder, A; Mocking, J A J; Sandman, H; de Ligt, R A F; Fabriek, B O; Pasman, W J; Seinen, W; Alves, T; Carrondo, M; Peixoto, C; Peeters, P A M; Vaes, W H J
2015-08-01
Preclinical development of new biological entities (NBEs), such as human protein therapeutics, requires considerable time and cost. Poor prediction of pharmacokinetics in humans further reduces net efficiency. In this study, we show for the first time that pharmacokinetic data of NBEs in humans can be successfully obtained early in the drug development process by the use of microdosing in a small group of healthy subjects combined with ultrasensitive accelerator mass spectrometry (AMS). After only minimal preclinical testing, we performed a first-in-human phase 0/phase 1 trial with a human recombinant therapeutic protein (RESCuing Alkaline Phosphatase, human recombinant placental alkaline phosphatase [hRESCAP]) to assess its safety and kinetics. Pharmacokinetic analysis showed dose linearity from a microdose (53 μg) of [14C]-hRESCAP to therapeutic doses (up to 5.3 mg) of the protein in healthy volunteers. This study demonstrates the value of a microdosing approach in a very small cohort for accelerating the clinical development of NBEs.
Kato, Keiichi; Ueno, Satoshi; Yabuuchi, Akiko; Uchiyama, Kazuo; Okuno, Takashi; Kobayashi, Tamotsu; Segawa, Tomoya; Teramoto, Shokichi
2014-10-01
The aim of this study was to establish a simple, objective blastocyst grading system using women's age and embryo developmental speed to predict clinical pregnancy after single vitrified-warmed blastocyst transfer. A 6-year retrospective cohort study was conducted in a private infertility centre. A total of 7341 single vitrified-warmed blastocyst transfer cycles were included, divided into those carried out between 2006 and 2011 (6046 cycles) and 2012 (1295 cycles). Clinical pregnancy rate, ongoing pregnancy rate and delivery rates were stratified by women's age (<35, 35-37, 38-39, 40-41, 42-45 years) and time to blastocyst expansion (<120, 120-129, 130-139, 140-149, >149 h) as embryo developmental speed. In all the age groups, clinical pregnancy rate, ongoing pregnancy rate and delivery rates decreased as the embryo developmental speed decreased (P < 0.0001). A simple five-grade score based on women's age and embryo developmental speed was determined by actual clinical pregnancy rates observed in the 2006-2011 cohort. Subsequently, the novel grading score was validated in the 2012 cohort (1295 cycles), finding an excellent association. In conclusion, we established a novel blastocyst grading system using women's age and embryo developmental speed as objective parameters.
Essaghir, Ahmed; Toffalini, Federica; Knoops, Laurent; Kallin, Anders; van Helden, Jacques; Demoulin, Jean-Baptiste
2010-01-01
Deciphering transcription factor networks from microarray data remains difficult. This study presents a simple method to infer the regulation of transcription factors from microarray data based on well-characterized target genes. We generated a catalog containing transcription factors associated with 2720 target genes and 6401 experimentally validated regulations. When it was available, a distinction between transcriptional activation and inhibition was included for each regulation. Next, we built a tool (www.tfacts.org) that compares submitted gene lists with target genes in the catalog to detect regulated transcription factors. TFactS was validated with published lists of regulated genes in various models and compared to tools based on in silico promoter analysis. We next analyzed the NCI60 cancer microarray data set and showed the regulation of SOX10, MITF and JUN in melanomas. We then performed microarray experiments comparing gene expression response of human fibroblasts stimulated by different growth factors. TFactS predicted the specific activation of signal transducer and activator of transcription (STAT) factors by PDGF-BB, which was confirmed experimentally. Our results show that the expression levels of transcription factor target genes constitute a robust signature for transcription factor regulation, and can be efficiently used for microarray data mining. PMID:20215436
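Comparing a submitted gene list against catalogued target genes is, at its core, an over-representation test. A minimal right-tailed hypergeometric sketch follows (gene names are synthetic, and TFactS's exact statistics may differ from this simplification):

```python
from math import comb

def hypergeom_p(submitted, tf_targets, catalog_size):
    # Right-tailed hypergeometric test: probability of observing at least
    # the actual overlap between a submitted gene list and one TF's target
    # set, drawing without replacement from a catalog of catalog_size genes
    k = len(submitted & tf_targets)          # observed overlap
    n, K = len(submitted), len(tf_targets)
    return sum(comb(K, i) * comb(catalog_size - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(catalog_size, n)
```

A small p-value for a transcription factor's target set flags that factor as regulated in the submitted condition.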
González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda
2012-11-01
It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).
Geng, Jiun-Hung; Tu, Hung-Pin; Shih, Paul Ming-Chen; Shen, Jung-Tsung; Jang, Mei-Yu; Wu, Wen-Jen; Li, Ching-Chia; Chou, Yii-Her; Juan, Yung-Shun
2015-01-01
Urolithiasis is a common disease of the urinary system. Extracorporeal shockwave lithotripsy (SWL) has become one of the standard treatments for renal and ureteral stones; however, the success rates range widely and failure of stone disintegration may cause additional outlay, alternative procedures, and even complications. We used the data available from noncontrast abdominal computed tomography (NCCT) to evaluate the impact of stone parameters and abdominal fat distribution on calculus-free rates following SWL. We retrospectively reviewed 328 patients who had urinary stones and had undergone SWL from August 2012 to August 2013. All of them received pre-SWL NCCT; 1 month after SWL, radiography was arranged to evaluate the condition of the fragments. These patients were classified into stone-free group and residual stone group. Unenhanced computed tomography variables, including stone attenuation, abdominal fat area, and skin-to-stone distance (SSD) were analyzed. In all, 197 (60%) were classified as stone-free and 132 (40%) as having residual stone. The mean ages were 49.35 ± 13.22 years and 55.32 ± 13.52 years, respectively. On univariate analysis, age, stone size, stone surface area, stone attenuation, SSD, total fat area (TFA), abdominal circumference, serum creatinine, and the severity of hydronephrosis revealed statistical significance between these two groups. From multivariate logistic regression analysis, the independent parameters impacting SWL outcomes were stone size, stone attenuation, TFA, and serum creatinine. [Adjusted odds ratios and (95% confidence intervals): 9.49 (3.72-24.20), 2.25 (1.22-4.14), 2.20 (1.10-4.40), and 2.89 (1.35-6.21) respectively, all p < 0.05]. In the present study, stone size, stone attenuation, TFA and serum creatinine were four independent predictors for stone-free rates after SWL. These findings suggest that pretreatment NCCT may predict the outcomes after SWL. Consequently, we can use these predictors for selecting
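Each adjusted odds ratio above corresponds to beta = ln(OR) in a logistic regression. The sketch below reconstructs that relationship with a hypothetical intercept and binary high/low predictors; it is not the fitted model from the study, and the outcome direction (residual stone) is illustrative:

```python
import math

# Adjusted odds ratios quoted in the abstract; the mapping of each to a
# binary "high" predictor and the intercept below are illustrative only
ODDS_RATIOS = {"stone_size": 9.49, "stone_attenuation": 2.25,
               "total_fat_area": 2.20, "creatinine": 2.89}

def residual_stone_probability(high_flags, intercept=-2.0):
    # Logistic model: z = intercept + sum of beta = ln(OR) over predictors
    # flagged "high"; exp(beta) recovers the adjusted odds ratio
    z = intercept + sum(math.log(ODDS_RATIOS[name])
                        for name, high in high_flags.items() if high)
    return 1.0 / (1.0 + math.exp(-z))
```

By construction, toggling one predictor multiplies the odds by exactly its published odds ratio, which is what "adjusted OR" means in the abstract.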
Narin, B; Ozyörük, Y; Ulas, A
2014-05-30
This paper describes a two-dimensional code developed for analyzing the two-phase deflagration-to-detonation transition (DDT) phenomenon in granular, energetic, solid explosive ingredients. The two-dimensional model is constructed in full two-phase form, and based on a highly coupled system of partial differential equations involving basic flow conservation equations and some constitutive relations borrowed from one-dimensional studies in the open literature. The whole system is solved using an optimized high-order accurate, explicit, central-difference scheme with a selective-filtering/shock-capturing (SF-SC) technique, to augment the central differencing and prevent excessive dispersion. The source terms of the equations describing particle-gas interactions in terms of momentum and energy transfers make the equation system quite stiff, and hence its explicit integration difficult. To ease the difficulties, a time-split approach is used, allowing larger time steps. In the paper, the physical model for the sources of the equation system is given for a typical explosive, and several numerical calculations are carried out to assess the developed code. Microscale intergranular and/or intragranular effects including pore collapse, sublimation, pyrolysis, etc. are not taken into account for ignition and growth, and a basic temperature switch is applied in calculations to control ignition in the explosive domain. Results for the one-dimensional DDT phenomenon are in good agreement with experimental and computational results available in the literature. A typical shaped-charge wave-shaper case study is also performed to test the two-dimensional features of the code, and it is observed that the results are in good agreement with those of commercial software.
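The time-split treatment of the stiff interphase sources can be sketched as one explicit convective step followed by sub-cycled integration of the source terms. A simple two-velocity drag-relaxation source stands in for the full momentum/energy exchange terms here; the real code couples complete two-phase conservation laws:

```python
import numpy as np

def split_step(u, flux_div, source, dt, substeps=20):
    # Time-split integration sketch: advance the non-stiff convective part
    # with one explicit step of size dt, then integrate the stiff
    # interphase source terms with substeps smaller steps
    u = u + dt * flux_div(u)                      # convective update
    for _ in range(substeps):
        u = u + (dt / substeps) * source(u)       # sub-cycled stiff sources
    return u
```

With a drag coefficient of 100 and dt = 0.05, a single explicit step would amplify the velocity difference (growth factor 1 - 2*100*0.05 = -9), while 20 sub-steps keep the relaxation stable and momentum-conserving.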
NASA Astrophysics Data System (ADS)
Wu, Ming-Chang; Lin, Gwo-Fong
2017-03-01
During typhoons, accurate forecasts of rainfall are always desired for various kinds of disaster warning systems to reduce the impact of rainfall-induced disasters. However, rainfall forecasting, especially of very short-term (hourly) rainfall, is one of the most difficult tasks in hydrology due to the high variability in space and time and the complex physical process. In this study, the purpose is to provide effective forecasts of very short-term rainfall by means of the ensemble numerical weather prediction system in Taiwan. To this end, the ensemble forecasts of hourly rainfall from this ensemble numerical weather prediction system are analyzed to evaluate the performance. Furthermore, a methodology based on the principle of analogue prediction is proposed to effectively process these ensemble forecasts for improving the performance of very short-term rainfall forecasting. To clearly demonstrate the advantage of the proposed methodology, an actual application is conducted on a mountainous watershed to yield 1- to 6-h ahead forecasts during typhoon events. The results indicate that the proposed methodology performs better and is more flexible than the conventional one. Generally, the proposed methodology provides improved performance for very short-term rainfall forecasting, especially for 1- to 2-h ahead forecasting. The improved forecasts provided by the proposed methodology are expected to be useful to support disaster warning systems, such as flash-flood, landslide, and debris flow warning systems, during typhoons.
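The principle of analogue prediction can be sketched as a nearest-neighbour search over past ensemble forecasts: find the historical cases whose ensemble forecasts most resemble the current one, and average the observations that verified those cases. The numbers below are synthetic, and the distance metric is a simple stand-in for the paper's methodology:

```python
import numpy as np

def analogue_forecast(current_ens, past_ens, past_obs, k=3):
    # Analogue-based post-processing sketch: rank past ensemble forecasts
    # by Euclidean similarity to the current ensemble forecast, then
    # average the k verifying observations instead of using the raw
    # ensemble mean directly
    d = np.linalg.norm(past_ens - current_ens, axis=1)
    nearest = np.argsort(d)[:k]
    return past_obs[nearest].mean()
```

Because the forecast is built from what actually happened in similar past situations, systematic ensemble errors are implicitly corrected.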
Wang, Shiyao; Deng, Zhidong; Yin, Gang
2016-02-24
A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to provide precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive moving average (ARMA) equations with different structural parameters to build maximum likelihood models of the raw navigation data. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are used to remove outliers. Navigation data that satisfy a stationary stochastic process are then fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we perform extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method significantly smooths small jumps in bias and considerably reduces the position errors accumulated by DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is applied in practice in our driverless car.
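The predictive-model-plus-consensus-check idea can be illustrated with a minimal sketch, using a least-squares AR fit as a simplified stand-in for the paper's bank of ARMA models; the gating threshold and track data are hypothetical.

```python
import numpy as np

def ar_predict(x, order=2):
    """Least-squares AR(order) fit (a simplified stand-in for a bank of ARMA
    models) that predicts the next value of a 1-D navigation series."""
    x = np.asarray(x, float)
    X = np.array([x[t - order:t] for t in range(order, len(x))])  # lagged design matrix
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(x[-order:] @ coef)

def accept(measurement, prediction, sigma, n_sigma=3.0):
    """Consensus-style gate: reject a GPS fix whose innovation relative to
    the model prediction exceeds n_sigma standard deviations."""
    return abs(measurement - prediction) <= n_sigma * sigma

# Hypothetical 1-D track: constant-speed motion, so the AR model
# extrapolates the next position; a multipath jump is rejected.
track = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
pred = ar_predict(track)              # ≈ 8.0 for this linear track
print(accept(8.2, pred, sigma=0.5))   # plausible fix
print(accept(14.0, pred, sigma=0.5))  # multipath outlier
```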
Martin, Eric; Mukherjee, Prasenjit; Sullivan, David; Jansen, Johanna
2011-08-22
Profile-QSAR is a novel 2D predictive model building method for kinases. This "meta-QSAR" method models the activity of each compound against a new kinase target as a linear combination of its predicted activities against a large panel of 92 previously studied kinases comprising 115 assays. Profile-QSAR starts with a sparse, incomplete kinase-by-compound (KxC) activity matrix, used to generate Bayesian QSAR models for the 92 "basis-set" kinases. These Bayesian QSARs generate a complete "synthetic" KxC activity matrix of predictions. These synthetic activities are used as "chemical descriptors" to train partial least squares (PLS) models, from modest amounts of medium-throughput screening data, for predicting activity against new kinases. The Profile-QSAR predictions for the 92 kinases (115 assays) gave a median external R²(ext) = 0.59 on 25% held-out test sets. The method has proven accurate enough to predict pairwise kinase selectivities with a median correlation of R²(ext) = 0.61 for 958 kinase pairs with at least 600 common compounds. It has been further expanded by adding a "C(k)XC" cellular activity matrix to the KxC matrix to predict cellular activity for 42 kinase-driven cellular assays, with median R²(ext) = 0.58 for 24 target-modulation assays and R²(ext) = 0.41 for 18 cell-proliferation assays. The 2D Profile-QSAR, along with the 3D Surrogate AutoShim, are the foundations of an internally developed iterative medium-throughput screening (IMTS) methodology for virtual screening (VS) of compound archives as an alternative to experimental high-throughput screening (HTS). The method has been applied to 20 actual prospective kinase projects; biological results have so far been obtained in eight of them. Q² values ranged from 0.3 to 0.7. Hit rates at 10 μM for experimentally tested compounds varied from 25% to 80%, except in K5, which was a special case aimed specifically at finding "type II" binders, where none of the compounds were predicted to be
Oyedepo, Gbenga A; Wilson, Angela K
2010-08-26
The correlation consistent Composite Approach, ccCA [Deyonker, N. J.; Cundari, T. R.; Wilson, A. K. J. Chem. Phys. 2006, 124, 114104] has been demonstrated to predict accurate thermochemical properties of chemical species that can be described by a single configurational reference state, and at reduced computational cost, as compared with ab initio methods such as CCSD(T) used in combination with large basis sets. We have developed three variants of a multireference equivalent of this successful theoretical model. The method, called the multireference correlation consistent composite approach (MR-ccCA), is designed to predict the thermochemical properties of reactive intermediates, excited state species, and transition states to within chemical accuracy (e.g., 1 kcal/mol for enthalpies of formation) of reliable experimental values. In this study, we have demonstrated the utility of MR-ccCA: (1) in the determination of the adiabatic singlet-triplet energy separations and enthalpies of formation for the ground states for a set of diradicals and unsaturated compounds, and (2) in the prediction of energetic barriers to internal rotation, in ethylene and its heavier congener, disilene. Additionally, we have utilized MR-ccCA to predict the enthalpies of formation of the low-lying excited states of all the species considered. MR-ccCA is shown to give quantitative results without reliance upon empirically derived parameters, making it suitable for application to study novel chemical systems with significant nondynamical correlation effects.
Reichen, J; Widmer, T; Cotting, J
1991-09-01
We retrospectively analyzed the predictive accuracy of serial determinations of galactose elimination capacity in 61 patients with primary biliary cirrhosis. Death was predicted from the time at which the regression line describing the decline in galactose elimination capacity vs. time intersected a value of 4 mg·min⁻¹·kg⁻¹. Thirty-one patients exhibited decreasing galactose elimination capacity; in 11 patients it remained stable, and in 19 patients only one value was available. Among the patients with decreasing galactose elimination capacity, 10 died and three underwent liver transplantation; the prediction of death was accurate to within 7 ± 19 months. This criterion incorrectly predicted death in two patients with portal-vein thrombosis; otherwise, it performed as well as or better than the Mayo Clinic score. The latter was also tested on our patients and was found to adequately describe risk in yet another independent population of patients with primary biliary cirrhosis. Cox regression analysis, however, selected only bilirubin and galactose elimination capacity as independent predictors of death. We submit that serial determination of galactose elimination capacity in patients with primary biliary cirrhosis may be a useful adjunct for optimizing the timing of liver transplantation and for evaluating new pharmacological treatments of this disease.
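The extrapolation criterion, a regression line of galactose elimination capacity (GEC) against time intersecting 4 mg·min⁻¹·kg⁻¹, can be sketched directly; the patient data below are hypothetical values for illustration.

```python
import numpy as np

def predicted_death_time(times, gec, threshold=4.0):
    """Fit a straight line to serial GEC measurements and return the time at
    which the line crosses the threshold (4 mg/min/kg in the study).
    Returns None when GEC is not declining, in which case the criterion
    does not apply."""
    slope, intercept = np.polyfit(times, gec, 1)
    if slope >= 0:
        return None
    return float((threshold - intercept) / slope)

# Hypothetical patient: GEC falling from 7 to 5 mg/min/kg over 4 years.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # years since first measurement
g = np.array([7.0, 6.4, 6.1, 5.5, 5.0])   # GEC, mg/min/kg
print(predicted_death_time(t, g))          # ≈ 6.1 years
```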
Hybrid Semi-numerical Simulation Scheme to Predict Transducer Outputs of Acoustic Microscopes.
Nierla, Michael; Rupitsch, Stefan
2015-12-18
We present a semi-numerical simulation method called SIRFEM, which enables the efficient prediction of high-frequency transducer outputs. This is particularly important for acoustic microscopy, where the specimen under investigation is immersed in a coupling fluid. Conventional Finite Element (FE) simulations of such applications would consume too much computational power due to the required spatial and temporal discretization, especially for the coupling fluid between ultrasonic transducer and specimen. However, FE simulations are in most cases essential to capture the mode conversion at and inside the solid specimen as well as the wave propagation in its interior. SIRFEM reduces the computational effort of pure FE simulations by treating only the solid specimen and a small part of the fluid layer with FE. The propagation in the coupling fluid from transducer to specimen and back is handled by the so-called spatial impulse response (SIR). Through this hybrid approach, the number of elements as well as the number of time steps of the FE simulation can be reduced significantly, as presented for an axisymmetric setup. Three B-mode images of a plane 2-D setup, computed at a transducer center frequency of 20 MHz, show that SIRFEM is furthermore able to predict reflections at inner structures as well as multiple reflections between those structures and the specimen's surface. For the pure 2-D setup, the spatial impulse response of a curved-line transducer is derived and compared to the response function of a cylindrically focused aperture of negligible extent in the third spatial dimension.
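The SIR side of such a hybrid scheme rests on the classical Tupholme-Stepanishen relation: the pressure at a field point is the density times the time derivative of the transducer's normal surface velocity convolved with the geometry-dependent spatial impulse response. A minimal sketch, with a hypothetical box-car stand-in for the actual impulse response:

```python
import numpy as np

def pressure_from_sir(v_n, h, dt, rho=1000.0):
    """Received pressure p(t) = rho * d/dt [v_n(t) * h(t)], where * is time
    convolution of the normal surface velocity v_n with the spatial impulse
    response h of the transducer geometry."""
    conv = np.convolve(v_n, h) * dt       # full convolution, scaled by dt
    return rho * np.gradient(conv, dt)    # numerical time derivative

# Hypothetical 20 MHz windowed toneburst and a box-car impulse response.
fs = 200e6                                # sample rate, Hz
t = np.arange(0.0, 0.5e-6, 1.0 / fs)
v = np.sin(2 * np.pi * 20e6 * t) * np.hanning(t.size)
h = np.ones(10) / 10                      # stand-in impulse response
p = pressure_from_sir(v, h, 1.0 / fs)
```

In a real SIRFEM-style computation, h would be the analytically derived impulse response of the focused aperture rather than this placeholder.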
NASA Astrophysics Data System (ADS)
Limaye, A. B. S.; Lamb, M. P.
2015-12-01
Terraces cut into bedrock (strath) and sediment (fill-cut) offer key constraints on river evolution over millennial timescales, and are often interpreted to form during phases of increased river vertical incision driven by changes in climate or tectonics. Yet all actively meandering channels evolve their shapes through spatial and temporal changes in lateral erosion rates. Therefore, the minimal requirement for a meandering river to generate terraces is that the intrinsically unsteady lateral erosion rate be coupled with relief generation by vertical incision, which need not be unsteady. In principle, this basic mechanism for terrace formation by meandering rivers should be possible in all fluvial environments, including valleys with strath or fill-cut terraces, and may overprint signals from external drivers. We have used a numerical model of a vertically incising, meandering river to identify the age and geometric properties of autogenic terraces. Simulations indicate that autogenic terraces form with a recurrence timescale, set by the rate of relief generation, which may overlap with timescales of climate change. The autogenic terraces also have predictable geometries, which can include slope proportional to the ratio of vertical incision rate to lateral erosion rate, pairing, and continuous along-valley extent. We compare these simulation results to terrace age and geometry data from several well-studied natural river valleys that span a wide range of terrace sizes and geometries, rock types, tectonic settings, incision rates, and hypothesized formation mechanisms. In some cases, terrace age and geometric properties are consistent with formation by meandering at constant vertical incision rates. These similarities suggest that efforts to distinguish terraces that record signals from climatic and tectonic drivers are best focused on environments where terrace ages and geometries differ markedly from the predictions of a constant vertical incision model.
NASA Astrophysics Data System (ADS)
Seo, B. C.; Bradley, A.; Krajewski, W. F.
2015-12-01
The recent dual-polarization upgrade of the NEXRAD radars has helped improve the characterization of microphysical processes in precipitation and thus has enabled precipitation estimation based on the identified precipitation types. While this polarimetric capability promises enhanced accuracy in quantitative precipitation estimation (QPE), recent studies show that the polarimetric estimates are still affected by uncertainties arising from the radar beam geometry/sampling space associated with the vertical variability of precipitation. The authors first focus on evaluating the NEXRAD hydrometeor classification product using ground reference data (e.g., ASOS) that provide simple categories of the observed precipitation types (e.g., rain, snow, and freezing rain). They also investigate classification uncertainty features caused by the variability of precipitation between the ground and the altitudes at which the radar samples. Since this variability is closely related to near-surface atmospheric conditions (e.g., temperature), useful information that is not available from radar observations (e.g., critical thickness and temperature profile) is retrieved from numerical weather prediction (NWP) model data such as the Rapid Refresh (RAP)/High Resolution Rapid Refresh (HRRR). The NWP-retrieved information and polarimetric radar data are used together to improve the accuracy of near-surface precipitation type identification. The authors highlight major improvements and discuss limitations of the real-time application.
Alfvenic Turbulence from the Sun to 65 Solar Radii: Numerical predictions.
NASA Astrophysics Data System (ADS)
Perez, J. C.; Chandran, B. D. G.
2015-12-01
The upcoming NASA Solar Probe Plus (SPP) mission will fly to within 9 solar radii of the solar surface, about 7 times closer to the Sun than any previous spacecraft. This historic mission will gather unprecedented remote-sensing data and the first in-situ measurements of the plasma in the solar atmosphere, which will revolutionize our knowledge and understanding of turbulence and other processes that heat the solar corona and accelerate the solar wind. This close to the Sun, the background solar-wind properties are highly inhomogeneous. As a result, outward-propagating Alfven waves (AWs) arising from the random motions of the photospheric magnetic-field footpoints undergo strong non-WKB reflections and trigger a vigorous turbulent cascade. In this talk I will discuss recent progress in the understanding of reflection-driven Alfven turbulence in this scenario by means of high-resolution numerical simulations, with the goal of predicting the detailed nature of the velocity and magnetic-field fluctuations that the SPP mission will measure. In particular, I will place special emphasis on relating the simulations to the physical mechanisms that might govern the radial evolution of the turbulence spectra of outward/inward-propagating fluctuations and discuss the conditions that lead to universal power laws.
NASA Astrophysics Data System (ADS)
Plant, N. G.; Long, J.; Dalyander, S.; Thompson, D.; Miselis, J. L.
2013-12-01
Natural resource and hazard management of barrier islands requires an understanding of geomorphic changes associated with long-term processes and storms. Uncertainty exists in understanding how long-term processes interact with the geomorphic changes caused by storms and the resulting perturbations of the long-term evolution trajectories. We use high-resolution data sets to initialize and correct high-fidelity numerical simulations of oceanographic forcing and resulting barrier island evolution. We simulate two years of observed storms to determine the individual and cumulative impacts of these events. Results are separated into cross-shore and alongshore components of sediment transport and compared with observed topographic and bathymetric changes during these time periods. The discrete island change induced by these storms is integrated with previous knowledge of long-term net alongshore sediment transport to project island evolution. The approach has been developed and tested using data collected at the Chandeleur Island chain off the coast of Louisiana (USA). The simulation time period included impacts from tropical and winter storms, as well as a human-induced perturbation associated with construction of a sand berm along the island shoreline. The predictions and observations indicated that storm and long-term processes both contribute to the migration, lowering, and disintegration of the artificial berm and natural island. Further analysis will determine the relative importance of cross-shore and alongshore sediment transport processes and the dominant time scales that drive each of these processes and subsequent island morphologic response.
Prediction of Quality Change During Thawing of Frozen Tuna Meat by Numerical Calculation I
NASA Astrophysics Data System (ADS)
Murakami, Natsumi; Watanabe, Manabu; Suzuki, Toru
A numerical calculation method has been developed to determine the optimum thawing method for minimizing the increase of metmyoglobin content (metMb%), an indicator of color change in frozen tuna meat during thawing. The calculation method comprises the following two steps: a) calculation of the temperature history in each part of the frozen tuna meat during thawing by the control volume method under the assumption of one-dimensional heat transfer, and b) calculation of metMb% by combining the calculated temperature history, the Arrhenius equation, and a first-order reaction equation for the rate of increase of metMb%. Thawing experiments measuring the temperature history of frozen tuna meat were carried out under rapid-thawing and slow-thawing conditions to compare the experimental data with the calculated temperature history as well as the increase of metMb%. The calculated results agreed well with the experimental data. The proposed simulation method would be useful for predicting the optimum thawing conditions in terms of metMb%.
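Step (b), first-order metMb% growth with an Arrhenius rate constant integrated along the computed temperature history, can be sketched as follows; the pre-exponential factor, activation energy, and initial/limiting fractions are hypothetical illustration values, not the paper's fitted kinetic parameters.

```python
import numpy as np

def metmb_fraction(temps_K, dt, A, Ea, m0=0.05, m_inf=1.0):
    """Integrate first-order metMb formation along a temperature history:
    dm/dt = k(T) * (m_inf - m), with Arrhenius rate k(T) = A * exp(-Ea/(R*T)).
    temps_K is the temperature at each time step, dt the step in hours."""
    R = 8.314  # gas constant, J/(mol K)
    m = m0
    for T in temps_K:
        k = A * np.exp(-Ea / (R * T))
        m += dt * k * (m_inf - m)      # explicit Euler update
    return m

# Hypothetical kinetics (A in 1/h, Ea in J/mol): compare a 12 h slow thaw
# with a 2 h rapid thaw, both ramping from -20 degC to +5 degC.
A, Ea, dt = 1e18, 1.0e5, 0.01
m_slow = metmb_fraction(np.linspace(253.0, 278.0, 1200), dt, A, Ea)
m_rapid = metmb_fraction(np.linspace(253.0, 278.0, 200), dt, A, Ea)
print(m_slow, m_rapid)   # slower thawing yields more metMb
```

The ordering m_slow > m_rapid reflects the paper's premise: slow thawing holds the meat longer at temperatures where the Arrhenius rate is appreciable.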
NASA Astrophysics Data System (ADS)
Giron Palomares, Jose Benjamin; Hsieh, Sheng-Jen
2014-05-01
A methodology based on active infrared thermography for studying and characterizing hidden solder joint shapes in a multi-cover PCB assembly was investigated. A numerical model was developed to simulate the active thermography methodology and was shown to determine the grand average cooling rates with maximum errors of 8.85% (one cover) and 13.36% (two covers). A parametric analysis was performed by varying the number of covers, the heat flux provided, and the heating time. Grand average cooling-rate distances among contiguous solder joint shapes, as well as solder joint discriminability, were found to be directly proportional to heat flux and inversely proportional to the number of covers and the heating time. Finally, a mathematical model was developed to determine the total amount of energy needed to discriminate among hidden solder joints with "good" discriminability for one and two covers, and "regular" discriminability for up to five covers. The mathematical model was shown to predict the total amount of energy needed to achieve "good" discriminability for one cover within 10% error with respect to the experimental active thermography model.
Numerical studies of SMBH magnetospheres and observational predictions for AGNs and inner jets
NASA Astrophysics Data System (ADS)
Ford, Alex; Medvedev, Mikhail V.
2017-01-01
Electrodynamic, radiative and plasma processes around SMBHs in AGNs determine how relativistic jets are launched and how the black hole energy is extracted. The cornerstone process here is plasma production via the electron-positron cascade in the so-called ``gap'' region of a SMBH force-free magnetosphere. This multi-stage process, involving particle acceleration, Compton up-scattering of photons and production of e+/- secondaries, is explored numerically by computing the radial development of the entire cascade and the accompanying plasma-physical and radiative processes. Here we show how the e+/- plasma production depends on the black hole mass and spin, the amount and spectrum of the ambient photons and magnetic fields, and other parameters, and we provide empirical scaling relations. We also present the full structure of the gap region and make solid observational predictions for X-ray and gamma-ray fluxes and spectra, which can readily be compared with observations of AGNs and the inner regions of their jets. Partially supported by DOE grant DE-SC0016368.
Edge resonance in semi-infinite thick pipe: numerical predictions and measurements.
Ratassepp, M; Klauson, A; Chati, F; Léon, F; Maze, G
2008-08-01
This paper presents theoretical and experimental studies of the interaction of the axisymmetric longitudinal guided wave L(0,2) with the free edge of a pipe. A numerical method based on normal-mode superposition is applied to predict the edge resonance through an analysis of the dispersion relations of the separate modes. In parallel, finite element analysis and experimental measurements confirm the existence of the edge resonance in the pipe in the case of L(0,2) wave incidence. It is shown that the edge resonance is mainly caused by the first pair of complex modes. Additionally, the behavior of the edge resonance phenomenon as a function of the curvature of the pipe is studied. The displacement amplitudes measured at the edge demonstrate that the edge resonance depends on the frequency and the thickness-to-midradius ratio of the pipe, and that it weakens in thicker pipes, as the growing difference between the outer and inner radii destroys the symmetry. The reflected energy amplitudes show that at the resonance frequencies the incident wave is strongly converted to the L(0,1) and L(0,3) modes, depending also on the curvature parameter of the pipe.
NASA Astrophysics Data System (ADS)
Liu, Y.; Tang, N.
2014-07-01
In this paper, the occurrence of extremely low relative humidity observations through a deep layer of the low- and mid-troposphere is studied on the basis of global radiosonde observations from December 2008 to November 2009, together with humidity retrievals for the same period from the Formosa Satellite mission-3/Constellation Observing System for Meteorology, Ionosphere, and Climate (FORMOSAT-3/COSMIC, referred to as COSMIC hereafter). Results show that these extremely dry relative humidity observations are remarkably widespread in the worldwide operational radiosonde data. Globally, the annual average occurrence probability of extremely dry relative humidity is 4.2%. These measurements usually occur between 20° and 40° latitude in both the Northern and Southern Hemispheres, and at heights from 700 to 450 hPa in the low- and mid-troposphere. Winter and spring are the favoured seasons for these extremely dry humidity observations, with maximum ratios of 9.53% in the Northern Hemisphere and 16.82% in the Southern Hemisphere. The phenomenon is mainly related to the performance of the radiosonde humidity sensor and the cloud types traversed by the radiosonde balloon. These extremely low relative humidity observations are erroneous: they cannot represent the real atmospheric state and are likely caused by failure of the humidity sensor. Nevertheless, they have been archived as formal data. If no quality control procedure is applied, they will degrade the reliability of numerical weather prediction and the analysis of weather and climate.
NASA Astrophysics Data System (ADS)
Mulcahy, J. P.; Walters, D. N.; Bellouin, N.; Milton, S. F.
2014-05-01
The inclusion of the direct and indirect radiative effects of aerosols in high-resolution global numerical weather prediction (NWP) models is increasingly recognised as important for improving the accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations is evaluated, including three-dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme and, finally, initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important for large sporadic aerosol events such as major dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing long-wave radiation over West Africa due to a better representation of dust. However, uncertainties in dust optical properties propagate to its direct effect and the subsequent model response. Inclusion of the indirect aerosol effects improves surface radiation biases at the North Slope of Alaska ARM site due to lower cloud amounts in high-latitude clean-air regions. This leads to improved temperature and height forecasts in this region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short-range forecasts. However, the indirect aerosol effect leads to a strengthening of the low-level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. Regional impacts on the African Easterly Jet (AEJ) are also presented, with the large dust loading in the aerosol climatology enhancing the heat low over West Africa and weakening the AEJ. This study highlights the
NASA Astrophysics Data System (ADS)
Paulo, R. M. F.; Carlone, P.; Valente, R. A. F.; Teixeira-Dias, F.; Palazzo, G. S.
2016-10-01
In this work a numerical model is proposed to simulate the Friction Stir Welding (FSW) process in AA2024-T3 plates. The model includes a softening law that accounts for the temperature history, so that the hardness distribution across a welded plate can be predicted. The model was validated against experimental measurements of the hardness in the plate cross-section, and it gives an acceptable prediction of the material softening in the Heat-Affected Zone (HAZ).
NASA Astrophysics Data System (ADS)
Hansen-Goos, Hendrik
2016-04-01
We derive an analytical equation of state for the hard-sphere fluid that is within 0.01% of computer simulations for the whole range of the stable fluid phase. In contrast, the commonly used Carnahan-Starling equation of state deviates by up to 0.3% from simulations. The derivation uses the functional form of the isothermal compressibility from the Percus-Yevick closure of the Ornstein-Zernike relation as a starting point. Two additional degrees of freedom are introduced, which are constrained by requiring the equation of state to (i) recover the exact fourth virial coefficient B4 and (ii) involve only integer coefficients on the level of the ideal gas, while providing best possible agreement with the numerical result for B5. Virial coefficients B6 to B10 obtained from the equation of state are within 0.5% of numerical computations, and coefficients B11 and B12 are within the error of numerical results. We conjecture that even higher virial coefficients are reliably predicted.
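For reference, the two closed forms the abstract compares against can be written down directly: the Carnahan-Starling equation of state and the Percus-Yevick compressibility-route result whose functional form serves as the derivation's starting point. The new equation of state itself is not reproduced here.

```python
def z_carnahan_starling(eta):
    """Carnahan-Starling compressibility factor Z = pV/(N k T) of the
    hard-sphere fluid as a function of packing fraction eta."""
    return (1 + eta + eta**2 - eta**3) / (1 - eta)**3

def z_percus_yevick_compr(eta):
    """Percus-Yevick result via the compressibility route, the functional
    form used as the starting point of the derivation."""
    return (1 + eta + eta**2) / (1 - eta)**3

# The low-density expansion of Z_CS is 1 + 4*eta + 10*eta^2 + 18*eta^3 + ...,
# reproducing the exact reduced virial coefficients B2 = 4 and B3 = 10 but
# only approximating B4 (18 vs. the exact 18.36...).
print(z_carnahan_starling(0.3))
```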