IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
Knowledge of the structural mass fraction (or mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages; the need for such analyses is increased by the rapid evolution of space programs and by the necessity of adapting them to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance in the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory, and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, these estimates can be consolidated through a specific analysis activity involving several techniques, implying additional effort and time. The present empirical approach thus yields only approximate values (i.e., not necessarily accurate or consistent), inducing some inaccuracy in the results, consequent difficulties in ranking performance across multiple options, and an increase in processing time. This is a classical harsh fact of preliminary design system studies, insufficiently discussed to date. It therefore appears highly desirable to have, for all evaluation activities, a reliable, fast, and easy-to-use weight or mass fraction prediction method. Additionally, such a method should allow a pre-selection among alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, expressed from a limited number of parameters available in the early steps of a project. It is based on the innovative use of a statistical method applicable to a variable that is a function of several independent parameters. A specific polynomial generator
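The parametric idea above can be illustrated with a small least-squares sketch. Everything below (the choice of parameters, the polynomial form, and the data) is invented for illustration; it is not the formulation or data from the paper.

```python
import numpy as np

# Hypothetical illustration: fit a low-order polynomial model of the stage
# structural mass fraction from two early-design parameters (propellant
# mass and thrust). Synthetic "observed" fractions lie in the 0.05-0.15
# range quoted in the abstract; the functional form is an assumption.

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 1.0, 40)     # propellant mass, normalised by 200 t
th = rng.uniform(0.05, 1.0, 40)    # thrust, normalised by 2000 kN
frac = 0.05 + 0.05 * p + 0.02 * th + 0.01 * p * th   # synthetic truth

# design matrix for a first-order polynomial with a cross term
X = np.column_stack([np.ones_like(p), p, th, p * th])
coef, *_ = np.linalg.lstsq(X, frac, rcond=None)

predicted = X @ coef
print("max abs fit error:", np.max(np.abs(predicted - frac)))
```

Because the synthetic data are exactly polynomial, the fit recovers the coefficients to machine precision; real mass-fraction data would of course leave a residual scatter.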
An effective method for accurate prediction of the first hyperpolarizability of alkalides.
Wang, Jia-Nan; Xu, Hong-Liang; Sun, Shi-Ling; Gao, Ting; Li, Hong-Zhi; Li, Hui; Su, Zhong-Min
2012-01-15
The proper theoretical calculation method for nonlinear optical (NLO) properties is a key factor in designing excellent NLO materials. Yet it is a difficult task to obtain accurate NLO properties for large-scale molecules. In the present work, an effective intelligent computing method, called the extreme learning machine-neural network (ELM-NN), is proposed to accurately predict the first hyperpolarizability (β(0)) of alkalides from low-accuracy first hyperpolarizability values. Compared with a neural network (NN) and a genetic algorithm neural network (GANN), the root-mean-square deviations of the values predicted by ELM-NN, GANN, and NN from their MP2 counterparts are 0.02, 0.08, and 0.17 a.u., respectively. This suggests that the values predicted by ELM-NN are more accurate than those calculated by the NN and GANN methods. Another strength of ELM-NN is its ability to reach a high accuracy level at less computing cost: experimental results show that the computing time of MP2 is 2.4-4 times that of ELM-NN. Thus, the proposed method is a potentially powerful tool in computational chemistry, and it may predict β(0) of large-scale molecules, which is difficult to obtain by high-accuracy theoretical methods due to dramatically increasing computational cost.
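A minimal sketch of the extreme learning machine idea behind ELM-NN: a single hidden layer with random, fixed weights, so that only the output weights need solving, in closed form by least squares. The descriptors and target function below are synthetic stand-ins for the low-accuracy-to-MP2 mapping described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: random hidden layer, least-squares output."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-1, 1, (200, 3))                  # toy "low-accuracy" descriptors
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2          # toy target property
model = elm_fit(X[:150], y[:150])
rmse = np.sqrt(np.mean((elm_predict(model, X[150:]) - y[150:]) ** 2))
print("test RMSE:", rmse)
```

The speed advantage reported in the abstract comes from exactly this structure: no iterative backpropagation, just one linear solve.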
Vincent, Mark A; Hillier, Ian H
2014-08-25
The accurate prediction of the adsorption energies of unsaturated molecules on graphene in the presence of water is essential for the design of molecules that can modify its properties and that can aid its processability. We here show that a semiempirical MO method corrected for dispersive interactions (PM6-DH2) can predict the adsorption energies of unsaturated hydrocarbons, and the effect of substitution on these values, to an accuracy comparable to DFT values and in good agreement with experiment. The adsorption energies of TCNE, TCNQ, and a number of sulfonated pyrenes are also predicted, along with the effect of hydration using the COSMO model.
A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes
Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.
2004-12-01
We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
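The combination of intergenic distance with a comparative signal can be sketched as a naive-Bayes log-odds score. The distributions and probabilities below are invented placeholders, not the trained per-genome model of the paper; they only show the shape of the computation.

```python
import math

def log_odds_operon(distance_bp, conserved_adjacent):
    """Toy log-odds that an adjacent gene pair lies in the same operon."""
    # Distance model: same-operon pairs tend to have short intergenic gaps.
    # Exponential likelihoods with invented means (50 bp vs 300 bp).
    ll_operon = math.log(1 / 50) - distance_bp / 50.0
    ll_not = math.log(1 / 300) - distance_bp / 300.0
    score = ll_operon - ll_not
    # Comparative signal: conserved adjacency across genomes favours operons.
    score += math.log(0.8 / 0.3) if conserved_adjacent else math.log(0.2 / 0.7)
    return score

print(log_odds_operon(20, True))     # short gap + conserved: positive score
print(log_odds_operon(400, False))   # long gap, not conserved: negative score
```

A real implementation would fit the two distance distributions per genome from sequence data alone, which is what lets the published method tailor itself to each prokaryote.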
A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows
NASA Astrophysics Data System (ADS)
Bijleveld, H. A.; Veldman, A. E. P.
2014-12-01
A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. The method is suitable for offshore wind turbine design software, as it is accurate and computationally cheap. This study shows results for a NACA 0012 airfoil: the two applied solvers converge to the experimental values as the grid is refined. We also show that in separation the eigenvalues remain positive, thus avoiding the Goldstein singularity at separation. In 3D, we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method to rotating flows. These capabilities indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.
NASA Astrophysics Data System (ADS)
Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.
2016-07-01
In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. Parameters concerning the location of the observation points (magnetometers) are therefore studied to this end. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study enable placing the magnetometers close to the EUT, thus achieving a high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as an EUT.
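The distance power law mentioned above, B(r) = k / r^n, can be recovered from near-field samples with a log-log linear fit. The data below are a synthetic dipole (n = 3), not the EUT measurements from the paper.

```python
import numpy as np

# Synthetic near-field samples of a dipole-like source: B = k / r^3.
r = np.array([0.1, 0.15, 0.2, 0.3, 0.5])   # magnetometer distances [m]
B = 2.0e-6 / r ** 3                        # synthetic field magnitudes [T]

# In log space the power law is linear: log B = log k - n * log r,
# so the exponent n is minus the slope of a straight-line fit.
slope, intercept = np.polyfit(np.log(r), np.log(B), 1)
n_est = -slope
print("estimated fall-off exponent:", round(n_est, 3))   # -> 3.0
```

With noisy measurements the same fit gives a least-squares estimate of the exponent, which is what makes placing magnetometers close to the source (high SNR) attractive.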
Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer
2017-04-01
Recently, Artificial Intelligence (AI) has been widely used in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most popular machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.
Wang, Jia-Nan; Jin, Jun-Ling; Geng, Yun; Sun, Shi-Ling; Xu, Hong-Liang; Lu, Ying-Hua; Su, Zhong-Min
2013-03-15
Recently, the extreme learning machine neural network (ELMNN) was proposed as a valid computing method for successfully predicting nonlinear optical properties (Wang et al., J. Comput. Chem. 2012, 33, 231). In this work, we first follow this line of work to predict electronic excitation energies using the ELMNN method. Significantly, the root mean square deviation between the predicted and experimental electronic excitation energies of 90 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene (BODIPY) derivatives has been reduced to 0.13 eV. Second, four groups of molecular descriptors are considered when building the computing models. The results show that the quantum chemical descriptors have the closest intrinsic relation with the electronic excitation energy values. Finally, a user-friendly web server (EEEBPre: Prediction of electronic excitation energies for BODIPY dyes), freely accessible at http://202.198.129.218, has been built for prediction. This web server returns predicted electronic excitation energy values of BODIPY dyes that are highly consistent with the experimental values. We hope that this web server will be helpful to theoretical and experimental chemists in related research.
Fast and accurate numerical method for predicting gas chromatography retention time.
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-08-07
Predictive modeling of gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed, the main one based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate parameters and to optimize temperature programming in gas chromatography for the separation of compounds. Different authors have proposed numerical methods for solving these models, but those methods demand greater computational time. Hence, a new method for solving the predictive model of analyte retention time is presented. The algorithm is an alternative to traditional methods because it recasts the task as a root-finding problem within defined intervals. The proposed approach allows retention time (tr) calculation with an accuracy determined by the user, and with significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
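The root-finding view can be sketched as follows: under a temperature programme T(t), the retention time t_r satisfies the migration equation integral from 0 to t_r of dt / (t_M (1 + k(T(t)))) = 1, so t_r is the root of (integral - 1) on a bracketing interval. The retention-factor model, temperature ramp, and constants below are illustrative assumptions, not the authors' model.

```python
import math

T_M = 60.0                                    # hold-up time [s], assumed constant

def temperature(t):
    return 320.0 + 0.5 * t                    # linear ramp: 320 K + 0.5 K/s

def k(T):
    return math.exp(-10.0 + 4500.0 / T)       # toy thermodynamic model ln k = a + b/T

def migrated(t_end, steps=2000):
    """Trapezoidal integral of dt / (t_M * (1 + k(T(t)))) from 0 to t_end."""
    ts = [t_end * i / steps for i in range(steps + 1)]
    f = [1.0 / (T_M * (1.0 + k(temperature(t)))) for t in ts]
    h = t_end / steps
    return h * (sum(f) - 0.5 * (f[0] + f[-1]))

def retention_time(lo=1.0, hi=5000.0, tol=1e-6):
    """Bisection: migrated(t) - 1 changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if migrated(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_r = retention_time()
print("predicted retention time [s]:", round(t_r, 1))
```

Any bracketing root-finder (bisection here, but Brent's method in practice) converges in a few dozen function evaluations, which is the source of the speed advantage over marching the migration equation step by step.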
Krokhotin, Andrey; Dokholyan, Nikolay V
2015-01-01
Computational methods can provide significant insights into RNA structure and dynamics, bridging the gap in our understanding of the relationship between structure and biological function. Simulations enrich and enhance our understanding of data derived on the bench, as well as provide feasible alternatives to costly or technically challenging experiments. Coarse-grained computational models of RNA are especially important in this regard, as they allow analysis of events occurring in timescales relevant to RNA biological function, which are inaccessible through experimental methods alone. We have developed a three-bead coarse-grained model of RNA for discrete molecular dynamics simulations. This model is efficient in de novo prediction of short RNA tertiary structure, starting from RNA primary sequences of less than 50 nucleotides. To complement this model, we have incorporated additional base-pairing constraints and have developed a bias potential reliant on data obtained from hydroxyl probing experiments that guide RNA folding to its correct state. By introducing experimentally derived constraints to our computer simulations, we are able to make reliable predictions of RNA tertiary structures up to a few hundred nucleotides. Our refined model exemplifies a valuable benefit achieved through integration of computation and experimental methods.
NASA Astrophysics Data System (ADS)
Du, Xia; Zhao, Dong-Xia; Yang, Zhong-Zhi
2013-02-01
A new approach to characterize and measure bond strength has been developed. First, we propose a method to accurately calculate the potential acting on an electron in a molecule (PAEM) at the saddle point along a chemical bond in situ, denoted by Dpb. Then, a direct method to quickly evaluate bond strength is established. We choose some familiar molecules as models for benchmarking this method. As a practical application, the Dpb values of base pairs in DNA along C-H and N-H bonds are obtained for the first time. All results show that C7-H of A-T and C8-H of G-C are the relatively weak bonds that are the injured positions in DNA damage. The significance of this work is twofold: (i) a method is developed to calculate Dpb of various sizable molecules in situ quickly and accurately; (ii) this work demonstrates the feasibility of quickly predicting bond strength in macromolecules.
NASA Astrophysics Data System (ADS)
Hughes, Timothy J.; Kandathil, Shaun M.; Popelier, Paul L. A.
2015-02-01
As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecupole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol-1, decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol-1.
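Kriging is Gaussian-process regression, and its core can be sketched in a few lines. The sketch below predicts a single scalar from geometric features with a squared-exponential kernel and hand-fixed hyperparameters; the published models instead predict multipole moments (and are trained with optimised hyperparameters), and the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel(A, B, length=1.0):
    """Squared-exponential (RBF) covariance between two sets of points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

X_train = rng.uniform(-2, 2, (80, 2))                       # toy "geometries"
y_train = np.sin(X_train[:, 0]) * np.cos(X_train[:, 1])     # toy target surface

# Noise-free kriging: solve K alpha = y once, then predict k(x*, X) @ alpha.
K = kernel(X_train, X_train) + 1e-6 * np.eye(len(X_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

X_test = rng.uniform(-2, 2, (20, 2))
y_pred = kernel(X_test, X_train) @ alpha
err = np.max(np.abs(y_pred - np.sin(X_test[:, 0]) * np.cos(X_test[:, 1])))
print("max abs prediction error:", err)
```

The interpolating character of kriging, exact at the training points and smooth between them, is what makes it attractive for mapping moments to nuclear coordinates.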
NASA Astrophysics Data System (ADS)
An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.
2017-01-01
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in space and/or time. Although this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong
2016-01-01
Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing. PMID:27441628
NASA Astrophysics Data System (ADS)
Russo, A.; Zuccarello, B.
2007-07-01
The paper presents a hybrid theoretical-numerical method for determining the stress distribution in composite laminates containing a circular hole and subjected to uniaxial tensile loading. The method is based upon an appropriate corrective function that allows a simple and rapid evaluation of the stress distribution in a generic plate of finite width with a hole, starting from the theoretical stress distribution in an infinite plate with the same hole geometry and material. In order to verify the accuracy of the proposed method, various numerical and experimental tests have been performed considering different laminate lay-ups; in particular, the experimental results have shown that the combined use of the proposed method and the well-known point-stress criterion leads to reliable strength predictions for GFRP or CFRP laminates with a circular hole.
NASA Astrophysics Data System (ADS)
Tan, Samuel; Barrera Acevedo, Santiago; Izgorodina, Ekaterina I.
2017-02-01
The accurate calculation of intermolecular interactions is important to our understanding of properties in large molecular systems. The high computational cost of the current "gold standard" method, coupled cluster with singles and doubles and perturbative triples (CCSD(T)), limits its application to small- to medium-sized systems. Second-order Møller-Plesset perturbation (MP2) theory is a cheaper alternative for larger systems, although at the expense of decreased accuracy, especially when treating van der Waals complexes. In this study, a new modification of the spin-component scaled MP2 method was proposed for a wide range of intermolecular complexes including two well-known datasets, S22 and S66, and a large dataset of ionic liquids consisting of 174 single ion pairs, IL174. It was found that the spin ratio, εΔs = E_OS(INT) / E_SS(INT), calculated as the ratio of the opposite-spin component to the same-spin component of the interaction correlation energy, fell in the range of 0.1 to 1.6, in contrast to the range of 3-4 usually observed for the ratio of absolute correlation energies, εs = E_OS / E_SS, in individual molecules. Scaled coefficients were found to become negative when the spin ratio fell in close proximity to 1.0, and therefore the studied intermolecular complexes were divided into two groups: (1) complexes with εΔs < 1 and (2) complexes with εΔs ≥ 1. A separate set of coefficients was obtained for each group. Exclusion of counterpoise correction during scaling was found to produce superior results due to decreased error. Among a series of Dunning's basis sets, cc-pVTZ and cc-pVQZ were found to be the best performing ones, with a mean absolute error of 1.4 kJ mol-1 and maximum errors below 6.2 kJ mol-1. The new modification, spin-ratio scaled second-order Møller-Plesset perturbation, treats both dispersion-driven and hydrogen-bonded complexes equally well, thus validating its robustness with respect to the interaction type ranging from ionic
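The scaling step itself is a two-coefficient linear combination, E_scaled = c_OS E_OS + c_SS E_SS, with the coefficient pair selected by the spin ratio as described above. The numeric coefficients below are placeholders for illustration, not the fitted values from the paper.

```python
def scaled_interaction_energy(e_os, e_ss):
    """Spin-ratio-dependent scaling of an MP2 interaction correlation energy.

    e_os, e_ss: opposite-spin and same-spin interaction correlation
    energy components (same units, e.g. kJ/mol). Coefficients are
    invented placeholders standing in for the two fitted groups.
    """
    eps = e_os / e_ss                  # spin ratio of the interaction energy
    if eps < 1.0:                      # group 1 coefficients (assumed)
        c_os, c_ss = 1.20, 0.60
    else:                              # group 2 coefficients (assumed)
        c_os, c_ss = 1.10, 0.70
    return c_os * e_os + c_ss * e_ss

# Two hypothetical complexes (components in kJ/mol):
print(scaled_interaction_energy(-8.0, -10.0))   # eps = 0.8 -> group 1
print(scaled_interaction_energy(-12.0, -8.0))   # eps = 1.5 -> group 2
```

The branch on the spin ratio mirrors the paper's split of complexes into the εΔs < 1 and εΔs ≥ 1 groups, each with its own coefficient pair.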
A predictable and accurate technique with elastomeric impression materials.
Barghi, N; Ontiveros, J C
1999-08-01
A method for obtaining more predictable and accurate final impressions with polyvinylsiloxane impression materials in conjunction with stock trays is proposed and tested. Heavy impression material is used in advance for construction of a modified custom tray, while extra-light material is used for obtaining a more accurate final impression.
Wang, Shiyao; Deng, Zhidong; Yin, Gang
2016-02-24
A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation data. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.
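The consensus-check idea can be sketched in one dimension: predict the next position from recent samples, reject a new GPS fix whose deviation from the prediction exceeds a grid-size threshold, and otherwise fuse the two. The model order (a constant-velocity extrapolation standing in for the ARMA models), threshold, and averaging weights below are illustrative assumptions.

```python
def ar_predict(history):
    """Constant-velocity extrapolation over the last two samples."""
    return 2 * history[-1] - history[-2]

def fuse(history, gps_fix, grid_size=0.5):
    """Fuse a GPS fix with the model prediction, rejecting outliers."""
    pred = ar_predict(history)
    if abs(gps_fix - pred) > grid_size:   # consensus check failed (e.g. multipath)
        return pred                       # fall back on the prediction
    return 0.5 * (pred + gps_fix)         # simple average as a stand-in for fusion

track = [0.0, 1.0, 2.0, 3.0]              # smooth 1-D trajectory samples
print(fuse(track, 4.1))                   # plausible fix: fused with prediction
print(fuse(track, 9.0))                   # multipath-like jump: rejected
```

Tying the rejection threshold to the occupancy-grid cell size is what lets the method pre-specify the standard deviation of the fused output, as the abstract notes.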
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, has led to rapid evolution of the process, including reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and the revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
Martin, Eric; Mukherjee, Prasenjit; Sullivan, David; Jansen, Johanna
2011-08-22
Profile-QSAR is a novel 2D predictive model building method for kinases. This "meta-QSAR" method models the activity of each compound against a new kinase target as a linear combination of its predicted activities against a large panel of 92 previously studied kinases comprising 115 assays. Profile-QSAR starts with a sparse, incomplete kinase-by-compound (KxC) activity matrix, used to generate Bayesian QSAR models for the 92 "basis-set" kinases. These Bayesian QSARs generate a complete "synthetic" KxC activity matrix of predictions. These synthetic activities are used as "chemical descriptors" to train partial-least squares (PLS) models, from modest amounts of medium-throughput screening data, for predicting activity against new kinases. The Profile-QSAR predictions for the 92 kinases (115 assays) gave a median external R²(ext) = 0.59 on 25% held-out test sets. The method has proven accurate enough to predict pairwise kinase selectivities with a median correlation of R²(ext) = 0.61 for 958 kinase pairs with at least 600 common compounds. It has been further expanded by adding a "C(k)XC" cellular activity matrix to the KxC matrix to predict cellular activity for 42 kinase-driven cellular assays, with median R²(ext) = 0.58 for 24 target modulation assays and R²(ext) = 0.41 for 18 cell proliferation assays. The 2D Profile-QSAR, along with the 3D Surrogate AutoShim, are the foundations of an internally developed iterative medium-throughput screening (IMTS) methodology for virtual screening (VS) of compound archives as an alternative to experimental high-throughput screening (HTS). The method has been applied to 20 actual prospective kinase projects. Biological results have so far been obtained in eight of them. Q² values ranged from 0.3 to 0.7. Hit-rates at 10 uM for experimentally tested compounds varied from 25% to 80%, except in K5, which was a special case aimed specifically at finding "type II" binders, where none of the compounds were predicted to be
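The core Profile-QSAR move, using predicted panel activities as descriptors for a linear model of a new target, can be sketched as follows. The panel matrix and new-target activities below are synthetic, and ridge regression stands in for the PLS step to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(3)
n_compounds, n_panel = 120, 12

# Synthetic stand-in for the "synthetic KxC matrix": predicted activities of
# each compound against a small panel of previously modelled targets.
panel = rng.normal(size=(n_compounds, n_panel))
true_w = rng.normal(size=n_panel)                        # hidden linear relation
activity = panel @ true_w + 0.05 * rng.normal(size=n_compounds)

# Train a regularised linear model on 90 compounds (ridge in place of PLS).
train, test = slice(0, 90), slice(90, None)
lam = 1e-3
A = panel[train].T @ panel[train] + lam * np.eye(n_panel)
w = np.linalg.solve(A, panel[train].T @ activity[train])

# External validation on the held-out 30 compounds.
pred = panel[test] @ w
ss_res = np.sum((pred - activity[test]) ** 2)
ss_tot = np.sum((activity[test] - activity[test].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print("external R^2:", round(r2, 3))
```

The held-out R² here plays the role of the paper's R²(ext); the real method's value comes from the panel predictions being far cheaper to obtain than new experimental screens.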
Nakatsuji, Hiroshi
2012-09-18
Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement
Accurate methods for large molecular systems.
Gordon, Mark S; Mullin, Jonathan M; Pruitt, Spencer R; Roskop, Luke B; Slipchenko, Lyudmila V; Boatz, Jerry A
2009-07-23
Three exciting new methods that address the accurate prediction of processes and properties of large molecular systems are discussed. The systematic fragmentation method (SFM) and the fragment molecular orbital (FMO) method both decompose a large molecular system (e.g., protein, liquid, zeolite) into small subunits (fragments) in very different ways that are designed to both retain the high accuracy of the chosen quantum mechanical level of theory while greatly reducing the demands on computational time and resources. Each of these methods is inherently scalable and is therefore eminently capable of taking advantage of massively parallel computer hardware while retaining the accuracy of the corresponding electronic structure method from which it is derived. The effective fragment potential (EFP) method is a sophisticated approach for the prediction of nonbonded and intermolecular interactions. Therefore, the EFP method provides a way to further reduce the computational effort while retaining accuracy by treating the far-field interactions in place of the full electronic structure method. The performance of the methods is demonstrated using applications to several systems, including benzene dimer, small organic species, pieces of the alpha helix, water, and ionic liquids.
Accurate, meshless methods for magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.; Raives, Matthias J.
2016-01-01
Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇·B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.
A gene expression biomarker accurately predicts estrogen ...
The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1 screening tests. The ToxCast program currently includes 18 HTS in vitro assays that evaluate the ability of chemicals to modulate estrogen receptor α (ERα), an important endocrine target. We propose microarray-based gene expression profiling as a complementary approach to predict ERα modulation and have developed computational methods to identify ERα modulators in an existing database of whole-genome microarray data. The ERα biomarker consisted of 46 ERα-regulated genes with consistent expression patterns across 7 known ER agonists and 3 known ER antagonists. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression data sets from experiments in MCF-7 cells. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% or 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) OECD ER reference chemicals including “very weak” agonists and replicated predictions based on 18 in vitro ER-associated HTS assays. For 114 chemicals present in both the HTS data and the MCF-7 c
Accurate method for computing correlated color temperature.
Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier
2016-06-27
For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy. For example, different unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimation of CCT using Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives highly accurate predictions, with errors below 0.0012 K, for light sources with CCTs ranging from 500 K to 10^6 K.
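The Newton iteration described in the abstract can be sketched generically: minimize an objective f(T) by solving f'(T) = 0 with steps T -= f'(T)/f''(T). The real objective is the CIE chromaticity distance built from 1 nm tristimulus summations over 360-830 nm; the quartic bowl below is a made-up stand-in so the iteration itself can be run, and T_TRUE is an assumed value, not a CIE quantity.

```python
def newton_minimize(f1, f2, t0, tol=1e-9, max_iter=50):
    """Newton's method on f'(t) = 0; f1, f2 are the first/second derivatives."""
    t = t0
    for _ in range(max_iter):
        step = f1(t) / f2(t)
        t -= step
        if abs(step) < tol:
            break
    return t

T_TRUE = 6504.0  # assumed "correct" CCT for the toy objective

def d1(T):  # derivative of (T - T_TRUE)^2 + 1e-9 * (T - T_TRUE)^4
    return 2.0 * (T - T_TRUE) + 4e-9 * (T - T_TRUE) ** 3

def d2(T):  # second derivative of the same toy objective
    return 2.0 + 12e-9 * (T - T_TRUE) ** 2

cct = newton_minimize(d1, d2, t0=5000.0)  # Robertson-style initial guess
print(round(cct, 4))
```

With analytic first and second derivatives available, as in the paper, each step is cheap and convergence near the minimum is quadratic, which is why a good Robertson-style starting point suffices.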
On the Accurate Prediction of CME Arrival At the Earth
NASA Astrophysics Data System (ADS)
Zhang, Jie; Hess, Phillip
2016-07-01
We will discuss relevant issues regarding the accurate prediction of CME arrival at the Earth, from both observational and theoretical points of view. In particular, we clarify the importance of separating the study of CME ejecta from the ejecta-driven shock in interplanetary CMEs (ICMEs). For a number of CME-ICME events well observed by SOHO/LASCO, STEREO-A and STEREO-B, we carry out the 3-D measurements by superimposing geometries onto both the ejecta and sheath separately. These measurements are then used to constrain a Drag-Based Model, which is improved through a modification of including height dependence of the drag coefficient into the model. Combining all these factors allows us to create predictions for both fronts at 1 AU and compare with actual in-situ observations. We show an ability to predict the sheath arrival with an average error of under 4 hours, with an RMS error of about 1.5 hours. For the CME ejecta, the error is less than two hours with an RMS error within an hour. Through using the best observations of CMEs, we show the power of our method in accurately predicting CME arrival times. The limitation and implications of our accurate prediction method will be discussed.
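The drag-based model (DBM) mentioned in the abstract relaxes the CME speed v toward the ambient solar-wind speed w via dv/dt = -Γ(r)(v - w)|v - w|. The abstract's exact height-dependent drag coefficient is not given, so the form Γ(r) = Γ0·(r0/r) and every number below are illustrative assumptions, not the authors' fit:

```python
AU_KM = 1.496e8           # 1 AU in km
R0_KM = 20.0 * 6.96e5     # start at 20 solar radii (assumed)

def dbm_arrival(v0=1000.0, w=400.0, gamma0=2e-7, dt=60.0):
    """Euler-integrate the DBM from R0_KM to 1 AU.

    Returns (transit time in days, arrival speed in km/s); gamma0 is in 1/km.
    """
    r, v, t = R0_KM, v0, 0.0
    while r < AU_KM:
        gamma = gamma0 * (R0_KM / r)          # hypothetical height dependence
        v += -gamma * (v - w) * abs(v - w) * dt  # drag toward solar-wind speed
        r += v * dt
        t += dt
    return t / 86400.0, v

days, v_arr = dbm_arrival()
print(f"transit ~{days:.1f} d, arrival speed ~{v_arr:.0f} km/s")
```

Constraining v0 and Γ separately for the sheath front and the ejecta front, as the abstract describes, amounts to running this integration twice with different inputs.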
You Can Accurately Predict Land Acquisition Costs.
ERIC Educational Resources Information Center
Garrigan, Richard
1967-01-01
Land acquisition costs were tested for predictability based upon the 1962 assessed valuations of privately held land acquired for campus expansion by the University of Wisconsin from 1963-1965. By correlating the land acquisition costs of 108 properties acquired during the 3-year period with (1) the assessed value of the land, (2) the assessed…
Towards more accurate vegetation mortality predictions
Sevanto, Sanna Annika; Xu, Chonggang
2016-09-26
Predicting the fate of vegetation under changing climate is one of the major challenges of the climate modeling community. Terrestrial vegetation dominates the carbon and water cycles over land areas, and dramatic changes in vegetation cover resulting from stressful environmental conditions such as drought feed directly back to local and regional climate, potentially leading to a vicious cycle in which vegetation recovery after a disturbance is delayed or impossible.
Can Selforganizing Maps Accurately Predict Photometric Redshifts?
NASA Technical Reports Server (NTRS)
Way, Michael J.; Klose, Christian
2012-01-01
We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization technique called the self-organizing map (SOM). A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = z_phot - z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
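The SOM regression idea above can be sketched in miniature: quantize photometric inputs onto a small 1-D map, then attach the mean spectroscopic redshift of the training objects captured by each cell. The data, map size, and learning schedule below are illustrative assumptions, not the paper's setup.

```python
import math
import random

random.seed(0)

def best_cell(W, x):
    """Index of the best-matching unit (smallest squared distance)."""
    return min(range(len(W)),
               key=lambda i: sum((W[i][d] - x[d]) ** 2 for d in range(len(x))))

def train_som(X, n_cells=8, epochs=200, lr0=0.5, sigma0=2.0):
    """Train a 1-D self-organizing map on vectors X; return cell weights."""
    dim = len(X[0])
    W = [[random.random() for _ in range(dim)] for _ in range(n_cells)]
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                   # decaying learning rate
        sigma = max(0.5, sigma0 * (1.0 - e / epochs))   # shrinking neighbourhood
        for x in X:
            b = best_cell(W, x)
            for i in range(n_cells):
                h = math.exp(-((i - b) ** 2) / (2.0 * sigma ** 2))
                for d in range(dim):
                    W[i][d] += lr * h * (x[d] - W[i][d])
    return W

# Synthetic "photometry": first component tracks redshift, second is filler.
zs = [i / 19.0 for i in range(20)]                      # pretend spectroscopic z
X = [[z + 0.05 * random.random(), 1.0 - z] for z in zs]

W = train_som(X)
cell_z = {}
for x, z in zip(X, zs):                 # calibrate each cell with the mean z
    cell_z.setdefault(best_cell(W, x), []).append(z)
cal = {c: sum(v) / len(v) for c, v in cell_z.items()}

# RMSE of delta(z) = z_phot - z_spec on the training set (illustrative only).
preds = [cal[best_cell(W, x)] for x in X]
rmse = (sum((p - z) ** 2 for p, z in zip(preds, zs)) / len(zs)) ** 0.5
print(round(rmse, 3))
```

The quantization step is what makes the method unsupervised; the redshift labels enter only in the final per-cell calibration, mirroring how the abstract separates map training from redshift estimation.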
Bertels, Luke W.; Mazziotti, David A.
2014-07-28
Multireference correlation in diradical molecules can be captured by a single-reference 2-electron reduced-density-matrix (2-RDM) calculation with only single and double excitations in the 2-RDM parametrization. The 2-RDM parametrization is determined by N-representability conditions that are non-perturbative in their treatment of the electron correlation. Conventional single-reference wave function methods cannot describe the entanglement within diradical molecules without employing triple- and potentially even higher-order excitations of the mean-field determinant. In the isomerization of bicyclobutane to gauche-1,3-butadiene the parametric 2-RDM (p2-RDM) method predicts that the diradical disrotatory transition state is 58.9 kcal/mol above bicyclobutane. This barrier is in agreement with previous multireference calculations as well as recent Monte Carlo and higher-order coupled cluster calculations. The p2-RDM method predicts the Nth natural-orbital occupation number of the transition state to be 0.635, revealing its diradical character. The optimized geometry from the p2-RDM method differs in important details from the complete-active-space self-consistent-field geometry used in many previous studies including the Monte Carlo calculation.
Accurate torque-speed performance prediction for brushless dc motors
NASA Astrophysics Data System (ADS)
Gipper, Patrick D.
Desirable characteristics of the brushless dc motor (BLDCM) have resulted in its application in electrohydrostatic (EH) and electromechanical (EM) actuation systems. Effective application of the BLDCM, however, requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current, and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral horsepower motor sizes. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
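The LKB calculation the abstract relies on reduces a dose-volume histogram to an effective uniform dose via the volume parameter n, then maps it through a probit curve. A hedged sketch, with made-up DVH bins and illustrative parameter values rather than the Emami or Schultheiss fits:

```python
import math

def lkb_ntcp(dvh, td50, m, n):
    """LKB NTCP for dvh = list of (fractional_volume, dose_Gy) bins.

    d_eff is the generalized equivalent uniform dose with volume parameter n;
    the probability is the cumulative normal of (d_eff - TD50) / (m * TD50).
    """
    d_eff = sum(v * d ** (1.0 / n) for v, d in dvh) ** n
    t = (d_eff - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # normal CDF via erf

# Example DVH: a small cord subvolume at high dose, the rest at low dose.
dvh = [(0.02, 20.0), (0.98, 2.0)]
p_serial = lkb_ntcp(dvh, td50=66.5, m=0.175, n=0.05)   # near-serial organ
p_parallel = lkb_ntcp(dvh, td50=66.5, m=0.175, n=0.5)  # more parallel organ
print(p_serial > p_parallel)  # True: small n is more sensitive to hot spots
```

This also shows why the abstract's maximum-likelihood fit pushed n upward: a larger n (more parallel organization) downweights small high-dose subvolumes, lowering the predicted complication rate toward the observed one.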
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
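The median function mentioned above gives a compact, branch-light way to write monotonicity constraints: a reconstructed interface value is kept only if it lies between the adjacent cell averages, i.e. it is replaced by the median of the three. A minimal sketch of that one ingredient (illustrative, not the paper's full scheme):

```python
def minmod(a, b):
    """Zero if the signs differ, else the argument of smaller magnitude."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def median(a, b, c):
    # Standard identity: median(a, b, c) = a + minmod(b - a, c - a),
    # which avoids explicit sorting and conditionals.
    return a + minmod(b - a, c - a)

def constrain_interface(u_left_avg, u_unlimited, u_right_avg):
    """Clip a reconstructed interface value between the two cell averages."""
    return median(u_unlimited, u_left_avg, u_right_avg)

print(constrain_interface(1.0, 1.4, 2.0))  # in range: kept as 1.4
print(constrain_interface(1.0, 2.6, 2.0))  # overshoot: clipped to 2.0
```

The minmod identity is what the abstract means by the median function simplifying the "concept and coding" of the constraint: the limiter becomes one expression with no case analysis.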
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
Practical aspects of spatially high accurate methods
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.
1992-01-01
The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin
2015-01-01
Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale. PMID:26198229
Mapping methods for computationally efficient and accurate structural reliability
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1992-01-01
Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of: (1) deterministic structural analyses with fine (convergent) finite element meshes, (2) probabilistic structural analyses with coarse finite element meshes, (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes, and (4) a probabilistic mapping. The results show that the scatter of the probabilistic structural responses and structural reliability can be accurately predicted using a coarse finite element model with proper mapping methods. Therefore, large structures can be analyzed probabilistically using finite element methods.
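The mapping idea above can be illustrated with a stand-in "solver": run many cheap probabilistic analyses on a coarse mesh, then map them using the ratio of a single deterministic fine-mesh response to the coarse one. The closed-form responses and all numbers below are illustrative assumptions; a real use would call a finite element solver at both mesh resolutions.

```python
import random
import statistics

random.seed(1)

def coarse_response(load, stiffness):
    # Coarse mesh: cheap but carries an assumed 7% discretization error.
    return load / stiffness * 1.07

def fine_response(load, stiffness):
    # Convergent fine mesh: expensive, treated as exact here.
    return load / stiffness

# One deterministic run on each mesh at nominal inputs -> mapping factor.
factor = fine_response(100.0, 10.0) / coarse_response(100.0, 10.0)

# Probabilistic study on the coarse mesh only (cheap), then map the samples.
samples = []
for _ in range(5000):
    load = random.gauss(100.0, 10.0)
    stiff = random.gauss(10.0, 0.5)
    samples.append(factor * coarse_response(load, stiff))

print(round(statistics.mean(samples), 2))  # scatter now centred on fine-mesh value
```

The payoff matches the abstract: only one fine-mesh analysis is needed, while the thousands of samples that characterize the response scatter all run on the coarse mesh.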
Passive samplers accurately predict PAH levels in resident crayfish.
Paulik, L Blair; Smith, Brian W; Bergmann, Alan J; Sower, Greg J; Forsberg, Norman D; Teeguarden, Justin G; Anderson, Kim A
2016-02-15
Contamination of resident aquatic organisms is a major concern for environmental risk assessors. However, collecting organisms to estimate risk is often prohibitively time and resource-intensive. Passive sampling accurately estimates resident organism contamination, and it saves time and resources. This study used low density polyethylene (LDPE) passive water samplers to predict polycyclic aromatic hydrocarbon (PAH) levels in signal crayfish, Pacifastacus leniusculus. Resident crayfish were collected at 5 sites within and outside of the Portland Harbor Superfund Megasite (PHSM) in the Willamette River in Portland, Oregon. LDPE deployment was spatially and temporally paired with crayfish collection. Crayfish visceral and tail tissue, as well as water-deployed LDPE, were extracted and analyzed for 62 PAHs using GC-MS/MS. Freely-dissolved concentrations (Cfree) of PAHs in water were calculated from concentrations in LDPE. Carcinogenic risks were estimated for all crayfish tissues, using benzo[a]pyrene equivalent concentrations (BaPeq). ∑PAH were 5-20 times higher in viscera than in tails, and ∑BaPeq were 6-70 times higher in viscera than in tails. Eating only tail tissue of crayfish would therefore significantly reduce carcinogenic risk compared to also eating viscera. Additionally, PAH levels in crayfish were compared to levels in crayfish collected 10 years earlier. PAH levels in crayfish were higher upriver of the PHSM and unchanged within the PHSM after the 10-year period. Finally, a linear regression model predicted levels of 34 PAHs in crayfish viscera with an associated R-squared value of 0.52 (and a correlation coefficient of 0.72), using only the Cfree PAHs in water. On average, the model predicted PAH concentrations in crayfish tissue within a factor of 2.4 ± 1.8 of measured concentrations. This affirms that passive water sampling accurately estimates PAH contamination in crayfish. Furthermore, the strong predictive ability of this simple model suggests
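The evaluation idea in the abstract, regress tissue PAH levels on freely-dissolved water concentrations (Cfree) and score each prediction as a fold-difference from the measurement, can be sketched on synthetic stand-in data (the study's own measurements are not reproduced here):

```python
import random

random.seed(2)

# Hypothetical paired samples: tissue roughly proportional to Cfree, with
# multiplicative scatter standing in for biological variability.
cfree = [random.uniform(0.5, 50.0) for _ in range(30)]
tissue = [3.0 * c * random.uniform(0.6, 1.7) for c in cfree]

# Least-squares fit through the origin: tissue ~ a * Cfree.
a = sum(x * y for x, y in zip(cfree, tissue)) / sum(x * x for x in cfree)

# Fold-difference (always >= 1) between each prediction and measurement,
# analogous to the paper's "within a factor of 2.4 +/- 1.8" summary.
folds = [max(a * x, y) / min(a * x, y) for x, y in zip(cfree, tissue)]
mean_fold = sum(folds) / len(folds)
print(round(mean_fold, 2))
```

Reporting error as a fold-difference rather than an absolute residual is the natural choice here, since PAH concentrations span orders of magnitude across sites.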
Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations
NASA Astrophysics Data System (ADS)
Bowman, J.; Jensen, S.; McDonald, Mark
2010-10-01
High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates
Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.
2013-03-07
In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
Prediction of Preoperative Anxiety in Children: Who is Most Accurate?
MacLaren, Jill E.; Thompson, Caitlin; Weinberg, Megan; Fortier, Michelle A.; Morrison, Debra E.; Perret, Danielle; Kain, Zeev N.
2009-01-01
Background In this investigation, we sought to assess the ability of pediatric attending anesthesiologists, resident anesthesiologists and mothers to predict anxiety during induction of anesthesia in 2- to 16-year-old children (n=125). Methods Anesthesiologists and mothers provided predictions using a visual analog scale, and children's anxiety was assessed using a validated behavioral observation tool, the modified Yale Preoperative Anxiety Scale (mYPAS). All mothers were present during anesthetic induction and no child received sedative premedication. Correlational analyses were conducted. Results A total of 125 children aged 2 to 16 years, their mothers, and their attending pediatric anesthesiologists and resident anesthesiologists were studied. Correlational analyses revealed significant associations between attending predictions and child anxiety at induction (rs = 0.38, p < 0.001). Resident anesthesiologist and mother predictions were not significantly related to children's anxiety during induction (rs = 0.01 and 0.001, respectively). In terms of accuracy of prediction, 47.2% of predictions made by attending anesthesiologists were within one standard deviation of the observed anxiety exhibited by the child, and 70.4% of predictions were within 2 standard deviations. Conclusions We conclude that attending anesthesiologists who practice in pediatric settings are better than mothers at predicting the anxiety of children during induction of anesthesia. While this finding has significant clinical implications, it is unclear if it can be extended to attending anesthesiologists whose practice is not mostly pediatric anesthesia. PMID:19448201
Second Order Accurate Finite Difference Methods
1984-08-20
a study of the idealized material has direct applications to some polymer structures (4, 5). Wave propagation studies in hyperelastic materials have... "Acceleration Wave Propagation in Hyperelastic Rods of Variable Cross-section," Wave Motion, V4, pp. 173-180, 1982. 9. M. Hirao and N. Sugimoto... "Waves in Hyperelastic Rods," Quart. Appl. Math., V37, pp. 377-399, 1979. 11. G. A. Sod, "A Survey of Several Finite Difference Methods for Systems of
Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng
Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model and the Menter SST model. For the k-ω model and SST model, the compressibility correction, pressure dilatation and low Reynolds number correction were considered. The influence of these corrections on flow properties was discussed by comparison with results obtained without corrections. The emphasis is on the assessment and evaluation of the turbulence models in the prediction of heat transfer as applied to a range of hypersonic flows, with comparison to experimental data. This will enable establishing a factor of safety for the design of thermal protection systems of hypersonic vehicles.
Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics
Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.
2015-01-01
Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results
Fast and accurate predictions of covalent bonds in chemical space
NASA Astrophysics Data System (ADS)
Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
Fast and accurate predictions of covalent bonds in chemical space.
Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole
2016-05-07
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
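The first-order estimate described in the two abstracts above can be written compactly. In our notation (not necessarily the paper's): with a vertical alchemical interpolation H(λ) = (1−λ)H_A + λH_B at fixed geometry,

```latex
E_B \;\approx\; E_A \;+\; \left.\frac{\partial E}{\partial \lambda}\right|_{\lambda=0},
\qquad
\frac{\partial E}{\partial \lambda}
  \;=\; \bigl\langle \Psi_\lambda \bigl|\, H_B - H_A \,\bigr|\, \Psi_\lambda \bigr\rangle ,
```

where the derivative follows from the Hellmann-Feynman theorem. Evaluated at λ = 0 it needs only the reference wavefunction Ψ_0, which is why the prediction is essentially free once the reference molecule A has been computed.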
Standardized EEG interpretation accurately predicts prognosis after cardiac arrest
Rossetti, Andrea O.; van Rootselaar, Anne-Fleur; Wesenberg Kjaer, Troels; Horn, Janneke; Ullén, Susann; Friberg, Hans; Nielsen, Niklas; Rosén, Ingmar; Åneman, Anders; Erlinge, David; Gasche, Yvan; Hassager, Christian; Hovdenes, Jan; Kjaergaard, Jesper; Kuiper, Michael; Pellis, Tommaso; Stammet, Pascal; Wanscher, Michael; Wetterslev, Jørn; Wise, Matt P.; Cronberg, Tobias
2016-01-01
Objective: To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. Methods: In this cohort study, 4 EEG specialists, blinded to outcome, evaluated prospectively recorded EEGs in the Target Temperature Management trial (TTM trial) that randomized patients to 33°C vs 36°C. Routine EEG was performed in patients still comatose after rewarming. EEGs were classified into highly malignant (suppression, suppression with periodic discharges, burst-suppression), malignant (periodic or rhythmic patterns, pathological or nonreactive background), and benign EEG (absence of malignant features). Poor outcome was defined as best Cerebral Performance Category score 3–5 until 180 days. Results: Eight TTM sites randomized 202 patients. EEGs were recorded in 103 patients at a median 77 hours after cardiac arrest; 37% had a highly malignant EEG and all had a poor outcome (specificity 100%, sensitivity 50%). Any malignant EEG feature had a low specificity to predict poor prognosis (48%) but if 2 malignant EEG features were present specificity increased to 96% (p < 0.001). Specificity and sensitivity were not significantly affected by targeted temperature or sedation. A benign EEG was found in 1% of the patients with a poor outcome. Conclusions: Highly malignant EEG after rewarming reliably predicted poor outcome in half of patients without false predictions. An isolated finding of a single malignant feature did not predict poor outcome whereas a benign EEG was highly predictive of a good outcome. PMID:26865516
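The headline sensitivity and specificity can be reproduced from a plausible reconstruction of the confusion matrix. The counts below are our back-calculation from the percentages in the abstract, not published raw data: roughly 38 of 103 patients had a highly malignant EEG, all with poor outcome, out of about 76 poor outcomes total, with no highly malignant EEGs among the remaining 27 patients.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# back-calculated (approximate) counts for "highly malignant EEG => poor outcome"
sens, spec = sensitivity_specificity(tp=38, fn=38, tn=27, fp=0)
# matches the abstract: sensitivity 50%, specificity 100%
```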
Improved nonlinear prediction method
NASA Astrophysics Data System (ADS)
Adenan, Nur Hamiza; Md Noorani, Mohd Salmi
2014-06-01
The analysis and prediction of time series data have been widely studied, and many techniques have been developed for areas such as weather forecasting, financial markets and hydrological phenomena, where the data are contaminated by noise. Given the importance of analysis and prediction accuracy, a study was undertaken to test the effectiveness of an improved nonlinear prediction method on noisy data. The improved method forms a composite data series from the successive differences of the time series; phase space reconstruction is then performed on this one-dimensional composite series to build a higher-dimensional embedding; finally, the local linear approximation method is employed to make predictions in the reconstructed phase space. The improved method was tested on logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that the predictions are in close agreement with the observed values, with a correlation coefficient close to one for data containing up to 10% noise. The method thus allows noisy time series to be predicted without a separate noise reduction step.
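The pipeline in this abstract can be sketched on the noise-free logistic map. For brevity the local model below is a nearest-neighbour (zeroth-order) forecast rather than the paper's local linear approximation; the differencing and phase-space embedding steps are the same.

```python
def logistic_series(n, x0=0.3, r=4.0):
    """Generate n points of the logistic map x_{k+1} = r*x_k*(1 - x_k)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def predict_next(series, m=3):
    """Forecast the next value via the composite (differenced) series.

    1. Composite series: successive differences of the time series.
    2. Embedding: the last m differences form the current phase point.
    3. Local model: find the closest past phase point and reuse its
       successor difference (nearest-neighbour stand-in for local linear).
    """
    diffs = [b - a for a, b in zip(series, series[1:])]  # composite series
    target = diffs[-m:]                                  # current phase point
    best, best_dist = 0, float("inf")
    for i in range(len(diffs) - m):                      # past phase points
        d = sum((diffs[i + k] - target[k]) ** 2 for k in range(m))
        if d < best_dist:
            best_dist, best = d, i
    next_diff = diffs[best + m]                          # neighbour's successor
    return series[-1] + next_diff
```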
Accurate Prediction of Ligand Affinities for a Proton-Dependent Oligopeptide Transporter
Samsudin, Firdaus; Parker, Joanne L.; Sansom, Mark S.P.; Newstead, Simon; Fowler, Philip W.
2016-01-01
Summary Membrane transporters are critical modulators of drug pharmacokinetics, efficacy, and safety. One example is the proton-dependent oligopeptide transporter PepT1, also known as SLC15A1, which is responsible for the uptake of the β-lactam antibiotics and various peptide-based prodrugs. In this study, we modeled the binding of various peptides to a bacterial homolog, PepTSt, and evaluated a range of computational methods for predicting the free energy of binding. Our results show that a hybrid approach (endpoint methods to classify peptides into good and poor binders and a theoretically exact method for refinement) is able to accurately predict affinities, which we validated using proteoliposome transport assays. Applying the method to a homology model of PepT1 suggests that the approach requires a high-quality structure to be accurate. Our study provides a blueprint for extending these computational methodologies to other pharmaceutically important transporter families. PMID:27028887
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
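The DEB idea lends itself to a one-variable illustration (our example, chosen to match the beam quantities the paper perturbs, not the paper's own test case): a cantilever's tip displacement scales as δ ∝ h⁻³ with section height h, so the sensitivity relation dδ/dh = −3δ/h can be integrated in closed form, whereas a linear Taylor step cannot follow the curvature.

```python
def taylor_displacement(delta0, h0, h):
    """Linear Taylor step: delta(h) ~ delta0 + (d delta/dh)|h0 * (h - h0)."""
    return delta0 * (1.0 - 3.0 * (h - h0) / h0)

def deb_displacement(delta0, h0, h):
    """DEB-style step: integrate the sensitivity ODE d(delta)/dh = -3*delta/h,
    giving the closed form delta0 * (h0/h)**3 (exact for this scaling law)."""
    return delta0 * (h0 / h) ** 3
```

For a 50% height increase the Taylor estimate even goes negative (nonphysical), while the integrated form remains exact for this h⁻³ law, which is the qualitative advantage the abstract reports over linear Taylor series approximations.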
Change in BMI Accurately Predicted by Social Exposure to Acquaintances
Oloritun, Rahman O.; Ouarda, Taha B. M. J.; Moturu, Sai; Madan, Anmol; Pentland, Alex (Sandy); Khayal, Inas
2013-01-01
Research has mostly focused on obesity rather than on processes of BMI change more generally, although these may be key factors leading to obesity. Studies have suggested that obesity is affected by social ties, but these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, combining automatically captured mobile phone interaction data with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R2. This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, with personal health-related information to explain change in BMI. This is the first study taking into account both interactions at different levels of social closeness and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance not only of individual health information but also of social interactions with the people we are exposed to, even people we may not consider close friends. PMID
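The LASSO selection step can be sketched with a plain coordinate-descent solver. This is a generic textbook implementation, not the study's actual fitting code, and the data below are synthetic: the outcome depends on the first predictor only, so the L1 penalty shrinks the second coefficient to exactly zero.

```python
def soft_threshold(rho, lam):
    """Soft-thresholding operator, the source of LASSO's exact zeros."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: minimise 0.5*||y - Xb||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # residual with feature j's contribution removed
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            b[j] = soft_threshold(rho, lam) / z
    return b

# toy data: y = 2 * x1, with an irrelevant second predictor
X = [[1.0, 1.0], [2.0, -1.0], [3.0, 1.0], [4.0, -1.0]]
y = [2.0, 4.0, 6.0, 8.0]
coefs = lasso_cd(X, y, lam=0.1)  # second coefficient is dropped to 0.0
```

In the study, the surviving coefficients then feed an ordinary multiple linear regression, with AIC and R2 used to pick among candidate models.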
A Simple and Accurate Method for Measuring Enzyme Activity.
ERIC Educational Resources Information Center
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
Is Three-Dimensional Soft Tissue Prediction by Software Accurate?
Nam, Ki-Uk; Hong, Jongrak
2015-11-01
The authors assessed whether virtual surgery, performed with a soft tissue prediction program, could correctly simulate the actual surgical outcome, focusing on soft tissue movement. Preoperative and postoperative computed tomography (CT) data for 29 patients, who had undergone orthognathic surgery, were obtained and analyzed using the Simplant Pro software. The program made a predicted soft tissue image (A) based on presurgical CT data. After the operation, we obtained actual postoperative CT data and generated an actual soft tissue image (B). Finally, the 2 images were superimposed and the differences between A and B were analyzed. Results were grouped into 2 classes: absolute values and vector values. In the absolute values, the left mouth corner was the most significant error point (2.36 mm). The right mouth corner (2.28 mm), labrale inferius (2.08 mm), and the pogonion (2.03 mm) also had significant errors. In vector values, prediction of the right-left side had a left-sided tendency, the superior-inferior had a superior tendency, and the anterior-posterior showed an anterior tendency. As a result, with this program, the positions of points tended to be located more to the left, anterior, and superior than in the "real" situation. There is a need to improve the prediction accuracy for soft tissue images. Such software is particularly valuable in predicting craniofacial soft tissue landmarks, such as the pronasale. With this software, landmark positions were most inaccurate in terms of anterior-posterior predictions.
Fast and accurate automatic structure prediction with HHpred.
Hildebrand, Andrea; Remmert, Michael; Biegert, Andreas; Söding, Johannes
2009-01-01
Automated protein structure prediction is becoming a mainstream tool for biological research. This has been fueled by steady improvements of publicly available automated servers over the last decade, in particular their ability to build good homology models for an increasing number of targets by reliably detecting and aligning more and more remotely homologous templates. Here, we describe the three fully automated versions of the HHpred server that participated in the community-wide blind protein structure prediction competition CASP8. What makes HHpred unique is the combination of usability, short response times (typically under 15 min) and a model accuracy that is competitive with those of the best servers in CASP8.
Accurate perception of negative emotions predicts functional capacity in schizophrenia.
Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J
2014-04-30
Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions was not related to social competence or social attainment. Our findings are consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration.
Towards Accurate Ab Initio Predictions of the Spectrum of Methane
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Kwak, Dochan (Technical Monitor)
2001-01-01
We have carried out extensive ab initio calculations of the electronic structure of methane, and these results are used to compute vibrational energy levels. We include basis set extrapolations, core-valence correlation, relativistic effects, and Born-Oppenheimer breakdown terms in our calculations. Our ab initio predictions of the lowest lying levels are superb.
Accurate Theoretical Prediction of the Properties of Energetic Materials
2007-11-02
calculations (e.g., Cheetah). 8. Sensitivity. The structure prediction and lattice potential work will serve as a platform to examine impact/shock... nitromethane molecules. (In an extension of the present work, we will freeze the internal coordinates of the molecules and assess the extent to which the
Learning regulatory programs that accurately predict differential expression with MEDUSA.
Kundaje, Anshul; Lianoglou, Steve; Li, Xuejing; Quigley, David; Arias, Marta; Wiggins, Chris H; Zhang, Li; Leslie, Christina
2007-12-01
Inferring gene regulatory networks from high-throughput genomic data is one of the central problems in computational biology. In this paper, we describe a predictive modeling approach for studying regulatory networks, based on a machine learning algorithm called MEDUSA. MEDUSA integrates promoter sequence, mRNA expression, and transcription factor occupancy data to learn gene regulatory programs that predict the differential expression of target genes. Instead of using clustering or correlation of expression profiles to infer regulatory relationships, MEDUSA determines condition-specific regulators and discovers regulatory motifs that mediate the regulation of target genes. In this way, MEDUSA meaningfully models biological mechanisms of transcriptional regulation. MEDUSA solves the problem of predicting the differential (up/down) expression of target genes by using boosting, a technique from statistical learning, which helps to avoid overfitting as the algorithm searches through the high-dimensional space of potential regulators and sequence motifs. Experimental results demonstrate that MEDUSA achieves high prediction accuracy on held-out experiments (test data), that is, data not seen in training. We also present context-specific analysis of MEDUSA regulatory programs for DNA damage and hypoxia, demonstrating that MEDUSA identifies key regulators and motifs in these processes. A central challenge in the field is the difficulty of validating reverse-engineered networks in the absence of a gold standard. Our approach of learning regulatory programs provides at least a partial solution for the problem: MEDUSA's prediction accuracy on held-out data gives a concrete and statistically sound way to validate how well the algorithm performs. With MEDUSA, statistical validation becomes a prerequisite for hypothesis generation and network building rather than a secondary consideration.
Kieslich, Chris A; Tamamis, Phanourios; Guzman, Yannis A; Onel, Melis; Floudas, Christodoulos A
2016-01-01
HIV-1 entry into host cells is mediated by interactions between the V3-loop of viral glycoprotein gp120 and chemokine receptor CCR5 or CXCR4, collectively known as HIV-1 coreceptors. Accurate genotypic prediction of coreceptor usage is of significant clinical interest and determination of the factors driving tropism has been the focus of extensive study. We have developed a method based on nonlinear support vector machines to elucidate the interacting residue pairs driving coreceptor usage and provide highly accurate coreceptor usage predictions. Our models utilize centroid-centroid interaction energies from computationally derived structures of the V3-loop:coreceptor complexes as primary features, while additional features based on established rules regarding V3-loop sequences are also investigated. We tested our method on 2455 V3-loop sequences of various lengths and subtypes, and produce a median area under the receiver operator curve of 0.977 based on 500 runs of 10-fold cross validation. Our study is the first to elucidate a small set of specific interacting residue pairs between the V3-loop and coreceptors capable of predicting coreceptor usage with high accuracy across major HIV-1 subtypes. The developed method has been implemented as a web tool named CRUSH, CoReceptor USage prediction for HIV-1, which is available at http://ares.tamu.edu/CRUSH/.
How Accurately Can We Predict Eclipses for Algol? (Poster abstract)
NASA Astrophysics Data System (ADS)
Turner, D.
2016-06-01
(Abstract only) beta Persei, or Algol, is a very well known eclipsing binary system consisting of a late B-type dwarf that is regularly eclipsed by a GK subgiant every 2.867 days. Eclipses, which last about 8 hours, are regular enough that predictions for times of minima are published in various places, Sky & Telescope magazine and The Observer's Handbook, for example. But eclipse minimum lasts for less than a half hour, whereas subtle mistakes in the current ephemeris for the star can result in predictions that are off by a few hours or more. The Algol system is fairly complex, with the Algol A and Algol B eclipsing system also orbited by Algol C with an orbital period of nearly 2 years. Added to that are complex long-term O-C variations with a periodicity of almost two centuries that, although suggested by Hoffmeister to be spurious, fit the type of light travel time variations expected for a fourth star also belonging to the system. The AB sub-system also undergoes mass transfer events that add complexities to its O-C behavior. Is it actually possible to predict precise times of eclipse minima for Algol months in advance given such complications, or is it better to encourage ongoing observations of the star so that O-C variations can be tracked in real time?
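The sensitivity to ephemeris error described in this abstract is simple arithmetic. With a linear ephemeris t_min = T0 + E·P (cycle count E), a period error accumulates linearly with E; the period below is rounded to the value quoted above, and real ephemerides carry more digits.

```python
def predicted_minimum(t0, period, cycle):
    """Linear ephemeris: predicted time of minimum for integer cycle count E."""
    return t0 + cycle * period

period = 2.867                      # days, as quoted above (rounded)
cycles_per_year = 365.25 / period   # ~127 eclipses per year
period_error = 1e-4                 # days; a "subtle mistake" in P
drift_hours_per_year = cycles_per_year * period_error * 24.0
```

Even a 1e-4 d period error drifts by roughly 0.3 h per year, so after a decade the prediction is off by about 3 hours; against an eclipse minimum lasting under half an hour, that is exactly the "off by a few hours" concern raised above, before the O-C complications from Algol C, a possible fourth body, and mass transfer are even considered.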
Predictive rendering for accurate material perception: modeling and rendering fabrics
NASA Astrophysics Data System (ADS)
Bala, Kavita
2012-03-01
In computer graphics, rendering algorithms are used to simulate the appearance of objects and materials in a wide range of applications. Designers and manufacturers rely entirely on these rendered images to previsualize scenes and products before manufacturing them. They need to differentiate between different types of fabrics, paint finishes, plastics, and metals, often with subtle differences, for example, between silk and nylon, or formica and wood. Thus, these applications need predictive algorithms that can produce high-fidelity images that enable such subtle material discrimination.
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle
NASA Astrophysics Data System (ADS)
Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.
2017-04-01
Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the end of tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using design of experiments technique. Regression method is used for training the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C), when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as tundish and caster.
Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?
NASA Astrophysics Data System (ADS)
Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim
2014-11-01
Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical and experimental results is often hampered for several reasons. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g., determining the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our on-going, extensive study on hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations, low-dimensional modelling as well as experiments. The major focus is on wave regimes, wave height and wave celerity as a function of Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences between numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).
Sengupta, Arkajyoti; Raghavachari, Krishnan
2014-10-14
Accurate modeling of the chemical reactions in many diverse areas such as combustion, photochemistry, or atmospheric chemistry strongly depends on the availability of thermochemical information for the radicals involved. However, accurate thermochemical investigations of radical systems using state-of-the-art composite methods have mostly been restricted to the study of hydrocarbon radicals of modest size. In an alternative approach, a systematic error-canceling thermochemical hierarchy of reaction schemes can be applied to yield accurate results for such systems. In this work, we have extended our connectivity-based hierarchy (CBH) method to the investigation of radical systems. We have calibrated our method using a test set of 30 medium-sized radicals to evaluate their heats of formation. The CBH-rad30 test set contains radicals with diverse functional groups as well as cyclic systems. We demonstrate that the sophisticated error-canceling isoatomic scheme (CBH-2) with modest levels of theory is adequate to provide heats of formation accurate to ∼1.5 kcal/mol. Finally, we predict heats of formation of 19 other large and medium-sized radicals for which the accuracy of the available heats of formation is less well known.
Accurate upwind-monotone (nonoscillatory) methods for conservation laws
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1992-01-01
The well-known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second-order accurate in smooth parts of the solution, except at extrema, where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second- or third-order accuracy, are then presented. Results for advection with constant speed are shown. It is also shown that the new schemes compare favorably with state-of-the-art methods.
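For context, a generic minmod-limited MUSCL step for constant-speed advection (the baseline scheme the abstract improves on, not the paper's upwind-monotone schemes) can be sketched as follows; the grid size, CFL number, and periodic boundaries are illustrative choices:

```python
# One MUSCL update for u_t + a u_x = 0 with a > 0, using a minmod limiter.
def minmod(a, b):
    if a * b <= 0.0:
        return 0.0
    return min(abs(a), abs(b)) * (1.0 if a > 0 else -1.0)

def muscl_step(u, cfl):
    n = len(u)
    # limited cell slopes (periodic boundaries)
    s = [minmod(u[i] - u[(i - 1) % n], u[(i + 1) % n] - u[i]) for i in range(n)]
    # interface value at i+1/2, reconstructed from the upwind (left) cell
    flux = [u[i] + 0.5 * (1.0 - cfl) * s[i] for i in range(n)]
    # conservative flux-difference update
    return [u[i] - cfl * (flux[i] - flux[(i - 1) % n]) for i in range(n)]

u0 = [1.0 if 4 <= i < 12 else 0.0 for i in range(32)]  # square wave
u1 = muscl_step(u0, 0.5)

tv = lambda u: sum(abs(u[i] - u[i - 1]) for i in range(len(u)))
print(sum(u1), tv(u1) <= tv(u0) + 1e-12)
```

The flux-difference form keeps the scheme conservative (the cell sum is preserved), and the minmod limiter keeps the total variation from growing, which is exactly the monotonicity constraint that degrades accuracy at extrema.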
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed-form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
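The contrast between the two approximation styles can be made concrete with an invented example (a bar in tension, not one of the paper's test cases): for tip displacement u(A) = FL/(EA), the sensitivity equation du/dA = -u/A can be solved as a differential equation to give u(A) = u0·A0/A, which here reproduces the response exactly, while the linear Taylor expansion degrades for large perturbations:

```python
# Illustrative numbers: load (N), length (m), modulus (Pa), baseline area (m^2)
F, L, E = 1000.0, 2.0, 200e9
A0 = 1e-4

def u(A):
    # exact structural response of the bar
    return F * L / (E * A)

u0 = u(A0)
A = 1.5e-4  # a 50% perturbation of the design variable

taylor = u0 + (-u0 / A0) * (A - A0)  # first-order Taylor approximation
deb = u0 * A0 / A                    # closed-form solution of du/dA = -u/A

exact = u(A)
print(abs(taylor - exact) / exact, abs(deb - exact) / exact)
```

For this response the differential-equation-based form is exact at any perturbation size, while the Taylor approximation incurs a 25% relative error at a 50% design change.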
Objective criteria accurately predict amputation following lower extremity trauma.
Johansen, K; Daines, M; Howey, T; Helfet, D; Hansen, S T
1990-05-01
MESS (Mangled Extremity Severity Score) is a simple rating scale for lower extremity trauma, based on skeletal/soft-tissue damage, limb ischemia, shock, and age. Retrospective analysis of severe lower extremity injuries in 25 trauma victims demonstrated a significant difference between MESS values for 17 limbs ultimately salvaged (mean, 4.88 ± 0.27) and nine requiring amputation (mean, 9.11 ± 0.51) (p < 0.01). A prospective trial of MESS in lower extremity injuries managed at two trauma centers again demonstrated a significant difference between MESS values of 14 salvaged (mean, 4.00 ± 0.28) and 12 doomed (mean, 8.83 ± 0.53) limbs (p < 0.01). In both the retrospective survey and the prospective trial, a MESS value ≥ 7 predicted amputation with 100% accuracy. MESS may be useful in selecting trauma victims whose irretrievably injured lower extremities warrant primary amputation.
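An additive threshold score of this kind is trivial to compute; the sketch below is a toy illustration of the ≥7 decision rule only, with placeholder component points and a hypothetical helper name rather than the published scale's exact grading:

```python
# Toy MESS-style tally: points for skeletal/soft-tissue damage, ischemia,
# shock, and age are summed; a score >= 7 predicts amputation.
def mess_predicts_amputation(skeletal, ischemia, shock, age_points,
                             ischemia_over_6h=False):
    if ischemia_over_6h:
        ischemia *= 2  # the published scale doubles ischemia points after 6 h
    score = skeletal + ischemia + shock + age_points
    return score, score >= 7  # threshold reported to predict amputation

score, amputate = mess_predicts_amputation(3, 2, 1, 0, ischemia_over_6h=True)
print(score, amputate)
```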
Accurate Method for Determining Adhesion of Cantilever Beams
Michalske, T.A.; de Boer, M.P.
1999-01-08
Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.
Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations
2011-09-30
Curtis D. Mobley, Sequoia Scientific, Inc., Bellevue, WA. …incorporate extremely fast but accurate light calculations into coupled physical-biological-optical ocean ecosystem models as used for operational three-dimensional ecosystem predictions. Improvements in light calculations lead to improvements in predictions of chlorophyll concentrations and other…
Generating highly accurate prediction hypotheses through collaborative ensemble learning
Arsov, Nino; Pavlovski, Martin; Basnarkov, Lasko; Kocarev, Ljupco
2017-01-01
Ensemble generation is a natural and convenient way of achieving better generalization performance of learning algorithms by gathering their predictive capabilities. Here, we nurture the idea of ensemble-based learning by combining bagging and boosting for the purpose of binary classification. Since the former improves stability through variance reduction, while the latter ameliorates overfitting, the outcome of a multi-model that combines both strives toward a comprehensive net-balancing of the bias-variance trade-off. To further improve this, we alter the bagged-boosting scheme by introducing collaboration between the multi-model’s constituent learners at various levels. This novel stability-guided classification scheme is delivered in two flavours: during or after the boosting process. Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets. In both cases, our models obtained a 40% generalization error decrease. But their true ability to capture details in data was revealed through their application for protein detection in texture analysis of gel electrophoresis images. They achieve improved performance of approximately 0.9773 AUROC when compared to the AUROC of 0.9574 obtained by an SVM based on recursive feature elimination. PMID:28304378
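The variance-reduction half of the bagged-boosting combination described above is plain bootstrap aggregation. The sketch below shows only that generic idea with an invented 1-D dataset and weak threshold learners; it is not the authors' collaborative Gentle Boost scheme:

```python
import random

random.seed(1)

# Synthetic 1-D binary data: label is 1 iff x > 0.5
xs = [random.random() for _ in range(200)]
data = [(x, 1 if x > 0.5 else 0) for x in xs]

def fit_stump(sample):
    # pick the threshold with the fewest training errors on a coarse grid
    best_t, best_err = 0.0, float("inf")
    for t in (i / 20 for i in range(21)):
        err = sum((1 if x > t else 0) != y for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(x, thresholds):
    # majority vote across the bootstrap-trained stumps
    votes = sum(1 for t in thresholds if x > t)
    return 1 if 2 * votes > len(thresholds) else 0

# 25 bootstrap replicates, one weak learner per replicate
thresholds = [fit_stump(random.choices(data, k=len(data))) for _ in range(25)]
accuracy = sum(bagged_predict(x, thresholds) == y for x, y in data) / len(data)
print(accuracy)
```

Each stump sees a different bootstrap resample, so the vote averages away the variance of any single learner; boosting, by contrast, reweights the data sequentially to reduce bias, and the paper's contribution is coordinating the two.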
Accurate predictions for the production of vaporized water
Morin, E.; Montel, F.
1995-12-31
The production of water vaporized in the gas phase is controlled by the local conditions around the wellbore. The pressure gradient applied to the formation creates a sharp increase in the molar water content of the hydrocarbon phase approaching the well; this leads to a drop in the pore water saturation around the wellbore. The extent of the dehydrated zone which is formed is the key controlling the bottom-hole content of vaporized water. The maximum water content in the hydrocarbon phase at a given pressure, temperature and salinity is corrected by capillarity or adsorption phenomena depending on the actual water saturation. Describing the mass transfer of water between the hydrocarbon phases and the aqueous phase in the tubing gives a clear idea of vaporization effects on the formation of scales. Field examples are presented for gas fields with temperatures ranging between 140 °C and 180 °C, where water vaporization effects are significant. Conditions for salt plugging in the tubing are predicted.
WGS accurately predicts antimicrobial resistance in Escherichia coli
Technology Transfer Automated Retrieval System (TEKTRAN)
Objectives: To determine the effectiveness of whole-genome sequencing (WGS) in identifying resistance genotypes of multidrug-resistant Escherichia coli (E. coli) and whether these correlate with observed phenotypes. Methods: Seventy-six E. coli strains were isolated from farm cattle and measured f...
Stephanou, Pavlos S; Mavrantzas, Vlasis G
2014-06-07
We present a hierarchical computational methodology which permits the accurate prediction of the linear viscoelastic properties of entangled polymer melts directly from the chemical structure, chemical composition, and molecular architecture of the constituent chains. The method entails three steps: execution of long molecular dynamics simulations with moderately entangled polymer melts, self-consistent mapping of the accumulated trajectories onto a tube model and parameterization or fine-tuning of the model on the basis of detailed simulation data, and use of the modified tube model to predict the linear viscoelastic properties of significantly higher molecular weight (MW) melts of the same polymer. Predictions are reported for the zero-shear-rate viscosity η0 and the spectra of storage G'(ω) and loss G″(ω) moduli for several mono and bidisperse cis- and trans-1,4 polybutadiene melts as well as for their MW dependence, and are found to be in remarkable agreement with experimentally measured rheological data.
Testani, Jeffrey M.; Hanberg, Jennifer S.; Cheng, Susan; Rao, Veena; Onyebeke, Chukwuma; Laur, Olga; Kula, Alexander; Chen, Michael; Wilson, F. Perry; Darlington, Andrew; Bellumkonda, Lavanya; Jacoby, Daniel; Tang, W. H. Wilson; Parikh, Chirag R.
2015-01-01
Background Removal of excess sodium and fluid is a primary therapeutic objective in acute decompensated heart failure (ADHF), commonly monitored with fluid balance and weight loss. However, these parameters are frequently inaccurate or not collected, and require a delay of several hours after diuretic administration before they are available. Accessible tools for rapid and accurate prediction of diuretic response are needed. Methods and Results Based on well-established renal physiologic principles, an equation was derived to predict net sodium output using a spot urine sample obtained one or two hours following loop diuretic administration. This equation was then prospectively validated in 50 ADHF patients using meticulously obtained timed 6-hour urine collections to quantitate loop-diuretic-induced cumulative sodium output. Poor natriuretic response was defined as a cumulative sodium output of <50 mmol, a threshold that would result in a positive sodium balance with twice-daily diuretic dosing. Following a median dose of 3 mg (2–4 mg) of intravenous bumetanide, 40% of the population had a poor natriuretic response. The correlation between measured and predicted sodium output was excellent (r=0.91, p<0.0001). Poor natriuretic response could be accurately predicted with the sodium prediction equation (AUC=0.95, 95% CI 0.89–1.0, p<0.0001). Clinically recorded net fluid output had a weaker correlation (r=0.66, p<0.001) and a lesser ability to predict poor natriuretic response (AUC=0.76, 95% CI 0.63–0.89, p=0.002). Conclusions In patients being treated for ADHF, poor natriuretic response can be predicted soon after diuretic administration with excellent accuracy using a spot urine sample. PMID:26721915
Can Self-Organizing Maps Accurately Predict Photometric Redshifts?
NASA Astrophysics Data System (ADS)
Way, M. J.; Klose, C. D.
2012-03-01
We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey’s main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = zphot - zspec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
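A minimal SOM-as-regressor can be sketched in a few lines: train prototypes on a photometric feature, then label each node with the mean redshift of the training points it wins. Everything below (data, node count, learning schedules) is an illustrative invention in the spirit of the approach, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 1))   # stand-in photometric feature
z = 2.0 * X[:, 0]                          # synthetic "redshift" target

n_nodes = 20
w = np.linspace(0.0, 1.0, n_nodes).reshape(-1, 1)  # prototype vectors

# classic SOM training: pull the best-matching unit (BMU) and its neighbors
for epoch in range(20):
    sigma, lr = 2.0 * 0.9 ** epoch, 0.5 * 0.9 ** epoch
    for x in X:
        bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
        h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2.0 * sigma ** 2))
        w += lr * h[:, None] * (x - w)

# label each node with the mean target of its winners, then predict via BMU
bmus = np.argmin(np.abs(X - w[:, 0][None, :]), axis=1)
node_z = np.array([z[bmus == k].mean() if np.any(bmus == k) else 0.0
                   for k in range(n_nodes)])
pred = node_z[bmus]
rmse = float(np.sqrt(np.mean((pred - z) ** 2)))
print(rmse)
```

The shrinking neighborhood is what distinguishes a SOM from plain vector quantization: early epochs order the map globally, late epochs fine-tune individual prototypes.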
2015-01-01
Background Biclustering is a popular method for identifying under which experimental conditions biological signatures are co-expressed. However, the general biclustering problem is NP-hard, offering room to focus algorithms on specific biological tasks. We hypothesize that conditional co-regulation of genes is a key factor in determining cell phenotype and that accurately segregating conditions in biclusters will improve such predictions. Thus, we developed a bicluster sampled coherence metric (BSCM) for determining which conditions and signals should be included in a bicluster. Results Our BSCM calculates condition and cluster size specific p-values, and we incorporated these into the popular integrated biclustering algorithm cMonkey. We demonstrate that incorporation of our new algorithm significantly improves bicluster co-regulation scores (p-value = 0.009) and GO annotation scores (p-value = 0.004). Additionally, we used a bicluster based signal to predict whether a given experimental condition will result in yeast peroxisome induction. Using the new algorithm, the classifier accuracy improves from 41.9% to 76.1% correct. Conclusions We demonstrate that the proposed BSCM helps determine which signals ought to be co-clustered, resulting in more accurately assigned bicluster membership. Furthermore, we show that BSCM can be extended to more accurately detect under which experimental conditions the genes are co-clustered. Features derived from this more accurate analysis of conditional regulation result in a dramatic improvement in the ability to predict a cellular phenotype in yeast. The latest cMonkey is available for download at https://github.com/baliga-lab/cmonkey2. The experimental data and source code featured in this paper are available at http://AitchisonLab.com/BSCM. BSCM has been incorporated in the official cMonkey release. PMID:25881257
Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O Anatole; Müller, Klaus-Robert; Tkatchenko, Alexandre
2015-06-18
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
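The "Bag of Bonds" representation named above has a simple structure: Coulomb-like pair terms Z_i·Z_j/r_ij are grouped into per-element-pair "bags", each bag is sorted, and bags are zero-padded to fixed lengths so every molecule maps to a vector of the same size. The sketch below illustrates this idea only; the toy water geometry (coordinates in angstroms) and bag sizes are illustrative choices, not the paper's dataset:

```python
import math
from itertools import combinations

def bag_of_bonds(atoms, bag_sizes):
    # atoms: list of (atomic_number, (x, y, z)); bag_sizes: fixed bag lengths
    bags = {pair: [] for pair in bag_sizes}
    for (z1, p1), (z2, p2) in combinations(atoms, 2):
        key = tuple(sorted((z1, z2)))
        bags[key].append(z1 * z2 / math.dist(p1, p2))  # Coulomb-matrix entry
    vec = []
    for key in sorted(bag_sizes):
        entries = sorted(bags[key], reverse=True)      # sort within each bag
        vec.extend(entries + [0.0] * (bag_sizes[key] - len(entries)))
    return vec

water = [(8, (0.00, 0.00, 0.0)),   # O
         (1, (0.96, 0.00, 0.0)),   # H
         (1, (-0.24, 0.93, 0.0))]  # H
v = bag_of_bonds(water, {(1, 1): 1, (1, 8): 2})
print(len(v), [round(t, 3) for t in v])
```

Sorting within bags makes the vector invariant to atom ordering, and fixed padding makes molecules of different sizes comparable, which is what lets a kernel or neural model learn across chemical compound space.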
NASA Astrophysics Data System (ADS)
Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat
2015-05-01
Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance to improve reliability and decrease machine breakdowns and maintenance costs. Bearings are among the most important components in industry; they need to be monitored, and the user should be able to predict their RUL. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL using Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of a Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. WD is used only in the training phase, to fit the measurements and to avoid areas of fluctuation in the time domain. The SFAM training process is based on fitted measurements at the present and previous inspection time points as input. However, the SFAM testing process is based on real measurements at the present and previous inspections. Thanks to its fuzzy learning process, SFAM performs well in learning nonlinear time series. As output, seven classes are defined: a healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can be applied to the prognostics of various other mechanical assets.
CT Scan Method Accurately Assesses Humeral Head Retroversion
Boileau, P.; Mazzoleni, N.; Walch, G.; Urien, J. P.
2008-01-01
Humeral head retroversion is not well described, and the literature is controversial regarding the accuracy of measurement methods and the ranges of normal values. We therefore determined normal humeral head retroversion and assessed the measurement methods. We measured retroversion in 65 cadaveric humeri, including 52 paired specimens, using four methods: radiographic, computed tomography (CT) scan, computer-assisted, and direct methods. We also assessed the distance between the humeral head central axis and the bicipital groove. CT scan methods accurately measure humeral head retroversion, while radiographic methods do not. The retroversion was 17.9° with respect to the transepicondylar axis and 21.5° with respect to the trochlear tangent axis. The difference between the right and left humeri was 8.9°. The distance between the central axis of the humeral head and the bicipital groove was 7.0 mm and was consistent between right and left humeri. Humeral head retroversion may be most accurately obtained using the patient’s own anatomic landmarks or, if those are not identifiable, using the retroversion measured by the same landmarks on the contralateral side or the bicipital groove. PMID:18264854
Motor degradation prediction methods
Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.
1996-12-01
Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result from high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e., > 15 seconds), or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low-radiation-dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures.
Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model
Li, Zhen; Zhang, Renyu
2017-01-01
Motivation Protein contacts contain key information for the understanding of protein structure and function; thus, contact prediction from sequence is an important problem. Recently, exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. Method This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including the output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and the complex sequence-structure relationship and thus obtain higher-quality contact prediction regardless of how many sequence homologs are available for the proteins in question. Results Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact
Ihm, Yungok; Cooper, Valentino R; Gallego, Nidia C; Contescu, Cristian I; Morris, James R
2014-01-01
We demonstrate a successful, efficient framework for predicting gas adsorption properties in real materials based on first-principles calculations, with a specific comparison of experiment and theory for methane adsorption in activated carbons. These carbon materials have different pore size distributions, leading to a variety of uptake characteristics. Utilizing these distributions, we accurately predict experimental uptakes and heats of adsorption without empirical potentials or lengthy simulations. We demonstrate that materials with smaller pores have higher heats of adsorption, leading to a higher gas density in these pores. This pore-size dependence must be accounted for, in order to predict and understand the adsorption behavior. The theoretical approach combines: (1) ab initio calculations with a van der Waals density functional to determine adsorbent-adsorbate interactions, and (2) a thermodynamic method that predicts equilibrium adsorption densities by directly incorporating the calculated potential energy surface in a slit pore model. The predicted uptake at P=20 bar and T=298 K is in excellent agreement for all five activated carbon materials used. This approach uses only the pore-size distribution as an input, with no fitting parameters or empirical adsorbent-adsorbate interactions, and thus can be easily applied to other adsorbent-adsorbate combinations.
SIFTER search: a web server for accurate phylogeny-based protein function prediction
Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.
2015-01-01
We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded. PMID:25979264
NASA Technical Reports Server (NTRS)
Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris
2011-01-01
A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variety of experimental data sets, such as UH60-A data, DNW test data and HART II test data.
Mapping methods for computationally efficient and accurate structural reliability
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1992-01-01
Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of the following: (1) deterministic structural analyses with fine (convergent) finite element meshes; (2) probabilistic structural analyses with coarse finite element meshes; (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes; and (4) a probabilistic mapping. The results show that the scatter in the probabilistic structural responses and structural reliability can be efficiently predicted using a coarse finite element model and proper mapping methods with good accuracy. Therefore, large structures can be efficiently analyzed probabilistically using finite element methods.
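One plausible, deliberately simple form of such a mapping (an illustrative sketch, not NASA's implementation) is to scale the cheap coarse-mesh probabilistic samples by a deterministic coarse-to-fine correction ratio obtained from a single fine-mesh run:

```python
import random

def mapped_fine_samples(coarse_response, fine_det, coarse_det, samples):
    """Scale coarse probabilistic samples by the deterministic ratio.

    `fine_det` and `coarse_det` come from one deterministic analysis
    on each mesh; `coarse_response` is the cheap coarse-mesh model
    evaluated for every random input sample.
    """
    ratio = fine_det / coarse_det
    return [coarse_response(x) * ratio for x in samples]

random.seed(0)
loads = [random.gauss(100.0, 10.0) for _ in range(1000)]
# Hypothetical linear response models: the coarse mesh underpredicts by 5%.
coarse = lambda load: 0.95 * 2.0 * load
mapped = mapped_fine_samples(coarse, fine_det=200.0, coarse_det=190.0,
                             samples=loads)
mean = sum(mapped) / len(mapped)  # approximates the fine-mesh mean response
```

The scatter is thus estimated from many coarse runs, while only one convergent fine-mesh analysis is ever needed, which is the efficiency argument of the abstract.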
Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H
2017-01-09
The binding and contribution of transcription factors (TFs) to cell-specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, Histone-Marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as quantitative measures of TF binding strength. Using machine learning, we find that including low-affinity binding sites improves our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively.
An Accurate and Efficient Method of Computing Differential Seismograms
NASA Astrophysics Data System (ADS)
Hu, S.; Zhu, L.
2013-12-01
Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thompson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P wave velocity, shear wave velocity and density). We then derived the partial derivatives of surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained by using the frequency-wavenumber double integration method. The implementation is computationally efficient and the total computing time is proportional to the time of computing the seismogram itself, i.e., independent of the number of layers in the model. We verified the correctness of results by comparing with differential seismograms computed using the finite difference method. Our results are more accurate because of the analytical nature of the derived partial derivatives.
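The verification step they describe, checking an analytic partial derivative against a finite difference, can be illustrated with a toy "seismogram" (a hypothetical one-parameter model, not the authors' propagator-matrix code):

```python
import math

def seismogram(v, t=1.0):
    """Toy model output as a function of a layer velocity v."""
    return math.sin(2.0 * math.pi * t / v)

def analytic_derivative(v, t=1.0):
    """Exact partial derivative of the toy seismogram w.r.t. v."""
    return -math.cos(2.0 * math.pi * t / v) * 2.0 * math.pi * t / v ** 2

v, h = 3.5, 1e-5
# Central finite difference: agrees with the analytic value to O(h^2),
# but needs two extra model evaluations per parameter -- the cost the
# analytic approach avoids.
fd = (seismogram(v + h) - seismogram(v - h)) / (2.0 * h)
```

In the real method each layer parameter would need such extra solves, so analytic derivatives are both faster and free of the step-size error visible here.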
Accurate optical CD profiler based on specialized finite element method
NASA Astrophysics Data System (ADS)
Carrero, Jesus; Perçin, Gökhan
2012-03-01
As the semiconductor industry is moving to very low-k1 patterning solutions, the metrology problems facing process engineers are becoming much more complex. Choosing the right optical critical dimension (OCD) metrology technique is essential for bridging the metrology gap and achieving the required manufacturing volume throughput. The critical dimension scanning electron microscope (CD-SEM) measurement is usually distorted by the high aspect ratio of the photoresist and hard mask layers. CD-SEM measurements cease to correlate with complex three-dimensional profiles, such as the cases for double patterning and FinFETs, thus necessitating sophisticated, accurate and fast computational methods to bridge the gap. In this work, a suite of computational methods that complement advanced OCD equipment, enabling it to operate at higher accuracy, is developed. In this article, a novel method for accurately modeling OCD profiles is presented. A finite element formulation in primal form is used to discretize the equations. The implementation uses specialized finite element spaces to solve Maxwell equations in two dimensions.
Method for Accurate Surface Temperature Measurements During Fast Induction Heating
NASA Astrophysics Data System (ADS)
Larregain, Benjamin; Vanderesse, Nicolas; Bridier, Florent; Bocher, Philippe; Arkinson, Patrick
2013-07-01
A robust method is proposed for the measurement of surface temperature fields during induction heating. It is based on the original coupling of temperature-indicating lacquers and a high-speed camera system. Image analysis tools have been implemented to automatically extract the temporal evolution of isotherms. This method was applied to the fast induction treatment of a 4340 steel spur gear, allowing the full history of surface isotherms to be accurately documented for a sequential heating, i.e., a medium frequency preheating followed by a high frequency final heating. Three isotherms, i.e., 704, 816, and 927°C, were acquired every 0.3 ms with a spatial resolution of 0.04 mm per pixel. The information provided by the method is described and discussed. Finally, the transformation temperature Ac1 is linked to the temperature on specific locations of the gear tooth.
Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru
2016-01-01
Protein-protein interactions (PPIs) occur at almost all levels of cell functions and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs identified by experimental approaches cover only a small fraction of the whole PPI networks, and further, those approaches hold inherent disadvantages, such as being time-consuming and expensive and having high false-positive rates. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixture of physicochemical and evolutionary-based feature extraction method for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The main improvements of the proposed method are an effective feature extraction method that captures discriminative features from evolutionary information and physicochemical characteristics, and the use of a powerful and robust DVM classifier. To the best of our knowledge, this is the first time that the DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to the traditional experimental methods for future proteomics research. PMID:27571061
Novel dispersion tolerant interferometry method for accurate measurements of displacement
NASA Astrophysics Data System (ADS)
Bradu, Adrian; Maria, Michael; Leick, Lasse; Podoleanu, Adrian G.
2015-05-01
We demonstrate that the recently proposed master-slave interferometry method is able to provide true dispersion-free depth profiles in a spectrometer-based set-up that can be used for accurate displacement measurements in sensing and optical coherence tomography. The proposed technique is based on correlating the channelled spectra produced by the linear camera in the spectrometer with previously recorded masks. As the technique is not based on Fourier transformations (FT), it does not require any resampling of data and is immune to any amount of dispersion left unbalanced in the system. In order to prove the tolerance of the technique to dispersion, different lengths of optical fiber are used in the interferometer to introduce dispersion, and it is demonstrated that neither the sensitivity profile versus optical path difference (OPD) nor the depth resolution is affected. By contrast, it is shown that the classical FT-based methods using calibrated data provide less accurate optical path length measurements and exhibit a quicker decay of sensitivity with OPD.
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
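The gain from Padé-type (compact) stencils can be checked numerically. The sketch below (an illustrative standard compact scheme, not necessarily the authors' exact stencil) evaluates both residuals on the exact solution u = sin(kx) of the 1-D Helmholtz equation u'' + k²u = 0 on a uniform grid:

```python
import math

def residuals(k, h, x=0.7):
    """Residuals of two Helmholtz stencils applied to u = sin(kx)."""
    um, u0, up = (math.sin(k * (x + d)) for d in (-h, 0.0, h))
    # Standard pointwise stencil: u_{j-1} - 2u_j + u_{j+1} + (kh)^2 u_j.
    standard = um - 2.0 * u0 + up + (k * h) ** 2 * u0
    # Compact (Pade-type) stencil: weighted-average treatment of the
    # k^2 u term over the three points.
    compact = (um - 2.0 * u0 + up
               + (k * h) ** 2 * (um + 10.0 * u0 + up) / 12.0)
    return abs(standard), abs(compact)

k = 2.0 * math.pi
s1, c1 = residuals(k, 0.01)
s2, c2 = residuals(k, 0.005)
# Halving h cuts the standard residual ~16x (O(h^4) raw stencil error,
# i.e. second-order truncation) but the compact residual ~64x (O(h^6),
# i.e. fourth-order truncation).
```

The same two-order gain is what reduces spurious dispersion in the multidimensional schemes of the abstract.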
Accurate prediction of the response of freshwater fish to a mixture of estrogenic chemicals.
Brian, Jayne V; Harris, Catherine A; Scholze, Martin; Backhaus, Thomas; Booy, Petra; Lamoree, Marja; Pojana, Giulio; Jonkers, Niels; Runnalls, Tamsin; Bonfà, Angela; Marcomini, Antonio; Sumpter, John P
2005-06-01
Existing environmental risk assessment procedures are limited in their ability to evaluate the combined effects of chemical mixtures. We investigated the implications of this by analyzing the combined effects of a multicomponent mixture of five estrogenic chemicals using vitellogenin induction in male fathead minnows as an end point. The mixture consisted of estradiol, ethynylestradiol, nonylphenol, octylphenol, and bisphenol A. We determined concentration-response curves for each of the chemicals individually. The chemicals were then combined at equipotent concentrations and the mixture tested using a fixed-ratio design. The effects of the mixture were compared with those predicted by the model of concentration addition using biomathematical methods, which revealed that there was no deviation between the observed and predicted effects of the mixture. These findings demonstrate that estrogenic chemicals have the capacity to act together in an additive manner and that their combined effects can be accurately predicted by concentration addition. We also explored the potential for mixture effects at low concentrations by exposing the fish to each chemical at one-fifth of its median effective concentration (EC50). Individually, the chemicals did not induce a significant response, although their combined effects were consistent with the predictions of concentration addition. This demonstrates the potential for estrogenic chemicals to act additively at environmentally relevant concentrations. These findings highlight the potential for existing environmental risk assessment procedures to underestimate the hazard posed by mixtures of chemicals that act via a similar mode of action, thereby leading to erroneous conclusions of absence of risk.
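The concentration-addition model itself is a one-line formula: for mixture fractions p_i and individual potencies EC50_i, the mixture potency is EC50_mix = 1 / Σ(p_i / EC50_i). A minimal sketch with hypothetical EC50 values (not the paper's measurements):

```python
def ec50_mixture(fractions, ec50s):
    """Predict the mixture EC50 under concentration addition."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

# Five components at equipotent fractions: each contributes equal toxic
# units, so p_i is proportional to EC50_i. EC50 values are hypothetical.
ec50s = [2.0, 1.0, 5000.0, 20000.0, 100000.0]
total = sum(ec50s)
fractions = [e / total for e in ec50s]  # equipotent fixed-ratio design
mix = ec50_mixture(fractions, ec50s)
```

For the equipotent design the prediction reduces to the arithmetic mean of the individual EC50s, which is why components far below their own EC50 can still sum to a significant mixture effect.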
An Accurate Projector Calibration Method Based on Polynomial Distortion Representation
Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua
2015-01-01
In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages in traditional methods and achieves a higher accuracy. The proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247
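A common polynomial distortion representation (shown here as a generic radial model with hypothetical coefficients, not the paper's fitted values) scales an ideal normalized pixel by a polynomial in r²:

```python
def distort(x, y, k1, k2):
    """Map an ideal normalized pixel (x, y) to its distorted position.

    Classic radial polynomial model: scale = 1 + k1*r^2 + k2*r^4.
    The coefficients k1, k2 are what a calibration procedure estimates.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A point near the image corner is displaced; the principal point is not.
xc, yc = distort(0.8, 0.6, k1=-0.05, k2=0.01)
x0, y0 = distort(0.0, 0.0, k1=-0.05, k2=0.01)
```

Adding higher-order terms to such a polynomial is what "reducing the residuals of the traditional distortion representation" amounts to in practice.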
Garcia Lopez, Sebastian; Kim, Philip M.
2014-01-01
Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
NASA Astrophysics Data System (ADS)
Garrison, Stephen L.
2005-07-01
The combination of molecular simulations and potentials obtained from quantum chemistry is shown to be able to provide reasonably accurate thermodynamic property predictions. Gibbs ensemble Monte Carlo simulations are used to understand the effects of small perturbations to various regions of the model Lennard-Jones 12-6 potential. However, when the phase behavior and second virial coefficient are scaled by the critical properties calculated for each potential, the results obey a corresponding states relation, suggesting a non-uniqueness problem for interaction potentials fit to experimental phase behavior. Several variations of a procedure collectively referred to as quantum mechanical Hybrid Methods for Interaction Energies (HM-IE) are developed and used to accurately estimate interaction energies from CCSD(T) calculations with a large basis set in a computationally efficient manner for the neon-neon, acetylene-acetylene, and nitrogen-benzene systems. Using these results and methods, an ab initio, pairwise-additive, site-site potential for acetylene is determined and then improved using results from molecular simulations using this initial potential. The initial simulation results also indicate that a limited range of energies is important for accurate phase behavior predictions. Second virial coefficients calculated from the improved potential indicate that one set of experimental data in the literature is likely erroneous. This prescription is then applied to methanethiol. Difficulties in modeling the effects of the lone pair electrons suggest that charges on the lone pair sites negatively impact the ability of the intermolecular potential to describe certain orientations, but that the lone pair sites may be necessary to reasonably duplicate the interaction energies for several orientations. Two possible methods for incorporating the effects of three-body interactions into simulations within the pairwise-additivity formulation are also developed. A low density
An Effective Method to Accurately Calculate the Phase Space Factors for β⁻β⁻ Decay
Neacsu, Andrei; Horoi, Mihai
2016-01-01
Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. We present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.
Ad hoc methods for accurate determination of Bader's atomic boundary
NASA Astrophysics Data System (ADS)
Polestshuk, Pavel M.
2013-08-01
In addition to the recently published triangulation method [P. M. Polestshuk, J. Comput. Chem. 34, 206 (2013); doi:10.1002/jcc.23121], two new highly accurate approaches, ZFSX and SINTY, for the integration over an atomic region covered by a zero-flux surface (zfs) were developed and efficiently interfaced into the TWOE program. The ZFSX method was realized as three independent modules (ZFSX-1, ZFSX-3, and ZFSX-5) handling interatomic surfaces of different complexity. Details of the algorithmic implementation of ZFSX and SINTY are discussed. Special attention is paid to an extended analysis of errors in the calculation of atomic properties. It was shown that uncertainties in zfs determination caused by the ZFSX and SINTY approaches contribute negligibly (less than 10⁻⁶ a.u.) to the total atomic integration errors. Moreover, the new methods are able to evaluate atomic integrals in a reasonable time and can be universally applied to systems of any complexity. It is suggested, therefore, that ZFSX and SINTY can be regarded as benchmark methods for the computation of any Quantum Theory of Atoms in Molecules atomic property.
An accurate moving boundary formulation in cut-cell methods
NASA Astrophysics Data System (ADS)
Schneiders, Lennart; Hartmann, Daniel; Meinke, Matthias; Schröder, Wolfgang
2013-02-01
A cut-cell method for Cartesian meshes to simulate viscous compressible flows with moving boundaries is presented. We focus on eliminating unphysical oscillations occurring in Cartesian grid methods extended to moving-boundary problems. In these methods, cells either lie completely in the fluid or solid region or are intersected by the boundary. For the latter cells, the time dependent volume fraction lying in the fluid region can be so small that explicit time-integration schemes become unstable and a special treatment of these cells is necessary. When the boundary moves, a fluid cell may become a cut cell or a solid cell may become a small cell at the next time level. This causes an abrupt change in the discretization operator and a suddenly modified truncation error of the numerical scheme. This temporally discontinuous alteration is shown to act like an unphysical source term, which deteriorates the numerical solution, i.e., it generates unphysical oscillations in the hydrodynamic forces exerted on the moving boundary. We develop an accurate moving boundary formulation based on the varying discretization operators yielding a cut-cell method which avoids these discontinuities. Results for canonical two- and three-dimensional test cases evidence the accuracy and robustness of the newly developed scheme.
Accurate measurement method for tube's endpoints based on machine vision
NASA Astrophysics Data System (ADS)
Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng
2017-01-01
Tubes are used widely in aerospace vehicles, and their accurate assembly directly affects the assembly reliability and the quality of products. It is important to measure a processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm feasibility, 11 tubes were processed to remove the reflected light, and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability is 0.167 mm and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed machine-vision method can measure a tube's endpoints without any surface treatment or tools and enables online measurement.
Fast and accurate pressure-drop prediction in straightened atherosclerotic coronary arteries.
Schrauwen, Jelle T C; Koeze, Dion J; Wentzel, Jolanda J; van de Vosse, Frans N; van der Steen, Anton F W; Gijsen, Frank J H
2015-01-01
Atherosclerotic disease progression in coronary arteries is influenced by wall shear stress. To compute patient-specific wall shear stress, computational fluid dynamics (CFD) is required. In this study we propose a method for computing the pressure-drop in regions proximal and distal to a plaque, which can serve as a boundary condition in CFD. As a first step towards exploring the proposed method we investigated ten straightened coronary arteries. First, the flow fields were calculated with CFD and velocity profiles were fitted on the results. Second, the Navier-Stokes equation was simplified and solved with the fitted velocity profiles to obtain a pressure-drop estimate (Δp1). Next, Δp1 was compared to the pressure-drop from CFD (ΔpCFD) as a validation step. Finally, the velocity profiles, and thus the pressure-drop, were predicted based on geometry and flow, resulting in Δpgeom. We found that Δp1 adequately estimated ΔpCFD with velocity profiles that have one free parameter β. This β was successfully related to geometry and flow, resulting in an excellent agreement between ΔpCFD and Δpgeom: 3.9 ± 4.9% difference at Re = 150. We showed that this method can quickly and accurately predict the pressure-drop on the basis of geometry and flow in straightened coronary arteries that are mildly diseased.
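As a rough illustration of reduced-order pressure-drop estimation (a sketch under much stronger assumptions than the study's fitted profiles: fully developed laminar Poiseuille flow in a straight rigid tube), the pressure drop follows directly from geometry and flow rate:

```python
import math

def pressure_drop_poiseuille(q, mu, length, radius):
    """Pressure drop (Pa) for fully developed laminar flow in a straight
    circular tube: dp = 8 * mu * L * Q / (pi * R**4)."""
    return 8.0 * mu * length * q / (math.pi * radius**4)

# Illustrative values (hypothetical, not from the study): blood-like
# viscosity, a 3 mm diameter coronary segment, 1 mL/s flow.
mu = 3.5e-3   # dynamic viscosity, Pa*s
q = 1.0e-6    # volumetric flow rate, m^3/s
L = 0.05      # segment length, m
R = 1.5e-3    # lumen radius, m
dp = pressure_drop_poiseuille(q, mu, L, R)
print(f"estimated pressure drop: {dp:.1f} Pa")
```

The study's estimate instead retains a fitted profile shape parameter β, but the strong sensitivity to radius and the linear dependence on flow and length are analogous.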
NASA Astrophysics Data System (ADS)
Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter
2016-05-01
printability of defects at wafer level and automates the process of defect dispositioning from images captured using a high resolution inspection machine. It first eliminates false defects due to registration, focus errors, image capture errors, and random noise introduced during inspection. For the remaining real defects, actual mask-like contours are generated using the Calibre® ILT solution [1][2], which is enhanced to predict the actual mask contours from high resolution defect images. This enables accurate prediction of defect contours, which is not possible from images captured using the inspection machine alone because some information is already lost due to optical effects. Calibre's simulation engine is used to generate images at wafer level using scanner optical conditions and the mask-like contours as input. The tool then analyses the simulated images and predicts defect printability. It automatically calculates the maximum CD variation and decides which defects are severe enough to affect patterns on the wafer. In this paper, we assess the printability of defects for masks at advanced technology nodes. In particular, we compare the recovered mask contours with contours extracted from SEM images of the mask and compare simulation results with AIMS™ for a variety of defects and patterns. The results of the printability assessment and the accuracy of the comparison are presented. We also suggest how this method can be extended to predict the printability of defects identified on EUV photomasks.
1994-12-31
This conference was held December 4–8, 1994 in Asilomar, California. The purpose of the meeting was to provide a forum for the exchange of state-of-the-art information concerning the prediction of protein structure. Attention is focused on the following: comparative modeling; sequence-to-fold assignment; and ab initio folding.
Accurate Prediction of One-Dimensional Protein Structure Features Using SPINE-X.
Faraggi, Eshel; Kloczkowski, Andrzej
2017-01-01
Accurate prediction of protein secondary structure and other one-dimensional structure features is essential for accurate sequence alignment, three-dimensional structure modeling, and function prediction. SPINE-X is a software package to predict secondary structure as well as accessible surface area and the dihedral angles ϕ and ψ. For secondary structure, SPINE-X achieves an accuracy of between 81% and 84%, depending on the dataset and choice of tests. The Pearson correlation coefficient for accessible surface area prediction is 0.75, and the mean absolute errors for the ϕ and ψ dihedral angles are 20° and 33°, respectively. The source code and Linux executables for SPINE-X are available from Research and Information Systems at http://mamiris.com.
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.
Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish
2016-04-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.
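A minimal synthetic sketch of the linear-nonlinear pipeline described above (all data here are simulated; the paper estimates the low-dimensional subspace with PCA, while this toy uses the spike-triggered average, which recovers the same leading direction for symmetric Gaussian stimuli):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only, not from the paper): 5000 stimulation
# patterns on 20 electrodes, with spikes drawn from a hidden linear filter
# passed through a sigmoid.
n_trials, n_electrodes = 5000, 20
stimuli = rng.normal(size=(n_trials, n_electrodes))
true_filter = rng.normal(size=n_electrodes)
p_spike = 1.0 / (1.0 + np.exp(-(stimuli @ true_filter)))
spikes = rng.random(n_trials) < p_spike

# Linear stage: estimate the electrical receptive field (ERF) direction
# from the spike-triggered stimulus ensemble.
erf = stimuli[spikes].mean(axis=0)
erf /= np.linalg.norm(erf)

# Nonlinear stage: empirical spike probability as a function of the
# projection onto the ERF (10 quantile bins).
proj = stimuli @ erf
edges = np.quantile(proj, np.linspace(0.0, 1.0, 11))
idx = np.clip(np.digitize(proj, edges[1:-1]), 0, 9)
rate = np.array([spikes[idx == k].mean() for k in range(10)])
print(np.round(rate, 2))  # roughly monotonic spike probability per bin
```

The low rate in the bottom bins and near-saturated rate in the top bins is the estimated nonlinearity; in the paper this stage maps the subspace projection to spiking probability for arbitrary multi-electrode patterns.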
Accurate load prediction by BEM with airfoil data from 3D RANS simulations
NASA Astrophysics Data System (ADS)
Schneider, Marc S.; Nitzsche, Jens; Hennings, Holger
2016-09-01
In this paper, two methods for the extraction of airfoil coefficients from 3D CFD simulations of a wind turbine rotor are investigated, and these coefficients are used to improve the load prediction of a BEM code. The coefficients are extracted from a number of steady RANS simulations, using either averaging of velocities in annular sections, or an inverse BEM approach for determination of the induction factors in the rotor plane. It is shown that these 3D rotor polars are able to capture the rotational augmentation at the inner part of the blade as well as the load reduction by 3D effects close to the blade tip. They are used as input to a simple BEM code and the results of this BEM with 3D rotor polars are compared to the predictions of BEM with 2D airfoil coefficients plus common empirical corrections for stall delay and tip loss. While BEM with 2D airfoil coefficients produces a very different radial distribution of loads than the RANS simulation, the BEM with 3D rotor polars manages to reproduce the loads from RANS very accurately for a variety of load cases, as long as the blade pitch angle is not too different from the cases from which the polars were extracted.
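The BEM baseline referred to above can be sketched as a per-annulus induction iteration. This is a generic textbook momentum-balance loop under assumed 2D polars, not the authors' code, and it omits tip-loss and high-load corrections for brevity:

```python
import math

def bem_annulus(tsr_local, sigma, cl_func, cd_func, twist,
                tol=1e-8, max_iter=200):
    """Fixed-point iteration for the axial/tangential induction factors
    (a, a') in a single annulus of a blade-element momentum code.
    tsr_local: local speed ratio; sigma: local solidity; twist in rad."""
    a, ap = 0.3, 0.0
    for _ in range(max_iter):
        phi = math.atan2(1.0 - a, tsr_local * (1.0 + ap))  # inflow angle
        alpha = phi - twist                                # angle of attack
        cn = cl_func(alpha) * math.cos(phi) + cd_func(alpha) * math.sin(phi)
        ct = cl_func(alpha) * math.sin(phi) - cd_func(alpha) * math.cos(phi)
        k = sigma * cn / (4.0 * math.sin(phi) ** 2)
        a_new = k / (1.0 + k)                              # axial momentum balance
        kp = sigma * ct / (4.0 * math.sin(phi) * math.cos(phi))
        ap_new = kp / (1.0 - kp)                           # angular momentum balance
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            return a_new, ap_new
        a, ap = a_new, ap_new
    return a, ap

# Hypothetical 2D-polar closure: thin-airfoil lift slope, constant drag.
cl = lambda alpha: 2.0 * math.pi * alpha
cd = lambda alpha: 0.01
a, ap = bem_annulus(tsr_local=5.0, sigma=0.05, cl_func=cl, cd_func=cd,
                    twist=math.radians(2.0))
print(round(a, 3), round(ap, 4))
```

Swapping `cl_func`/`cd_func` from 2D polars to the 3D rotor polars extracted from RANS is, conceptually, the substitution the paper evaluates.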
Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method
NASA Astrophysics Data System (ADS)
Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben
2010-05-01
Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emission rate is given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
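The far-field ratio described above reduces to one line of arithmetic. The numbers below are hypothetical, and the result is a molar flux (a mass flux would additionally need the CH4-to-tracer molar-mass ratio):

```python
def tracer_dilution_flux(ch4_enhancement, tracer_enhancement, tracer_rate):
    """Far-field tracer dilution estimate: once the two plumes are well
    mixed, source rate = tracer release rate * (CH4 : tracer ratio).
    Enhancements are above-background mole fractions in the same units."""
    return tracer_rate * ch4_enhancement / tracer_enhancement

# Hypothetical numbers: tracer released at 2.0 mol/min; downwind
# enhancements of 400 ppb CH4 vs 25 ppb tracer above background.
print(tracer_dilution_flux(400.0, 25.0, 2.0))  # 32.0 (mol CH4 / min)
```

Because both gases experience the same transport and dispersion, the ratio cancels the atmospheric variability that makes direct flux estimation hard.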
NASA Astrophysics Data System (ADS)
Rey, M.; Nikitin, A. V.; Tyuterev, V.
2014-06-01
Knowledge of near-infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs and for other astrophysical applications 1,2,3. For example, atmospheric models have been developed to analyze exoplanets, creating the need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on having adequate and reliable molecular data under extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and line-by-line extraction is clearly not feasible in laboratory measurements. This large amount of data can thus be interpreted only with reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first is based on empirically fitted effective spectroscopic models. Another way of computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. These do not yet reach spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions, including resonance couplings, over a wide spectral range. The final aim of this work is to provide reliable predictions which are quantitatively accurate with respect to the precision of available observations and as complete as possible. All this requires extensive first-principles quantum mechanical calculations based on three necessary ingredients: (i) accurate intramolecular potential energy surface and dipole moment surface components well-defined over a large range of vibrational displacements and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical
NASA Astrophysics Data System (ADS)
Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.
2016-10-01
The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
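The empirical correction mentioned above amounts to subtracting the mean residual from the predicted peak positions. A sketch with made-up peak values (not the paper's data):

```python
import statistics

# Hypothetical TD-DFT peak predictions vs experiment (nm). The paper
# reports systematic offsets (e.g. ~16 nm overestimation for PBE0 in
# solution) that shrink substantially after an empirical correction.
predicted = [398.0, 412.5, 440.0, 451.0, 470.5]
measured  = [383.0, 396.0, 425.0, 434.5, 452.0]

residuals = [p - m for p, m in zip(predicted, measured)]
offset = statistics.mean(residuals)           # systematic bias
corrected = [p - offset for p in predicted]   # empirically corrected peaks
rmse_before = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
rmse_after = (sum((c - m) ** 2 for c, m in zip(corrected, measured))
              / len(measured)) ** 0.5
print(f"offset = {offset:.1f} nm, RMSE {rmse_before:.1f} -> {rmse_after:.1f} nm")
```

Removing the mean bias leaves only the scatter, which mirrors how the paper's corrected uncertainties drop to roughly the standard deviation of the raw errors.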
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes.
Accurate and Robust Genomic Prediction of Celiac Disease Using Statistical Learning
Abraham, Gad; Tye-Din, Jason A.; Bhalala, Oneil G.; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael
2014-01-01
Practical application of genomic-based risk stratification to clinical diagnosis is appealing, yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity replicated both in cross-validation within each cohort (AUC of 0.87–0.89) and in independent replication across cohorts (AUC of 0.86–0.9), despite differences in ethnicity. The models explained 30–35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher risk for CD can identify those that would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases. PMID:24550740
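The threshold-dependent negative predictive value described above can be sketched as follows (toy scores and labels, not the study's cohorts):

```python
def npv_at_threshold(scores, labels, threshold):
    """Negative predictive value of a risk-score cut-off:
    P(no disease | score < threshold). Returns None if nobody is below."""
    below = [(s, y) for s, y in zip(scores, labels) if s < threshold]
    if not below:
        return None
    return sum(1 for _, y in below if y == 0) / len(below)

# Toy data (illustrative): risk scores for cases (1) and controls (0).
scores = [0.1, 0.2, 0.25, 0.3, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   1,   1,   0,   1]
print(npv_at_threshold(scores, labels, 0.3))  # 1.0: all below the cut are controls
```

Raising the cut-off trades NPV against the fraction of people who can be ruled out without further testing, which is the adjustable trade-off the abstract refers to.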
Carney, Paul R.; Myers, Stephen; Geyer, James D.
2011-01-01
Epilepsy, one of the most common neurological diseases, affects over 50 million people worldwide. Epilepsy can have a broad spectrum of debilitating medical and social consequences. Although antiepileptic drugs have helped treat millions of patients, roughly a third of all patients have seizures that are refractory to pharmacological intervention. The evolution of our understanding of this dynamic disease leads to new treatment possibilities. There is great interest in the development of devices that incorporate algorithms capable of detecting early onset of seizures or even predicting them hours before they occur. The lead time provided by these new technologies will allow for new types of interventional treatment. In the near future, seizures may be detected and aborted before physical manifestations begin. In this chapter we discuss the algorithms that make these devices possible and how they have been implemented to date. We also compare and contrast these measures, and review their individual strengths and weaknesses. Finally, we illustrate how these techniques can be combined in a closed-loop seizure prevention system. PMID:22078526
Accurately predicting copper interconnect topographies in foundry design for manufacturability flows
NASA Astrophysics Data System (ADS)
Lu, Daniel; Fan, Zhong; Tak, Ki Duk; Chang, Li-Fu; Zou, Elain; Jiang, Jenny; Yang, Josh; Zhuang, Linda; Chen, Kuang Han; Hurat, Philippe; Ding, Hua
2011-04-01
This paper presents a model-based Chemical Mechanical Polishing (CMP) Design for Manufacturability (DFM) methodology that includes an accurate prediction of post-CMP copper interconnect topographies at the advanced process technology nodes. Using procedures of extensive model calibration and validation, the CMP process model accurately predicts post-CMP dimensions, such as erosion, dishing, and copper thickness, with excellent correlation to silicon measurements. This methodology provides an efficient DFM flow to detect and fix physical manufacturing hotspots related to copper pooling and Depth of Focus (DOF) failures at both block- and full-chip-level designs. Moreover, the predicted thickness output is used in the CMP-aware RC extraction and timing analysis flows for better understanding of performance yield and timing impact. In addition, the CMP model can be applied to the verification of model-based dummy fill flows.
Cluster abundance in chameleon f(R) gravity I: toward an accurate halo mass function prediction
NASA Astrophysics Data System (ADS)
Cataneo, Matteo; Rapetti, David; Lombriser, Lucas; Li, Baojiu
2016-12-01
We refine the mass and environment dependent spherical collapse model of chameleon f(R) gravity by calibrating a phenomenological correction inspired by the parameterized post-Friedmann framework against high-resolution N-body simulations. We employ our method to predict the corresponding modified halo mass function, and provide fitting formulas to calculate the enhancement of the f(R) halo abundance with respect to that of General Relativity (GR) within a precision of ≲5% from the results obtained in the simulations. Similar accuracy can be achieved for the full f(R) mass function on the condition that the modeling of the reference GR abundance of halos is accurate at the percent level. We use our fits to forecast constraints on the additional scalar degree of freedom of the theory, finding that upper bounds competitive with current Solar System tests are within reach of cluster number count analyses from ongoing and upcoming surveys at much larger scales. Importantly, the flexibility of our method allows also for this to be applied to other scalar-tensor theories characterized by a mass and environment dependent spherical collapse.
Cas9-chromatin binding information enables more accurate CRISPR off-target prediction
Singh, Ritambhara; Kuscu, Cem; Quinlan, Aaron; Qi, Yanjun; Adli, Mazhar
2015-01-01
The CRISPR system has become a powerful biological tool with a wide range of applications. However, improving targeting specificity and accurately predicting potential off-targets remains a significant goal. Here, we introduce a web-based CRISPR/Cas9 Off-target Prediction and Identification Tool (CROP-IT) that performs improved off-target binding and cleavage site predictions. Unlike existing prediction programs that use only DNA sequence information, CROP-IT integrates whole-genome-level biological information from existing Cas9 binding and cleavage data sets. Utilizing whole-genome chromatin state information from 125 human cell types further enhances its computational prediction power. Comparative analyses on experimentally validated datasets show that CROP-IT outperforms existing computational algorithms in predicting both Cas9 binding as well as cleavage sites. With a user-friendly web interface, CROP-IT outputs a scored and ranked list of potential off-targets, enabling improved guide RNA design and more accurate prediction of Cas9 binding or cleavage sites. PMID:26032770
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter, constrained by the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments were performed in a realistic scenario where input data were acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the prediction horizon in the development of prediction algorithms for diseases with symptomatic crises.
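The trade-off between horizon and accuracy suggests a simple selection rule, sketched here with hypothetical validation errors (assuming, as the abstract implies, that error grows with the horizon):

```python
def max_usable_horizon(errors_by_horizon, tolerance):
    """Largest prediction horizon (in minutes) whose validation error
    stays within tolerance. Assumes error grows monotonically with the
    horizon, so the scan keeps the last horizon that still qualifies."""
    best = 0
    for horizon, err in sorted(errors_by_horizon.items()):
        if err <= tolerance:
            best = horizon
    return best

# Hypothetical validation errors for models trained at several horizons.
errors = {10: 0.08, 20: 0.11, 30: 0.16, 40: 0.19, 50: 0.31}
print(max_usable_horizon(errors, tolerance=0.20))  # 40
```

With these made-up numbers the rule lands on a 40 min horizon, which happens to match the scale the study reports as compatible with drug pharmacokinetics.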
Method and apparatus for accurately manipulating an object during microelectrophoresis
Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.
1997-01-01
An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.
Victora, Andrea; Möller, Heiko M.; Exner, Thomas E.
2014-01-01
NMR chemical shift predictions based on empirical methods are nowadays indispensable tools during resonance assignment and 3D structure calculation of proteins. However, owing to the very limited statistical data basis, such methods are still in their infancy in the field of nucleic acids, especially when non-canonical structures and nucleic acid complexes are considered. Here, we present an ab initio approach for predicting proton chemical shifts of arbitrary nucleic acid structures based on state-of-the-art fragment-based quantum chemical calculations. We tested our prediction method on a diverse set of nucleic acid structures including double-stranded DNA, hairpins, DNA/protein complexes and chemically-modified DNA. Overall, our quantum chemical calculations yield highly accurate predictions with mean absolute deviations of 0.3–0.6 ppm and correlation coefficients (r2) usually above 0.9. This will allow for identifying misassignments and validating 3D structures. Furthermore, our calculations reveal that chemical shifts of protons involved in hydrogen bonding are predicted significantly less accurately. This is in part caused by insufficient inclusion of solvation effects. However, it also points toward shortcomings of current force fields used for structure determination of nucleic acids. Our quantum chemical calculations could therefore provide input for force field optimization. PMID:25404135
Accurate prediction of band gaps and optical properties of HfO2
NASA Astrophysics Data System (ADS)
Ondračka, Pavel; Holec, David; Nečas, David; Zajíčková, Lenka
2016-10-01
We report on optical properties of various polymorphs of hafnia predicted within the framework of density functional theory. The full potential linearised augmented plane wave method was employed together with the Tran-Blaha modified Becke-Johnson potential (TB-mBJ) for exchange and local density approximation for correlation. Unit cells of monoclinic, cubic and tetragonal crystalline, and a simulated annealing-based model of amorphous hafnia were fully relaxed with respect to internal positions and lattice parameters. Electronic structures and band gaps for monoclinic, cubic, tetragonal and amorphous hafnia were calculated using three different TB-mBJ parametrisations and the results were critically compared with the available experimental and theoretical reports. Conceptual differences between a straightforward comparison of experimental measurements to a calculated band gap on the one hand and to a whole electronic structure (density of electronic states) on the other hand were pointed out, suggesting the latter should be used whenever possible. Finally, dielectric functions were calculated at two levels, using the random phase approximation without local field effects and with a more accurate Bethe-Salpeter equation (BSE) to account for excitonic effects. We conclude that a satisfactory agreement with experimental data for HfO2 was obtained only in the latter case.
Wallace, Jason A; Wang, Yuhang; Shi, Chuanyin; Pastoor, Kevin J; Nguyen, Bao-Linh; Xia, Kai; Shen, Jana K
2011-12-01
Proton uptake or release controls many important biological processes, such as energy transduction, virus replication, and catalysis. Accurate pK(a) prediction informs about proton pathways, thereby revealing detailed acid-base mechanisms. Physics-based methods in the framework of molecular dynamics simulations not only offer pK(a) predictions but also inform about the physical origins of pK(a) shifts and provide details of ionization-induced conformational relaxation and large-scale transitions. One such method is the recently developed continuous constant pH molecular dynamics (CPHMD) method, which has been shown to be an accurate and robust pK(a) prediction tool for naturally occurring titratable residues. To further examine the accuracy and limitations of CPHMD, we blindly predicted the pK(a) values for 87 titratable residues introduced in various hydrophobic regions of staphylococcal nuclease and variants. The predictions gave a root-mean-square deviation of 1.69 pK units from experiment, and there were only two pK(a)'s with errors greater than 3.5 pK units. Analysis of the conformational fluctuation of titrating side-chains in the context of the errors of calculated pK(a) values indicates that explicit treatment of conformational flexibility and the associated dielectric relaxation gives CPHMD a distinct advantage. Analysis of the sources of errors suggests that more accurate pK(a) predictions can be obtained for the most deeply buried residues by improving the accuracy in calculating desolvation energies. Furthermore, it is found that the generalized Born implicit-solvent model underlying the current CPHMD implementation slightly distorts the local conformational environment such that the inclusion of an explicit-solvent representation may offer improvement of accuracy.
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
Wong, Florence; O’Leary, Jacqueline G; Reddy, K Rajender; Patton, Heather; Kamath, Patrick S; Fallon, Michael B; Garcia-Tsao, Guadalupe; Subramanian, Ram M.; Malik, Raza; Maliakkal, Benedict; Thacker, Leroy R; Bajaj, Jasmohan S
2015-01-01
Background & Aims A consensus conference proposed that cirrhosis-associated acute kidney injury (AKI) be defined as an increase in serum creatinine by >50% from the stable baseline value in <6 months or by ≥0.3mg/dL in <48 hrs. We prospectively evaluated the ability of these criteria to predict mortality within 30 days among hospitalized patients with cirrhosis and infection. Methods 337 patients with cirrhosis who were admitted with an infection or developed one in hospital (56% men; 56±10 y old; model for end-stage liver disease score, 20±8) were followed. We compared data on 30-day mortality, hospital length-of-stay, and organ failure between patients with and without AKI. Results 166 (49%) developed AKI during hospitalization, based on the consensus criteria. Patients who developed AKI had higher admission Child-Pugh (11.0±2.1 vs 9.6±2.1; P<.0001) and MELD scores (23±8 vs 17±7; P<.0001), and lower mean arterial pressure (81±16mmHg vs 85±15mmHg; P<.01), than those who did not. Also higher amongst patients with AKI were mortality in ≤30 days (34% vs 7%), intensive care unit transfer (46% vs 20%), ventilation requirement (27% vs 6%), and shock (31% vs 8%); AKI patients also had longer hospital stays (17.8±19.8 days vs 13.3±31.8 days) (all P<.001). 56% of AKI episodes were transient, 28% persistent, and 16% resulted in dialysis. Mortality was 80% among those without renal recovery, higher compared to partial (40%) or complete recovery (15%), or AKI-free patients (7%; P<.0001). Conclusions 30-day mortality is 10-fold higher among infected hospitalized cirrhotic patients with irreversible AKI than those without AKI. The consensus definition of AKI accurately predicts 30-day mortality, length of hospital stay, and organ failure. PMID:23999172
Accurate prediction of V1 location from cortical folds in a surface coordinate system
Hinds, Oliver P.; Rajendran, Niranjini; Polimeni, Jonathan R.; Augustinack, Jean C.; Wiggins, Graham; Wald, Lawrence L.; Rosas, H. Diana; Potthast, Andreas; Schwartz, Eric L.; Fischl, Bruce
2008-01-01
Previous studies demonstrated substantial variability of the location of primary visual cortex (V1) in stereotaxic coordinates when linear volume-based registration is used to match volumetric image intensities (Amunts et al., 2000). However, other qualitative reports of V1 location (Smith, 1904; Stensaas et al., 1974; Rademacher et al., 1993) suggested a consistent relationship between V1 and the surrounding cortical folds. Here, the relationship between folds and the location of V1 is quantified using surface-based analysis to generate a probabilistic atlas of human V1. High-resolution (about 200 μm) magnetic resonance imaging (MRI) at 7 T of ex vivo human cerebral hemispheres allowed identification of the full area via the stria of Gennari: a myeloarchitectonic feature specific to V1. Separate, whole-brain scans were acquired using MRI at 1.5 T to allow segmentation and mesh reconstruction of the cortical gray matter. For each individual, V1 was manually identified in the high-resolution volume and projected onto the cortical surface. Surface-based intersubject registration (Fischl et al., 1999b) was performed to align the primary cortical folds of individual hemispheres to those of a reference template representing the average folding pattern. An atlas of V1 location was constructed by computing the probability of V1 inclusion for each cortical location in the template space. This probabilistic atlas of V1 exhibits low prediction error compared to previous V1 probabilistic atlases built in volumetric coordinates. The increased predictability observed under surface-based registration suggests that the location of V1 is more accurately predicted by the cortical folds than by the shape of the brain embedded in the volume of the skull. In addition, the high quality of this atlas provides direct evidence that surface-based intersubject registration methods are superior to volume-based methods at superimposing functional areas of cortex, and therefore are better
Hash: a Program to Accurately Predict Protein Hα Shifts from Neighboring Backbone Shifts
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
2012-01-01
Chemical shifts provide not only peak identities for analyzing NMR data, but also an important source of conformational information for studying protein structures. Current structural studies requiring Hα chemical shifts suffer from the following limitations. (1) For large proteins, the Hα chemical shifts can be difficult to assign using conventional NMR triple-resonance experiments, mainly due to the fast transverse relaxation rate of Cα that restricts the signal sensitivity. (2) Previous chemical shift prediction approaches either require homologous models with high sequence similarity or rely heavily on accurate backbone and side-chain structural coordinates. When neither sequence homologues nor structural coordinates are available, we must resort to other information to predict Hα chemical shifts. Predicting accurate Hα chemical shifts using other obtainable information, such as the chemical shifts of nearby backbone atoms (i.e., adjacent atoms in the sequence), can remedy the above dilemmas, and hence advance NMR-based structural studies of proteins. By specifically exploiting the dependencies on chemical shifts of nearby backbone atoms, we propose a novel machine learning algorithm, called Hash, to predict Hα chemical shifts. Hash combines a new fragment-based chemical shift search approach with a non-parametric regression model, called the generalized additive model, to effectively solve the prediction problem. We demonstrate that the chemical shifts of nearby backbone atoms provide a reliable source of information for predicting accurate Hα chemical shifts. Our testing results on different possible combinations of input data indicate that Hash has a wide range of potential NMR applications in structural and biological studies of proteins. PMID:23242797
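The abstract above predicts Hα shifts from the shifts of nearby backbone atoms via a fragment search plus a generalized additive model. As a simplified stand-in, a plain linear least-squares fit on synthetic neighbor-shift features illustrates the regression step (all feature names, coefficients, and data below are synthetic illustrations, not the paper's model):

```python
import numpy as np

# Synthetic stand-in for "neighboring backbone shift" features:
# the real Hash method uses fragment search + a generalized additive
# model; here a linear model with intercept is fit for illustration.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))               # hypothetical neighbor-shift features
w_true = np.array([0.4, -0.2, 0.1])       # hypothetical dependence
y = X @ w_true + 4.3 + 0.05 * rng.normal(size=n)  # Halpha near ~4.3 ppm

A = np.hstack([X, np.ones((n, 1))])       # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
mae = np.mean(np.abs(pred - y))           # mean absolute deviation, in ppm
print(mae < 0.1)
```

The fitted mean absolute deviation is set by the injected noise level; real predictors are judged by the same kind of MAE/correlation statistics quoted in these abstracts.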
Can phenological models predict tree phenology accurately under climate change conditions?
NASA Astrophysics Data System (ADS)
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2014-05-01
The onset of the growing season of trees has been globally earlier by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions in future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. An accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
NASA Astrophysics Data System (ADS)
Jin, Xuhon; Huang, Fei; Hu, Pengju; Cheng, Xiaoli
2016-11-01
A fundamental prerequisite for satellites operating in a Low Earth Orbit (LEO) is the availability of fast and accurate prediction of non-gravitational aerodynamic forces, which are characterised by the free molecular flow regime. However, conventional computational approaches either fail to deal with flow shadowing and multiple reflections, as with the analytical integral method, or are computationally expensive, as with the direct simulation Monte Carlo (DSMC) technique. This work develops a general computer program for the accurate calculation of aerodynamic forces in the free molecular flow regime using the test particle Monte Carlo (TPMC) method, and the non-gravitational aerodynamic forces acting on the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite are calculated for different freestream conditions and gas-surface interaction models.
Accurate similarity index based on activity and connectivity of node for link prediction
NASA Astrophysics Data System (ADS)
Li, Longjie; Qian, Lvjian; Wang, Xiaoping; Luo, Shishun; Chen, Xiaoyun
2015-05-01
Recent years have witnessed an increase in available network data; however, much of that data is incomplete. Link prediction, which can find the missing links of a network, plays an important role in the research and analysis of complex networks. Based on the assumption that two unconnected nodes which are highly similar are very likely to have an interaction, most of the existing algorithms solve the link prediction problem by computing nodes' similarities. The fundamental requirement of those algorithms is accurate and effective similarity indices. In this paper, we propose a new similarity index, namely similarity based on activity and connectivity (SAC), which performs link prediction more accurately. To compute the similarity between two nodes, this index employs the average activity of these two nodes in their common neighborhood and the connectivities between them and their common neighbors. The higher the average activity is and the stronger the connectivities are, the more similar the two nodes are. The proposed index not only commendably distinguishes the contributions of paths but also incorporates the influence of endpoints. Therefore, it can achieve a better predicting result. To verify the performance of SAC, we conduct experiments on 10 real-world networks. Experimental results demonstrate that SAC outperforms the compared baselines.
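The common-neighborhood idea behind such similarity indices can be sketched in a few lines. The exact SAC formula is not given in the abstract, so the weighting below (endpoint degrees as "activity", inverse common-neighbor degree as "connectivity") is a hypothetical illustration in the same spirit:

```python
from collections import defaultdict

def degree(adj, n):
    return len(adj[n])

def similarity(adj, x, y):
    """Score an unconnected pair (x, y) by their common neighborhood.
    Hypothetical weighting: endpoint activity over neighbor degree."""
    score = 0.0
    for z in adj[x] & adj[y]:
        score += (degree(adj, x) + degree(adj, y)) / (2.0 * degree(adj, z))
    return score

# Toy undirected graph, edges as pairs
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (2, 4)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Rank all candidate non-edges by similarity; top pair is the prediction
nodes = sorted(adj)
candidates = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
              if v not in adj[u]]
ranked = sorted(candidates, key=lambda p: similarity(adj, *p), reverse=True)
print(ranked[0])
```

In benchmark use, the graph's edges are split into training and probe sets and indices are compared by how highly they rank the held-out probe links.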
Method accurately measures mean particle diameters of monodisperse polystyrene latexes
NASA Technical Reports Server (NTRS)
Kubitschek, H. E.
1967-01-01
Photomicrographic method determines mean particle diameters of monodisperse polystyrene latexes. Many diameters are measured simultaneously by measuring row lengths of particles in a triangular array at a glass-oil interface. The method provides size standards for electronic particle counters and prevents distortions, softening, and flattening.
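The row-length idea reduces to simple arithmetic: for N touching monodisperse spheres in a close-packed row, the mean diameter is the measured row length divided by N, which averages out per-particle measurement error. A minimal sketch (the row measurements below are hypothetical):

```python
# Mean particle diameter from row lengths of touching monodisperse
# spheres: one length measurement per row yields N diameters at once.
def mean_diameter(row_length_um, n_particles):
    return row_length_um / n_particles

# Hypothetical rows measured from a photomicrograph: (length in um, count)
rows = [(91.2, 100), (45.7, 50), (183.0, 200)]

# Count-weighted average over all rows
total_length = sum(L for L, _ in rows)
total_count = sum(n for _, n in rows)
estimate = total_length / total_count
print(round(estimate, 4))
```

Weighting by particle count is equivalent to pooling all rows into one long row, so longer rows (more particles) contribute proportionally more.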
Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.
2014-01-28
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
NASA Astrophysics Data System (ADS)
Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.
2014-01-01
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
Express method of construction of accurate inverse pole figures
NASA Astrophysics Data System (ADS)
Perlovich, Yu; Isaenkova, M.; Fesenko, V.
2016-04-01
For metallic materials with FCC and BCC crystal lattices, a new method is proposed for constructing X-ray texture inverse pole figures (IPF) from the tilt curves of a spinning sample; it is characterized by high accuracy and rapidity (hence "express"). In contrast to the currently widespread approach of constructing the IPF from an orientation distribution function (ODF) synthesized from several partial direct pole figures, the proposed method is based on a simple geometrical interpretation of the measurement procedure and requires minimal operating time on the X-ray diffractometer.
Planar Near-Field Phase Retrieval Using GPUs for Accurate THz Far-Field Prediction
NASA Astrophysics Data System (ADS)
Junkin, Gary
2013-04-01
With a view to using Phase Retrieval to accurately predict Terahertz antenna far-field from near-field intensity measurements, this paper reports on three fundamental advances that achieve very low algorithmic error penalties. The first is a new Gaussian beam analysis that provides accurate initial complex aperture estimates including defocus and astigmatic phase errors, based only on first and second moment calculations. The second is a powerful noise tolerant near-field Phase Retrieval algorithm that combines Anderson's Plane-to-Plane (PTP) with Fienup's Hybrid-Input-Output (HIO) and Successive Over-Relaxation (SOR) to achieve increased accuracy at reduced scan separations. The third advance employs teraflop Graphical Processing Units (GPUs) to achieve practically real time near-field phase retrieval and to obtain the optimum aperture constraint without any a priori information.
The chain collocation method: A spectrally accurate calculus of forms
NASA Astrophysics Data System (ADS)
Rufat, Dzhelil; Mason, Gemma; Mullen, Patrick; Desbrun, Mathieu
2014-01-01
Preserving in the discrete realm the underlying geometric, topological, and algebraic structures at stake in partial differential equations has proven to be a fruitful guiding principle for numerical methods in a variety of fields such as elasticity, electromagnetism, or fluid mechanics. However, structure-preserving methods have traditionally used spaces of piecewise polynomial basis functions for differential forms. Yet, in many problems where solutions are smoothly varying in space, a spectral numerical treatment is called for. In an effort to provide structure-preserving numerical tools with spectral accuracy on logically rectangular grids over periodic or bounded domains, we present a spectral extension of the discrete exterior calculus (DEC), with resulting computational tools extending well-known collocation-based spectral methods. Its efficient implementation using fast Fourier transforms is provided as well.
An accurate fuzzy edge detection method using wavelet details subimages
NASA Astrophysics Data System (ADS)
Sedaghat, Nafiseh; Pourreza, Hamidreza
2010-02-01
Edge detection is a basic and important subject in computer vision and image processing. An edge detector is defined as a mathematical operator of small spatial extent that responds in some way to these discontinuities, usually classifying every image pixel as either belonging to an edge or not. Much effort has been spent attempting to develop effective edge detection algorithms. Despite this extensive research, the task of finding the edges that correspond to true physical boundaries remains a difficult problem. Edge detection algorithms based on the application of human knowledge show their flexibility and suggest that the use of human knowledge is a reasonable alternative. In this paper we propose a fuzzy inference system with two inputs: gradient and wavelet details. The first input is calculated by the Sobel operator; the second is obtained by taking the wavelet transform of the input image and then reconstructing the image from the detail subimages only via the inverse wavelet transform. There are many fuzzy edge detection methods, but none of them utilizes the wavelet transform as it is used in this paper. To evaluate our method, we detect edges of images with different brightness characteristics and compare the results with the Canny edge detector. The results show the high performance of our method in finding true edges.
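A minimal sketch of the gradient input and a fuzzification step is below. It assumes a piecewise-linear membership function; the paper's second input (wavelet detail subimages) and its full inference rule base are omitted, so this is only the Sobel half of the described system:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the Sobel operator (interior pixels only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

def edge_membership(grad, lo, hi):
    """Piecewise-linear fuzzy membership: 0 below lo, 1 above hi."""
    return np.clip((grad - lo) / (hi - lo), 0.0, 1.0)

# Toy image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0
grad = sobel_magnitude(img)
mu = edge_membership(grad, lo=1.0, hi=3.0)   # lo/hi thresholds are assumed
edges = mu > 0.5                             # defuzzify at 0.5
print(edges.any(), edges.all())
```

In a full fuzzy system the membership values from both inputs feed a rule base (e.g. "IF gradient is high AND wavelet detail is high THEN edge"), and defuzzification replaces the fixed threshold.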
Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias
2015-01-01
Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact in the life of affected individuals and impose a major economic burden to societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals in the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital severe allergic reactions visits. Our approach may contribute towards the development of satellite-based modules, for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment related diseases and conditions. PMID:25794106
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1988-01-01
This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with the classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and superior capability to capture the thermal stress waves induced due to boundary heating.
The MIDAS touch for Accurately Predicting the Stress-Strain Behavior of Tantalum
Jorgensen, S.
2016-03-02
Testing the behavior of metals in extreme environments is not always feasible, so materials scientists use models to predict it. To achieve accurate results it is necessary to use the appropriate model and material-specific parameters. This research evaluated the performance of six material models available in the MIDAS database [1] to determine at which temperatures and strain rates they perform best, and to which experimental data their parameters were optimized. Additionally, parameters were optimized for the Johnson-Cook model using experimental data from Lassila et al. [2].
Mapping methods for computationally efficient and accurate structural reliability
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1991-01-01
The influence of mesh coarseness in the structural reliability is evaluated. The objectives are to describe the alternatives and to demonstrate their effectiveness. The results show that special mapping methods can be developed by using: (1) deterministic structural responses from a fine (convergent) finite element mesh; (2) probabilistic distributions of structural responses from a coarse finite element mesh; (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes; and (4) probabilistic mapping. The structural responses from different finite element meshes are highly correlated.
Pendant bubble method for an accurate characterization of superhydrophobic surfaces.
Ling, William Yeong Liang; Ng, Tuck Wah; Neild, Adrian
2011-12-06
The commonly used sessile drop method for measuring contact angles and surface tension suffers from errors on superhydrophobic surfaces. This occurs from unavoidable experimental error in determining the vertical location of the liquid-solid-vapor interface due to a camera's finite pixel resolution, thereby necessitating the development and application of subpixel algorithms. We demonstrate here the advantage of a pendant bubble in decreasing the resulting error prior to the application of additional algorithms. For sessile drops to attain an equivalent accuracy, the pixel count would have to be increased by 2 orders of magnitude.
Accurate prediction of human drug toxicity: a major challenge in drug development.
Li, Albert P
2004-11-01
Over the past decades, a number of drugs have been withdrawn or have required special labeling due to adverse effects observed post-marketing. Species differences in drug toxicity in preclinical safety tests and the lack of sensitive biomarkers and nonrepresentative patient population in clinical trials are probable reasons for the failures in predicting human drug toxicity. It is proposed that toxicology should evolve from an empirical practice to an investigative discipline. Accurate prediction of human drug toxicity requires resources and time to be spent in clearly defining key toxic pathways and corresponding risk factors, which hopefully, will be compensated by the benefits of a lower percentage of clinical failure due to toxicity and a decreased frequency of market withdrawal due to unacceptable adverse drug effects.
Individualizing amikacin regimens: accurate method to achieve therapeutic concentrations.
Zaske, D E; Cipolle, R J; Rotschafer, J C; Kohls, P R; Strate, R G
1991-11-01
Amikacin's pharmacokinetics and dosage requirements were studied in 98 patients receiving treatment for gram-negative infections. A wide interpatient variation in the kinetic parameters of the drug occurred in all patients and in patients who had normal serum creatinine levels or normal creatinine clearance. The half-life ranged from 0.7 to 14.4 h in 74 patients who had normal serum creatinine levels and from 0.7 to 7.2 h in 37 patients who had normal creatinine clearance. The necessary daily dose to obtain therapeutic serum concentrations ranged from 1.25 to 57 mg/kg in patients with normal serum creatinine levels and from 10 to 57 mg/kg in patients with normal creatinine clearance. In four patients (4%), a significant change in baseline serum creatinine level (greater than 0.5 mg/dl) occurred during or after treatment, which may have been amikacin-associated toxicity. Overt ototoxicity occurred in one patient. The method of individualizing dosage regimens provided a clinically useful means of rapidly attaining therapeutic peak and trough serum concentrations.
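The pharmacokinetic basis of such individualization can be sketched with a one-compartment, first-order elimination model: two timed serum levels give the elimination rate constant and half-life, and a volume of distribution gives the dose for a target peak. This is a didactic illustration with hypothetical numbers, not the study's actual individualization procedure and not clinical guidance:

```python
import math

def elimination_rate(c_peak, c_trough, tau_hours):
    """First-order elimination constant k from two timed serum levels."""
    return math.log(c_peak / c_trough) / tau_hours

def half_life(k):
    """Elimination half-life: t_1/2 = ln(2) / k."""
    return math.log(2) / k

def dose_for_peak(target_peak, vd_l_per_kg, weight_kg):
    """Dose (mg) for a target peak (mg/L) in a one-compartment model."""
    return target_peak * vd_l_per_kg * weight_kg

# Hypothetical levels (mg/L) measured 6 h apart
k = elimination_rate(c_peak=24.0, c_trough=6.0, tau_hours=6.0)
t_half = half_life(k)
# Hypothetical target peak and volume of distribution
dose = dose_for_peak(target_peak=25.0, vd_l_per_kg=0.25, weight_kg=70.0)
print(round(t_half, 1), round(dose))
```

The wide half-life range reported in the abstract (0.7 to 14.4 h) is exactly why a fixed mg/kg regimen fails: the same calculation with patient-specific levels yields patient-specific doses and dosing intervals.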
IDSite: An accurate approach to predict P450-mediated drug metabolism
Li, Jianing; Schneebeli, Severin T.; Bylund, Joseph; Farid, Ramy; Friesner, Richard A.
2011-01-01
Accurate prediction of drug metabolism is crucial for drug design. Since the large majority of drug metabolism involves P450 enzymes, we herein describe a computational approach, IDSite, to predict P450-mediated drug metabolism. To model induced-fit effects, IDSite samples the conformational space with flexible docking in Glide followed by two refinement stages using the Protein Local Optimization Program (PLOP). Sites of metabolism (SOMs) are predicted according to a physical-based score that evaluates the potential of atoms to react with the catalytic iron center. As a preliminary test, we present in this paper the prediction of hydroxylation and O-dealkylation sites mediated by CYP2D6 using two different models: a physical-based simulation model, and a modification of this model in which a small number of parameters are fit to a training set. Without fitting any parameters to experimental data, the Physical IDSite scoring recovers 83% of the experimental observations for 56 compounds with a very low false positive rate. With only 4 fitted parameters, the Fitted IDSite was trained with the subset of 36 compounds and successfully applied to the other 20 compounds, recovering 94% of the experimental observations with high sensitivity and specificity for both sets. PMID:22247702
Mass spectrometry methods for predicting antibiotic resistance.
Charretier, Yannick; Schrenzel, Jacques
2016-10-01
Developing elaborate techniques for clinical applications can be a complicated process. Whole-cell MALDI-TOF MS revolutionized reliable microorganism identification in clinical microbiology laboratories and is now replacing phenotypic microbial identification. This technique is a generic, accurate, rapid, and cost-effective growth-based method. Antibiotic resistance keeps emerging in environmental and clinical microorganisms, leading to clinical therapeutic challenges, especially for Gram-negative bacteria. Antimicrobial susceptibility testing is used to reliably predict antimicrobial success in treating infection, but it is inherently limited by the need to isolate and grow cultures, delaying the application of appropriate therapies. Antibiotic resistance prediction by growth-independent methods is expected to reduce the turnaround time. Recently, the potential of next-generation sequencing and microarrays in predicting microbial resistance has been demonstrated, and this review evaluates the potential of MS in this field. First, technological advances are described, and the possibility of predicting antibiotic resistance by MS is then illustrated for three prototypical human pathogens: Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa. Clearly, MS methods can identify antimicrobial resistance mediated by horizontal gene transfers or by mutations that affect the quantity of a gene product, whereas antimicrobial resistance mediated by target mutations remains difficult to detect.
Predicting abrasive wear with coupled Lagrangian methods
NASA Astrophysics Data System (ADS)
Beck, Florian; Eberhard, Peter
2015-05-01
In this paper, a mesh-less approach for the simulation of a fluid with particle loading and the prediction of abrasive wear is presented. We use the smoothed particle hydrodynamics (SPH) method for modeling the fluid and the discrete element method (DEM) for the solid particles, which represent the loading of the fluid. These Lagrangian methods accurately describe heavily sloshing fluids with their free surfaces as well as the interface between the fluid and the solid particles. A Reynolds-averaged Navier-Stokes model is applied for handling turbulence. We predict abrasive wear on the boundary geometry with two different wear models that take cutting and deformation mechanisms into account. The boundary geometry is discretized with special DEM particles. In doing so, it is possible to use the same particle type both for calculating the boundary conditions for the SPH method and the DEM and for predicting the abrasive wear. After a brief introduction to the SPH method and the DEM, the handling of the boundary and the coupling of the fluid and the solid particles are discussed. Then, the applied wear models are presented and the simulation scenarios are described. The first numerical experiment is the simulation of a particle-laden fluid sloshing inside a tank. The second is the simulation of the impact of a particle-laden free jet onto a simplified Pelton bucket. We especially investigate the wear patterns inside the tank and the bucket.
Intermolecular potentials and the accurate prediction of the thermodynamic properties of water
NASA Astrophysics Data System (ADS)
Shvab, I.; Sadus, Richard J.
2013-11-01
The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm3 for a wide range of temperatures (298-650 K) and pressures (0.1-700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experimental data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.
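For context, several of the listed properties are extracted from molecular dynamics output via standard statistical-mechanical fluctuation formulas rather than measured directly. A minimal sketch for the isothermal compressibility from NPT volume fluctuations, using synthetic volume samples (not data from the study), follows:

```python
import statistics

K_B = 1.380649e-23  # Boltzmann constant, J/K

def isothermal_compressibility(volumes_m3, temperature_k):
    """Standard NPT fluctuation formula:
    kappa_T = (<V^2> - <V>^2) / (k_B * T * <V>)   [units: 1/Pa]"""
    mean_v = statistics.fmean(volumes_m3)
    var_v = statistics.pvariance(volumes_m3, mu=mean_v)
    return var_v / (K_B * temperature_k * mean_v)

# Synthetic volume trajectory (m^3) for a box of ~1000 water molecules
vols = [3.00e-26, 3.01e-26, 2.99e-26, 3.02e-26, 2.98e-26]
kappa = isothermal_compressibility(vols, 298.0)
```

Heat capacities and the thermal expansion coefficient follow from analogous enthalpy and volume-enthalpy fluctuation expressions.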
Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref. 1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because the code's underlying micromechanics model, GMC, is multiaxial and allows the incorporation of complex local inelastic constitutive models, MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can be, and have been, built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.
Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R
2017-02-14
Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.
Extracting accurate strain measurements in bone mechanics: A critical review of current methods.
Grassi, Lorenzo; Isaksson, Hanna
2015-10-01
Osteoporosis related fractures are a social burden that advocates for more accurate fracture prediction methods. Mechanistic methods, e.g. finite element models, have been proposed as a tool to better predict bone mechanical behaviour and strength. However, there is little consensus about the optimal constitutive law to describe bone as a material. Extracting reliable and relevant strain data from experimental tests is of fundamental importance to better understand bone mechanical properties, and to validate numerical models. Several techniques have been used to measure strain in experimental mechanics, with substantial differences in terms of accuracy, precision, time- and length-scale. Each technique presents upsides and downsides that must be carefully evaluated when designing the experiment. Moreover, additional complexities are often encountered when applying such strain measurement techniques to bone, due to its complex composite structure. This review of literature examined the four most commonly adopted methods for strain measurements (strain gauges, fibre Bragg grating sensors, digital image correlation, and digital volume correlation), with a focus on studies with bone as a substrate material, at the organ and tissue level. For each of them the working principles, a summary of the main applications to bone mechanics at the organ- and tissue-level, and a list of pros and cons are provided.
Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain
Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul
2010-05-14
Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W/m², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W/m², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.
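The core of any such metric is an irradiance-weighted average of spectral reflectance. A minimal sketch follows; the toy spectra are invented for illustration (a real calculation would use tabulated standard irradiance data such as the AM1GH spectrum):

```python
def solar_weighted_reflectance(wavelengths_nm, reflectance, irradiance):
    """Irradiance-weighted solar reflectance:
    R = integral(r * i dlambda) / integral(i dlambda), trapezoidal rule."""
    def trapz(y):
        return sum((y[j] + y[j + 1]) * (wavelengths_nm[j + 1] - wavelengths_nm[j]) / 2
                   for j in range(len(y) - 1))
    weighted = [r * i for r, i in zip(reflectance, irradiance)]
    return trapz(weighted) / trapz(irradiance)

# Toy spectrum: a "cool colored" surface, dark in the visible but
# reflective in the near infrared (all values schematic)
lam = [400, 700, 1000, 1300]      # wavelength, nm
refl = [0.10, 0.15, 0.60, 0.65]   # spectral reflectance
irr = [1.2, 1.4, 0.7, 0.4]        # spectral irradiance, W m^-2 nm^-1
R = solar_weighted_reflectance(lam, refl, irr)
```

A NIR-heavy irradiance weighting (as in the E891 beam-normal spectrum) would raise R for this surface, which is exactly how the biased metric understates heat gain.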
Accurate prediction of wall shear stress in a stented artery: newtonian versus non-newtonian models.
Mejia, Juan; Mongrain, Rosaire; Bertrand, Olivier F
2011-07-01
A significant amount of evidence linking wall shear stress to neointimal hyperplasia has been reported in the literature. As a result, numerical and experimental models have been created to study the influence of stent design on wall shear stress. Traditionally, blood has been assumed to behave as a Newtonian fluid, but recently that assumption has been challenged. The use of a linear model, however, can reduce computational cost, and allow the use of Newtonian fluids (e.g., glycerine and water) instead of a blood analog fluid in an experimental setup. Therefore, it is of interest whether a linear model can be used to accurately predict the wall shear stress caused by a non-Newtonian fluid such as blood within a stented arterial segment. The present work compares the resulting wall shear stress obtained using two linear and one nonlinear model under the same flow waveform. All numerical models are fully three-dimensional, transient, and incorporate a realistic stent geometry. It is shown that traditional linear models (based on blood's lowest viscosity limit, 3.5 mPa·s) underestimate the wall shear stress within a stented arterial segment, which can lead to an overestimation of the risk of restenosis. The second linear model, which uses a characteristic viscosity (based on an average strain rate, 4.7 mPa·s), results in higher wall shear stress levels, but which are still substantially below those of the nonlinear model. It is therefore shown that nonlinear models result in more accurate predictions of wall shear stress within a stented arterial segment.
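The gap between the linear and nonlinear descriptions can be illustrated with a shear-dependent viscosity. The sketch below uses a Carreau fit with commonly quoted blood parameters; this is an assumption for illustration, since the abstract does not name the specific non-Newtonian model used:

```python
def carreau_viscosity(gamma_dot, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau viscosity (Pa*s) at shear rate gamma_dot (1/s):
    mu = mu_inf + (mu0 - mu_inf) * (1 + (lam*gamma_dot)^2)^((n-1)/2).
    Parameter values are commonly quoted Carreau fits for blood (assumed)."""
    return mu_inf + (mu0 - mu_inf) * (1 + (lam * gamma_dot) ** 2) ** ((n - 1) / 2)

def wall_shear_stress(gamma_dot, viscosity):
    """tau = mu * gamma_dot (Pa)."""
    return viscosity * gamma_dot

gamma = 50.0  # 1/s, a moderate near-wall shear rate
tau_newtonian = wall_shear_stress(gamma, 0.0035)                 # 3.5 mPa*s limit
tau_carreau = wall_shear_stress(gamma, carreau_viscosity(gamma))
# At low-to-moderate shear rates the Carreau viscosity exceeds the
# high-shear Newtonian limit, so tau_carreau > tau_newtonian
```

This reproduces the qualitative finding: a constant low-viscosity model systematically underpredicts wall shear stress wherever near-wall shear rates are modest, as in the recirculation zones between stent struts.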
Point-of-care cardiac troponin test accurately predicts heat stroke severity in rats.
Audet, Gerald N; Quinn, Carrie M; Leon, Lisa R
2015-11-15
Heat stroke (HS) remains a significant public health concern. Despite the substantial threat posed by HS, there is still no field or clinical test of HS severity. We suggested previously that circulating cardiac troponin (cTnI) could serve as a robust biomarker of HS severity after heating. In the present study, we hypothesized that a cTnI point-of-care test (ctPOC) could be used to predict severity and organ damage at the onset of HS. Conscious male Fischer 344 rats (n = 16), continuously monitored for heart rate (HR), blood pressure (BP), and core temperature (Tc) by radiotelemetry, were heated to a maximum Tc (Tc,Max) of 41.9 ± 0.1°C and recovered undisturbed for 24 h at an ambient temperature of 20°C. Blood samples were taken at Tc,Max and 24 h after heat via submandibular bleed and analyzed with the ctPOC test. POC cTnI band intensity was ranked on a simple four-point scale by two blinded observers and compared with cTnI levels measured by a clinical blood analyzer. Blood was also analyzed for biomarkers of systemic organ damage. HS severity, as previously defined using HR, BP, and the recovery Tc profile during heat exposure, correlated strongly with cTnI (R(2) = 0.69) at Tc,Max. POC cTnI band intensity ranking accurately predicted cTnI levels (R(2) = 0.64) and HS severity (R(2) = 0.83). Five markers of systemic organ damage also correlated with ctPOC score (albumin, alanine aminotransferase, blood urea nitrogen, cholesterol, and total bilirubin; R(2) > 0.4). This suggests that cTnI POC tests can accurately determine HS severity and could serve as simple, portable, cost-effective HS field tests.
Accurate description of the electronic structure of organic semiconductors by GW methods
NASA Astrophysics Data System (ADS)
Marom, Noa
2017-03-01
Electronic properties associated with charged excitations, such as the ionization potential (IP), the electron affinity (EA), and the energy level alignment at interfaces, are critical parameters for the performance of organic electronic devices. To computationally design organic semiconductors and functional interfaces with tailored properties for target applications it is necessary to accurately predict these properties from first principles. Many-body perturbation theory is often used for this purpose within the GW approximation, where G is the one particle Green’s function and W is the dynamically screened Coulomb interaction. Here, the formalism of GW methods at different levels of self-consistency is briefly introduced and some recent applications to organic semiconductors and interfaces are reviewed.
NESmapper: accurate prediction of leucine-rich nuclear export signals using activity-based profiles.
Kosugi, Shunichi; Yanagawa, Hiroshi; Terauchi, Ryohei; Tabata, Satoshi
2014-09-01
The nuclear export of proteins is regulated largely through the exportin/CRM1 pathway, which involves the specific recognition of leucine-rich nuclear export signals (NESs) in the cargo proteins, and modulates nuclear-cytoplasmic protein shuttling by antagonizing the nuclear import activity mediated by importins and the nuclear import signal (NLS). Although the prediction of NESs can help to define proteins that undergo regulated nuclear export, current methods of predicting NESs, including computational tools and consensus-sequence-based searches, have limited accuracy, especially in terms of their specificity. We found that each residue within an NES largely contributes independently and additively to the entire nuclear export activity. We created activity-based profiles of all classes of NESs with a comprehensive mutational analysis in mammalian cells. The profiles highlight a number of specific activity-affecting residues not only at the conserved hydrophobic positions but also in the linker and flanking regions. We then developed a computational tool, NESmapper, to predict NESs by using profiles that had been further optimized by training and combining the amino acid properties of the NES-flanking regions. This tool successfully reduced the considerable number of false positives, and the overall prediction accuracy was higher than that of other methods, including NESsential and Wregex. This profile-based prediction strategy is a reliable way to identify functional protein motifs. NESmapper is available at http://sourceforge.net/projects/nesmapper.
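The profile-based scoring idea, in which each residue contributes independently and additively to export activity, can be sketched as a sliding-window position profile. The profile values and sequence below are invented for illustration and do not reproduce NESmapper's actual trained profiles:

```python
# Hypothetical activity-based profile for a 4-position motif: each position
# maps residues to additive activity contributions (values invented)
PROFILE = [
    {"L": 1.0, "I": 0.8, "V": 0.6, "M": 0.5},
    {"L": 0.2, "A": 0.4, "S": 0.3},
    {"L": 1.0, "F": 0.7, "I": 0.6},
    {"D": 0.3, "E": 0.4, "L": 0.9},
]

def profile_score(window, profile=PROFILE, default=-0.5):
    """Additive per-position score; unlisted residues incur a penalty,
    mirroring the independent, additive residue contributions reported."""
    return sum(pos.get(res, default) for res, pos in zip(window, profile))

def best_window(sequence, profile=PROFILE):
    """Slide the profile along the sequence; return (score, start index)."""
    w = len(profile)
    scores = [(profile_score(sequence[i:i + w], profile), i)
              for i in range(len(sequence) - w + 1)]
    return max(scores)

score, start = best_window("GALALDLAKE")  # best match starts at "LALD"
```

Real NES profiles are far longer and also score linker and flanking regions, but the additive sliding-window machinery is the same.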
ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers
Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.
2009-02-01
A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.
Staranowicz, Aaron N; Ray, Christopher; Mariottini, Gian-Luca
2015-01-01
Falls are the most common cause of unintentional injury and death in older adults. Many clinics, hospitals, and health-care providers are urgently seeking accurate, low-cost, and easy-to-use technology to predict falls before they happen, e.g., by monitoring the human walking pattern (or "gait"). Despite the wide popularity of Microsoft's Kinect and the plethora of solutions for gait monitoring, no strategy has been proposed to date to allow non-expert users to calibrate the cameras, which is essential to accurately fuse the body motion observed by each camera in a single frame of reference. In this paper, we present a novel multi-Kinect calibration algorithm that has advanced features when compared to existing methods: 1) it is easy to use, 2) it can be used in any generic Kinect arrangement, and 3) it provides accurate calibration. Extensive real-world experiments have been conducted to validate our algorithm and to compare its performance against other multi-Kinect calibration approaches, especially to show the improved estimate of gait parameters. Finally, a MATLAB Toolbox has been made publicly available for the entire research community.
Energy expenditure during level human walking: seeking a simple and accurate predictive solution.
Ludlow, Lindsay W; Weyand, Peter G
2016-03-01
Accurate prediction of the metabolic energy that walking requires can inform numerous health, bodily status, and fitness outcomes. We adopted a two-step approach to identifying a concise, generalized equation for predicting level human walking metabolism. Using literature-aggregated values we compared 1) the predictive accuracy of three literature equations: American College of Sports Medicine (ACSM), Pandolf et al., and Height-Weight-Speed (HWS); and 2) the goodness-of-fit possible from one- vs. two-component descriptions of walking metabolism. Literature metabolic rate values (n = 127; speed range = 0.4 to 1.9 m/s) were aggregated from 25 subject populations (n = 5-42) whose means spanned a 1.8-fold range of heights and a 4.2-fold range of weights. Population-specific resting metabolic rates (V̇o2 rest) were determined using standardized equations. Our first finding was that the ACSM and Pandolf et al. equations underpredicted nearly all 127 literature-aggregated values. Consequently, their standard errors of estimate (SEE) were nearly four times greater than those of the HWS equation (4.51 and 4.39 vs. 1.13 ml O2·kg(-1)·min(-1), respectively). For our second comparison, empirical best-fit relationships for walking metabolism were derived from the data set in one- and two-component forms for three V̇o2-speed model types: linear (∝V(1.0)), exponential (∝V(2.0)), and exponential/height (∝V(2.0)/Ht). We found that the proportion of variance (R(2)) accounted for, when averaged across the three model types, was substantially lower for one- vs. two-component versions (0.63 ± 0.1 vs. 0.90 ± 0.03) and the predictive errors were nearly twice as great (SEE = 2.22 vs. 1.21 ml O2·kg(-1)·min(-1)). Our final analysis identified the following concise, generalized equation for predicting level human walking metabolism: V̇o2 total = V̇o2 rest + 3.85 + 5.97·V(2)/Ht (where V is measured in m/s, Ht in meters, and V̇o2 in ml O2·kg(-1)·min(-1)).
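The concluding equation can be applied directly. A minimal sketch, with a hypothetical walker (the constants are exactly those reported in the abstract):

```python
def walking_vo2_total(v_mps: float, height_m: float, vo2_rest: float) -> float:
    """Level walking metabolic rate (ml O2 kg^-1 min^-1) from the
    generalized equation in the abstract:
    VO2_total = VO2_rest + 3.85 + 5.97 * V^2 / Ht"""
    return vo2_rest + 3.85 + 5.97 * v_mps ** 2 / height_m

# Hypothetical example: 1.4 m/s walking speed, 1.75 m height,
# resting VO2 of 3.5 ml O2/kg/min
vo2 = walking_vo2_total(1.4, 1.75, 3.5)
```

Note the exponential/height form (V²/Ht): taller walkers incur a smaller speed-dependent cost per unit mass, which is what let the two-component model cut the predictive error roughly in half relative to one-component fits.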
Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J
2009-12-24
In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure.
Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay
2012-07-01
Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes such as hypofractionated breast therapy have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against doses measured by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS XiO® treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between the absorbed doses measured by TLD and those calculated by the TPS (p > 0.05, one-tailed). Dose deviations of up to 2.21% were found. The deviations from the calculated absorbed doses were overall larger (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool to assess the accuracy of surface dose. Our studies have shown that radiation treatment accuracy, expressed as a comparison between calculated doses (by TPS) and measured doses (by TLD dosimetry), can be accurately predicted for tangential treatment of the chest wall after mastectomy.
Polzer, S; Gasser, T C; Novak, K; Man, V; Tichy, M; Skacel, P; Bursa, J
2015-03-01
Structure-based constitutive models might help in exploring mechanisms by which arterial wall histology is linked to wall mechanics. This study aims to validate a recently proposed structure-based constitutive model. Specifically, the model's ability to predict mechanical biaxial response of porcine aortic tissue with predefined collagen structure was tested. Histological slices from porcine thoracic aorta wall (n=9) were automatically processed to quantify the collagen fiber organization, and mechanical testing identified the non-linear properties of the wall samples (n=18) over a wide range of biaxial stretches. Histological and mechanical experimental data were used to identify the model parameters of a recently proposed multi-scale constitutive description for arterial layers. The model predictive capability was tested with respect to interpolation and extrapolation. Collagen in the media was predominantly aligned in circumferential direction (planar von Mises distribution with concentration parameter bM=1.03 ± 0.23), and its coherence decreased gradually from the luminal to the abluminal tissue layers (inner media, b=1.54 ± 0.40; outer media, b=0.72 ± 0.20). In contrast, the collagen in the adventitia was aligned almost isotropically (bA=0.27 ± 0.11), and no features, such as families of coherent fibers, were identified. The applied constitutive model captured the aorta biaxial properties accurately (coefficient of determination R(2)=0.95 ± 0.03) over the entire range of biaxial deformations and with physically meaningful model parameters. Good predictive properties, well outside the parameter identification space, were observed (R(2)=0.92 ± 0.04). Multi-scale constitutive models equipped with realistic micro-histological data can predict macroscopic non-linear aorta wall properties. Collagen largely defines already low strain properties of media, which explains the origin of wall anisotropy seen at this strain level. The structure and mechanical
Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods
NASA Technical Reports Server (NTRS)
Atkins, Harold L.; Pampell, Alyssa
2011-01-01
A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spin-weight -2 spherical-harmonic waveform modes Y_{ℓm} resolved by the NR code up to ℓ=8. We compare our surrogate model to effective-one-body waveforms from 50 M_⊙ to 300 M_⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy and convergence rate with discretization refinement are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated selectively using linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
Bakhtiarizadeh, Mohammad Reza; Moradi-Shahrbabak, Mohammad; Ebrahimi, Mansour; Ebrahimie, Esmaeil
2014-09-07
Due to the central roles of lipid binding proteins (LBPs) in many biological processes, sequence-based identification of LBPs is of great interest. The major challenge is that LBPs are diverse in sequence, structure, and function, which results in the low accuracy of sequence-homology-based methods. Therefore, there is a need for developing alternative functional prediction methods irrespective of sequence similarity. To identify LBPs from non-LBPs, the performances of support vector machine (SVM) and neural network classifiers were compared in this study. Comprehensive protein features and various techniques were employed to create datasets. Five-fold cross-validation (CV) and independent evaluation (IE) tests were used to assess the validity of the two methods. The results indicated that SVM outperforms the neural network. SVM achieved 89.28% (CV) and 89.55% (IE) overall accuracy in identifying LBPs from non-LBPs, and 92.06% (CV) and 92.90% (IE) accuracy on average for classifying the different LBP classes. Increasing the number and range of extracted protein features, as well as optimization of the SVM parameters, significantly increased the efficiency of LBP class prediction in comparison to the only previous report in this field. Altogether, the results showed that the SVM algorithm can be run on broad, computationally calculated protein features and offers a promising tool for the detection of LBP classes. The proposed approach has the potential to integrate and improve the common sequence-alignment-based methods.
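The five-fold cross-validation protocol used to compare the two classifiers can be sketched as follows. This is a minimal numpy illustration with synthetic two-class "feature vectors" and a nearest-centroid classifier standing in for the paper's SVM; the data, classifier and dimensions are illustrative assumptions, not the study's setup.

```python
import numpy as np

# Five-fold cross-validation sketch: split indices into five disjoint folds,
# train on four, test on the held-out fold, and average the accuracies.
rng = np.random.default_rng(0)
n_per_class, n_features = 200, 10
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_features)),
               rng.normal(2.0, 1.0, (n_per_class, n_features))])
y = np.repeat([0, 1], n_per_class)

idx = rng.permutation(len(y))
folds = np.array_split(idx, 5)          # five disjoint test folds

accs = []
for k in range(5):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    # "Train": class centroids computed from the training folds only.
    c0 = X[train][y[train] == 0].mean(axis=0)
    c1 = X[train][y[train] == 1].mean(axis=0)
    # Predict by nearest centroid on the held-out fold.
    d0 = np.linalg.norm(X[test] - c0, axis=1)
    d1 = np.linalg.norm(X[test] - c1, axis=1)
    pred = (d1 < d0).astype(int)
    accs.append((pred == y[test]).mean())

cv_accuracy = float(np.mean(accs))
```

The key discipline the abstract's CV/IE distinction relies on is visible here: the centroids (model parameters) never see the held-out fold.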
Gaussian mixture models as flux prediction method for central receivers
NASA Astrophysics Data System (ADS)
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
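The Gaussian-mixture fitting step at the heart of the method can be sketched with an expectation-maximization (EM) loop. This is a hedged one-dimensional illustration on synthetic data; the paper fits mixtures to two-dimensional heliostat flux maps and adds an image-processing parameter-prediction stage that is not reproduced here.

```python
import numpy as np

# EM fit of a two-component Gaussian mixture to a 1-D "flux profile" sample.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])

mu = np.array([x.min(), x.max()])       # crude initialisation
sigma = np.array([1.0, 1.0])
weight = np.array([0.5, 0.5])

for _ in range(200):
    # E-step: responsibility of each component for each sample.
    dens = weight * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) \
           / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture weights, means and widths.
    nk = resp.sum(axis=0)
    weight = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

With enough components, the same loop can follow the multi-peaked, skewed profiles that single circular or elliptical Gaussians miss.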
A time-accurate finite volume method valid at all flow velocities
NASA Astrophysics Data System (ADS)
Kim, S.-W.
1993-07-01
A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. A comparison of three generally accepted time-advancing schemes, i.e., the Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA) schemes, is made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated results show that the ITA is the most stable numerically and yields the most accurate results. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and exhibits an undesirable strong dependence on the time-step size. The degenerated numerical results obtained using the PISO are attributed to its second corrector step, which causes the numerical results to deviate further from a divergence-free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method, which incorporates the ITA, is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on Lagrangian-Eulerian coordinates. It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil.
Hashmi, Muhammad Ali; Andreassend, Sarah K; Keyzers, Robert A; Lein, Matthias
2016-09-21
Despite advances in electronic structure theory the theoretical prediction of spectroscopic properties remains a computational challenge. This is especially true for natural products that exhibit very large conformational freedom and hence need to be sampled over many different accessible conformations. We report a strategy, which is able to predict NMR chemical shifts and more elusive properties like the optical rotation with great precision, through step-wise incremental increases of the conformational degrees of freedom. The application of this method is demonstrated for 3-epi-xestoaminol C, a chiral natural compound with a long, linear alkyl chain of 14 carbon atoms. Experimental NMR and [α]D values are reported to validate the results of the density functional theory calculations.
Can tritiated water-dilution space accurately predict total body water in chukar partridges
Crum, B.G.; Williams, J.B.; Nagy, K.A.
1985-11-01
Total body water (TBW) volumes determined from the dilution space of injected tritiated water have consistently overestimated actual water volumes (determined by desiccation to constant mass) in reptiles and mammals, but results for birds are controversial. We investigated potential errors in both the dilution method and the desiccation method in an attempt to resolve this controversy. Tritiated water dilution yielded an accurate measurement of water mass in vitro. However, in vivo, this method yielded a 4.6% overestimate of the amount of water (3.1% of live body mass) in chukar partridges, apparently largely because of loss of tritium from body water to sites of dissociable hydrogens on body solids. An additional source of overestimation (approximately 2% of body mass) was loss of tritium to the solids in blood samples during distillation of blood to obtain pure water for tritium analysis. Measuring tritium activity in plasma samples avoided this problem but required measurement of, and correction for, the dry matter content in plasma. Desiccation to constant mass by lyophilization or oven-drying also overestimated the amount of water actually in the bodies of chukar partridges by 1.4% of body mass, because these values included water adsorbed onto the outside of feathers. When desiccating defeathered carcasses, oven-drying at 70 degrees C yielded TBW values identical to those obtained from lyophilization, but TBW was overestimated (0.5% of body mass) by drying at 100 degrees C due to loss of organic substances as well as water.
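The dilution-space arithmetic underlying the study is simple: the injected tritium activity divided by the specific activity of equilibrated body water gives the apparent water volume. A sketch with illustrative numbers (not data from the paper), applying a multiplicative correction for the ~4.6% in vivo overestimate the abstract reports:

```python
# Tritiated-water dilution calculation, illustrative inputs only.
injected_activity = 5.0e6   # dpm of tritiated water injected (hypothetical)
specific_activity = 2.5e3   # dpm per mL of equilibrated body water (hypothetical)

# Dilution space: total injected activity / activity per mL at equilibrium.
dilution_space_ml = injected_activity / specific_activity

# The paper reports the dilution space overestimates true TBW by ~4.6% in vivo
# (tritium exchanging onto dissociable hydrogens of body solids), suggesting
# a simple downward correction.
corrected_tbw_ml = dilution_space_ml / 1.046
```

Any real protocol would also need the plasma dry-matter correction discussed in the abstract; that step is omitted here.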
Accurate Prediction of the Dynamical Changes within the Second PDZ Domain of PTP1e
Cilia, Elisa; Vuister, Geerten W.; Lenaerts, Tom
2012-01-01
Experimental NMR relaxation studies have shown that peptide binding induces dynamical changes at the side-chain level throughout the second PDZ domain of PTP1e, identifying as such the collection of residues involved in long-range communication. Even though different computational approaches have identified subsets of residues that were qualitatively comparable, no quantitative analysis of the accuracy of these predictions had thus far been performed. Here, we show that our information-theoretical method produces quantitatively better results with respect to the experimental data than some of these earlier methods. Moreover, it provides a global network perspective on the effect experienced by the different residues involved in the process. We also show that these predictions are consistent within both the human and mouse variants of this domain. Together, these results improve the understanding of intra-protein communication and allostery in PDZ domains, underlining at the same time the necessity of producing similar data sets for further validation of these kinds of methods. PMID:23209399
Accurate compressed look up table method for CGH in 3D holographic display.
Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian
2015-12-28
Computer generated hologram (CGH) should be obtained with high accuracy and high speed in 3D holographic display, and most research focuses on the high speed. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method can obtain more accurate reconstructed images with lower memory usage compared with the split look up table method and the compressed look up table method, without sacrificing the computational speed in hologram generation, so it is called the accurate compressed look up table (AC-LUT) method. It is believed that the AC-LUT method is an effective method to calculate the CGH of 3D objects for real-time 3D holographic display, where huge amounts of data are required, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future.
NASA Astrophysics Data System (ADS)
Bozinoski, Radoslav
Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Up until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers have in the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 are presented and show efficiencies of 90% and higher for processes of no fewer than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions
NASA Technical Reports Server (NTRS)
Constantinescu, G.S.; Lele, S. K.
2000-01-01
The motivation of this work is the ongoing effort at the Center for Turbulence Research (CTR) to use large eddy simulation (LES) techniques to calculate the noise radiated by jet engines. The focus on engine exhaust noise reduction is motivated by the fact that a significant reduction has been achieved over the last decade on the other main sources of acoustic emissions of jet engines, such as the fan and turbomachinery noise, which gives increased priority to jet noise. To be able to propose methods to reduce the jet noise based on results of numerical simulations, one first has to be able to accurately predict the spatio-temporal distribution of the noise sources in the jet. Though a great deal of understanding of the fundamental turbulence mechanisms in high-speed jets was obtained from direct numerical simulations (DNS) at low Reynolds numbers, LES seems to be the only realistic available tool to obtain the necessary near-field information that is required to estimate the acoustic radiation of the turbulent compressible engine exhaust jets. The quality of jet-noise predictions is determined by the accuracy of the numerical method that has to capture the wide range of pressure fluctuations associated with the turbulence in the jet and with the resulting radiated noise, and by the boundary condition treatment and the quality of the mesh. Higher Reynolds numbers and coarser grids put in turn a higher burden on the robustness and accuracy of the numerical method used in this kind of jet LES simulations. As these calculations are often done in cylindrical coordinates, one of the most important requirements for the numerical method is to provide a flow solution that is not contaminated by numerical artifacts. The coordinate singularity is known to be a source of such artifacts. In the present work we use 6th order Pade schemes in the non-periodic directions to discretize the full compressible flow equations. It turns out that the quality of jet-noise predictions
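The sixth-order Padé (compact) differencing mentioned above can be sketched on a periodic grid. The coefficients below (α = 1/3, a = 14/9, b = 1/9) are the standard tridiagonal sixth-order compact-scheme values from the literature, assumed here for illustration; the CTR solver's non-periodic boundary closures and singularity treatment are not reproduced.

```python
import numpy as np

N = 64
h = 2 * np.pi / N
x = np.arange(N) * h
f = np.sin(x)

alpha, a, b = 1 / 3, 14 / 9, 1 / 9   # standard sixth-order compact coefficients

# Right-hand side: explicit central differences of the data.
rhs = a * (np.roll(f, -1) - np.roll(f, 1)) / (2 * h) \
    + b * (np.roll(f, -2) - np.roll(f, 2)) / (4 * h)

# Cyclic tridiagonal system  alpha*f'_{i-1} + f'_i + alpha*f'_{i+1} = rhs_i,
# solved densely here for clarity (a production code would use a cyclic
# Thomas algorithm instead of a dense solve).
A = np.eye(N) + alpha * (np.roll(np.eye(N), 1, axis=1)
                         + np.roll(np.eye(N), -1, axis=1))
fprime = np.linalg.solve(A, rhs)

max_err = np.max(np.abs(fprime - np.cos(x)))   # exact derivative is cos(x)
```

The implicit left-hand side is what gives compact schemes their spectral-like resolution per grid point, which matters for propagating acoustic fluctuations without numerical contamination.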
Eiber, Calvin D; Dokos, Socrates; Lovell, Nigel H; Suaning, Gregg J
2016-08-19
The capacity to quickly and accurately simulate extracellular stimulation of neurons is essential to the design of next-generation neural prostheses. Existing platforms for simulating neurons are largely based on finite-difference techniques; due to the complex geometries involved, the more powerful spectral or differential quadrature techniques cannot be applied directly. This paper presents a mathematical basis for the application of a spectral element method to the problem of simulating the extracellular stimulation of retinal neurons, which is readily extensible to neural fibers of any kind. The activating function formalism is extended to arbitrary neuron geometries, and a segmentation method to guarantee an appropriate choice of collocation points is presented. Differential quadrature may then be applied to efficiently solve the resulting cable equations. The capacity for this model to simulate action potentials propagating through branching structures and to predict minimum extracellular stimulation thresholds for individual neurons is demonstrated. The presented model is validated against published values for extracellular stimulation threshold and conduction velocity for realistic physiological parameter values. This model suggests that convoluted axon geometries are more readily activated by extracellular stimulation than linear axon geometries, which may have ramifications for the design of neural prostheses.
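The activating-function idea the paper generalizes can be illustrated for the simplest case: a straight fiber under a point current source, where the drive on each compartment is proportional to the second spatial difference of the extracellular potential. All geometry and parameter values below are illustrative assumptions; the paper's extension to arbitrary branched geometries and its spectral/differential-quadrature solver are far more involved.

```python
import numpy as np

# Extracellular potential along a straight fiber from a point current source
# at height z0 above it, in a homogeneous medium of conductivity sigma.
I, sigma, z0 = -1e-4, 0.3, 1e-3            # A, S/m, m (illustrative values)
xs = np.linspace(-5e-3, 5e-3, 101)          # node positions along the fiber (m)
Ve = I / (4 * np.pi * sigma * np.sqrt(xs ** 2 + z0 ** 2))

# Rattay's activating function: second spatial difference of Ve at each
# interior node; positive values mark sites driven toward depolarisation.
dx = xs[1] - xs[0]
fact = (Ve[2:] - 2 * Ve[1:-1] + Ve[:-2]) / dx ** 2

peak_node = xs[1 + np.argmax(np.abs(fact))]  # strongest drive is under the electrode
```

For a cathodic (negative) source the activating function is positive directly under the electrode, which is the classic prediction this formalism makes before any cable dynamics are solved.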
Bowler, Michael G.
2017-01-01
The humidity surrounding a sample is an important variable in scientific experiments. Biological samples in particular require not just a humid atmosphere but often a relative humidity (RH) that is in equilibrium with a stabilizing solution required to maintain the sample in the same state during measurements. The controlled dehydration of macromolecular crystals can lead to significant increases in crystal order, leading to higher diffraction quality. Devices that can accurately control the humidity surrounding crystals while monitoring diffraction have led to this technique being increasingly adopted, as the experiments become easier and more reproducible. Matching the RH to the mother liquor is the first step in allowing the stable mounting of a crystal. In previous work [Wheeler, Russi, Bowler & Bowler (2012). Acta Cryst. F68, 111–114], the equilibrium RHs were measured for a range of concentrations of the most commonly used precipitants in macromolecular crystallography and it was shown how these related to Raoult’s law for the equilibrium vapour pressure of water above a solution. However, a discrepancy between the measured values and those predicted by theory could not be explained. Here, a more precise humidity control device has been used to determine equilibrium RH points. The new results are in agreement with Raoult’s law. A simple argument in statistical mechanics is also presented, demonstrating that the equilibrium vapour pressure of a solvent is proportional to its mole fraction in an ideal solution: Raoult’s law. The same argument can be extended to the case where the solvent and solute molecules are of different sizes, as is the case with polymers. The results provide a framework for the correct maintenance of the RH surrounding a sample. PMID:28381983
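The central relation of the paper, Raoult's law, maps directly into a one-line calculation: at equilibrium the relative humidity over an ideal solution equals the mole fraction of the water. A minimal sketch with illustrative (non-paper) numbers:

```python
# Equilibrium RH over an ideal solution via Raoult's law:
# RH (%) = 100 * mole fraction of water.
M_WATER = 18.015                       # g/mol

def equilibrium_rh(grams_water, moles_solute):
    n_water = grams_water / M_WATER
    return 100.0 * n_water / (n_water + moles_solute)

# Example: 1 mol of an ideal, non-volatile solute in 1 kg of water.
rh = equilibrium_rh(1000.0, 1.0)       # roughly 98.2% RH
```

For polymer precipitants such as PEG, the paper's size-corrected argument replaces the mole fraction with a quantity reflecting the different molecular sizes; that variant is not sketched here.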
Simple, flexible, and accurate phase retrieval method for generalized phase-shifting interferometry.
Yatabe, Kohei; Ishikawa, Kenji; Oikawa, Yasuhiro
2017-01-01
This paper presents a non-iterative phase retrieval method from randomly phase-shifted fringe images. By combining the hyperaccurate least squares ellipse fitting method with the subspace method (usually called principal component analysis), a fast and accurate phase retrieval algorithm is realized. The proposed method is simple, flexible, and accurate. It can be easily coded without iteration, initial guess, or tuning parameters. Its flexibility comes from the fact that totally random phase-shifting steps and any number of fringe images greater than two are acceptable without any specific treatment. Finally, it is accurate because the hyperaccurate least squares method and the modified subspace method enable phase retrieval with a small error, as shown by the simulations. A MATLAB code, which is used in the experimental section, is provided within the paper to demonstrate its simplicity and ease of use.
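The subspace (PCA) half of the method can be sketched in a few lines: after subtracting the temporal mean, randomly phase-shifted fringes span a rank-2 subspace whose two components are quadrature fringe patterns, from which the phase follows by an arctangent. This is a hedged numpy illustration; the hyperaccurate ellipse-fitting refinement that makes the paper's method accurate is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
px = np.linspace(0.0, 1.0, 400)
phase = 2 * np.pi * 3.0 * px                 # true phase map (3 fringes)
a, b = 1.0, 0.7                              # background and modulation
shifts = rng.uniform(0, 2 * np.pi, 12)       # 12 random phase shifts

# Fringe stack: I_k(x) = a + b*cos(phase(x) + shift_k).
frames = a + b * np.cos(phase[None, :] + shifts[:, None])
centered = frames - frames.mean(axis=0)      # remove the background term

# The centered stack is (exactly) rank 2: its two right singular vectors
# span {b*cos(phase), b*sin(phase)}.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
est = np.arctan2(Vt[1], Vt[0])               # phase up to sign and offset
```

The sign and constant-offset ambiguity of `est` is inherent to the subspace step alone; the full method resolves it with the ellipse fit.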
Li, Liqi; Cui, Xiang; Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi
2014-01-01
Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Amongst homology-based approaches, the accuracies of protein structural class prediction are sufficiently high for high similarity datasets, but still far from being satisfactory for low similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high and low similarity datasets. This method is based on a Support Vector Machine (SVM) in conjunction with integrated features from the position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors by recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low similarity datasets.
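The recursive feature elimination (RFE) loop described above can be sketched compactly. Here an ordinary least-squares linear model stands in for the SVM (an assumption made to keep the sketch dependency-free): at each pass the model is refit on the surviving features and the one with the smallest absolute weight is discarded.

```python
import numpy as np

# RFE sketch on synthetic data where only features 0 and 1 carry signal.
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 6))
y = np.sign(X[:, 0] + X[:, 1])          # class labels in {-1, +1}

active = list(range(6))
while len(active) > 2:
    # Refit on the currently active features only.
    w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
    drop = int(np.argmin(np.abs(w)))    # lowest-ranked feature this pass
    active.pop(drop)

selected = sorted(active)               # surviving feature indices
```

The essential point, as in SVM-RFE proper, is that the ranking is recomputed after every elimination rather than taken from a single initial fit.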
Ydreborg, Magdalena; Lisovskaja, Vera; Lagging, Martin; Brehm Christensen, Peer; Langeland, Nina; Buhl, Mads Rauning; Pedersen, Court; Mørch, Kristine; Wejstål, Rune; Norkrans, Gunnar; Lindh, Magnus; Färkkilä, Martti; Westin, Johan
2014-01-01
Diagnosis of liver cirrhosis is essential in the management of chronic hepatitis C virus (HCV) infection. Liver biopsy is invasive and thus entails a risk of complications as well as a potential risk of sampling error. Therefore, non-invasive diagnostic tools are preferential. The aim of the present study was to create a model for accurate prediction of liver cirrhosis based on patient characteristics and biomarkers of liver fibrosis, including a panel of non-cholesterol sterols reflecting cholesterol synthesis, absorption and secretion. We evaluated variables with potential predictive significance for liver fibrosis in 278 patients originally included in a multicenter phase III treatment trial for chronic HCV infection. A stepwise multivariate logistic model selection was performed with liver cirrhosis, defined as Ishak fibrosis stage 5-6, as the outcome variable. A new index, referred to as the Nordic Liver Index (NoLI) in the paper, was based on the model: Log-odds (predicting cirrhosis) = -12.17 + (age × 0.11) + (BMI (kg/m²) × 0.23) + (D7-lathosterol (μg/100 mg cholesterol) × (-0.013)) + (platelet count (×10⁹/L) × (-0.018)) + (prothrombin INR × 3.69). The area under the ROC curve (AUROC) for prediction of cirrhosis was 0.91 (95% CI 0.86-0.96). The index was validated in a separate cohort of 83 patients and the AUROC for this cohort was similar (0.90; 95% CI: 0.82-0.98). In conclusion, the new index may complement other methods in diagnosing cirrhosis in patients with chronic HCV infection.
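The quoted index can be transcribed directly into code. The coefficients below are those stated in the abstract; the patient values are purely illustrative (hypothetical), not from the study.

```python
import math

def noli_log_odds(age, bmi, lathosterol, platelets, inr):
    """NoLI log-odds of cirrhosis. Units: age in years, BMI in kg/m2,
    D7-lathosterol in ug/100 mg cholesterol, platelets in 10^9/L,
    prothrombin INR dimensionless. Coefficients as quoted in the abstract."""
    return (-12.17 + 0.11 * age + 0.23 * bmi
            - 0.013 * lathosterol - 0.018 * platelets + 3.69 * inr)

# Hypothetical patient, for illustration only.
lo = noli_log_odds(age=60, bmi=25.0, lathosterol=100.0, platelets=120.0, inr=1.2)
p_cirrhosis = 1.0 / (1.0 + math.exp(-lo))   # logistic transform of the log-odds
```

As with any logistic index, the log-odds scale is linear in the predictors while the probability scale saturates at the extremes.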
A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications
Bronevetsky, G; de Supinski, B; Schulz, M
2009-02-13
Understanding the soft error vulnerability of supercomputer applications is critical as these systems use ever larger numbers of devices that have decreasing feature sizes and, thus, an increasing frequency of soft errors. As many large scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.
Searching for Computational Strategies to Accurately Predict pKas of Large Phenolic Derivatives.
Rebollar-Zepeda, Aida Mariana; Campos-Hernández, Tania; Ramírez-Silva, María Teresa; Rojas-Hernández, Alberto; Galano, Annia
2011-08-09
Twenty-two reaction schemes have been tested within the cluster-continuum model, including up to seven explicit water molecules. They have been used in conjunction with nine different methods, within density functional theory and with second-order Møller-Plesset. The quality of the pKa predictions was found to be strongly dependent on the chosen scheme, while only moderately influenced by the method of calculation. We recommend the E1 reaction scheme [HA + OH⁻(3H2O) ↔ A⁻(H2O) + 3H2O], since it yields mean unsigned errors (MUE) lower than 1 unit of pKa for most of the tested functionals. The best pKa values obtained from this reaction scheme are those involving calculations with the PBE0 (MUE = 0.77), TPSS (MUE = 0.82), BHandHLYP (MUE = 0.82), and B3LYP (MUE = 0.86) functionals. Compared to the proton-exchange method, which also gives very small MUE values, this scheme has the additional advantage of being independent of experimental data. It should be kept in mind, however, that these recommendations are valid within the cluster-continuum model, using the polarizable continuum model in conjunction with the united atom Hartree-Fock cavity and the strategy based on thermodynamic cycles. Changes in any of these aspects of the methodology may lead to different outcomes.
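The last step of any such scheme, converting a computed aqueous reaction free energy into a pKa, can be sketched as follows. For the hydroxide-based reaction HA + OH⁻ → A⁻ + H2O the equilibrium constant is Ka/Kw, giving pKa = pKw + ΔG/(RT ln 10). This is a simplified version of the paper's thermodynamic-cycle machinery (explicit water clusters and solvation cycles are omitted), and the ΔG value used is illustrative, not a result from the paper.

```python
import math

R = 1.987204e-3        # gas constant, kcal/(mol K)
T = 298.15             # K
PKW = 14.0             # ionic product of water at 25 C

def pka_from_dg(dg_kcal):
    """pKa from the aqueous free energy (kcal/mol) of HA + OH- -> A- + H2O.
    Since K = Ka/Kw for this reaction, pKa = pKw + dG/(RT ln 10)."""
    return PKW + dg_kcal / (R * T * math.log(10.0))

pka = pka_from_dg(-5.457)   # illustrative computed reaction free energy
```

At 298 K the conversion factor RT ln 10 is about 1.36 kcal/mol per pKa unit, which is why sub-kcal accuracy in ΔG is needed for MUE values below 1.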
Method for accurate growth of vertical-cavity surface-emitting lasers
Chalmers, S.A.; Killeen, K.P.; Lear, K.L.
1995-03-14
The authors report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, they can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%. 4 figs.
Method for accurate growth of vertical-cavity surface-emitting lasers
Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.
1995-01-01
We report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, we can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%.
Shakibaee, Abolfazl; Faghihzadeh, Soghrat; Alishiri, Gholam Hossein; Ebrahimpour, Zeynab; Faradjzadeh, Shahram; Sobhani, Vahid; Asgari, Alireza
2015-01-01
Background: Body composition varies according to lifestyle (i.e., caloric intake and expenditure). Therefore, it is wise to record military personnel's body composition periodically and encourage those who abide by the regulations. Different methods have been introduced for body composition assessment: invasive and non-invasive. Amongst them, the Jackson and Pollock equation is most popular. Objectives: The recommended anthropometric prediction equations for assessing men's body composition were compared with the dual-energy X-ray absorptiometry (DEXA) gold standard to develop a modified equation to assess body composition and obesity quantitatively among Iranian military men. Patients and Methods: A total of 101 military men aged 23 - 52 years old with a mean age of 35.5 years were recruited and evaluated in the present study (average height, 173.9 cm and weight, 81.5 kg). The body-fat percentages of subjects were assessed both with anthropometric assessment and a DEXA scan. The data obtained from these two methods were then compared using multiple regression analysis. Results: The mean and standard deviation of the body fat percentage from the DEXA assessment was 21.2 ± 4.3, and the body fat percentages obtained from the three Jackson and Pollock 3-, 4- and 7-site equations were 21.1 ± 5.8, 22.2 ± 6.0 and 20.9 ± 5.7, respectively. There was a strong correlation between these three equations and DEXA (R² = 0.98). Conclusions: The mean percentage of body fat obtained from the three equations of Jackson and Pollock was very close to that obtained from DEXA; however, we suggest using a modified Jackson-Pollock 3-site equation for volunteer military men because the 3-site equation analysis method is simpler and faster than the other methods. PMID:26715964
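The 3-site calculation the study favours can be sketched as follows. The coefficients are the standard published Jackson-Pollock 3-site (chest, abdomen, thigh) values for men combined with the Siri body-density conversion; they should be checked against the original publications, and the modified Iranian-specific equation the study proposes is not reproduced here.

```python
# Jackson-Pollock 3-site body-fat estimate for men, with Siri's conversion
# from body density to percent body fat. Standard literature coefficients,
# assumed here for illustration; not the study's modified equation.
def jackson_pollock_3site_men(sum_skinfolds_mm, age_years):
    density = (1.10938
               - 0.0008267 * sum_skinfolds_mm
               + 0.0000016 * sum_skinfolds_mm ** 2
               - 0.0002574 * age_years)
    return 495.0 / density - 450.0      # Siri equation: % body fat

# Illustrative subject: 60 mm summed skinfolds, age 35.
body_fat_pct = jackson_pollock_3site_men(sum_skinfolds_mm=60.0, age_years=35)
```

The appeal noted in the conclusions is visible here: the 3-site estimate needs only three skinfold measurements and age, against seven sites for the 7-site variant.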
NASA Astrophysics Data System (ADS)
Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S.; Shirley, Eric L.; Prendergast, David
2017-03-01
Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for the chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predicting x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.
NASA Astrophysics Data System (ADS)
Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.
2013-06-01
This study encompasses columnar ozone modelling in peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the entire period 2003-2008, was employed to develop models to predict the value of columnar ozone (O3) in the study area. A combined method, multiple regression coupled with principal component analysis (PCA), was used to predict columnar ozone and to improve the prediction accuracy. Separate analyses were carried out for the northeast monsoon (NEM) and southwest monsoon (SWM) seasons. O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, and positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM season periods. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameters as predictors. A variable selection method based on high loadings on varimax-rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model. It was found that an increase in columnar O3 is associated with an increase in AST, SSKT, AT, and CO and with a drop in CH4, H2Ovapour, RH, and MSP. Fitting the best models for columnar O3 using eight of the independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables appearing in both regression equations were SSKT, CH4 and RH, and the principal precursor of columnar O3 in both the NEM and SWM seasons was SSKT.
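The regression-plus-PCA combination described above can be sketched as principal-component regression. The data below are synthetic stand-ins for the eight AIRS parameters, and the choice of four retained components is an illustrative assumption, not the paper's configuration.

```python
import numpy as np

# Synthetic stand-in for the eight atmospheric predictors (AST, CO, CH4, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
true_w = np.array([0.5, -0.3, 0.2, 0.1, -0.4, 0.25, 0.15, -0.2])
y = X @ true_w + rng.normal(scale=0.1, size=200)   # "columnar ozone"

# Step 1: standardize and rotate onto principal components (via SVD).
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 4                                   # retain the leading components (illustrative)
scores = Xs @ Vt[:k].T

# Step 2: ordinary least squares of ozone on the retained component scores.
A = np.column_stack([np.ones(len(y)), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Regressing on component scores rather than the raw predictors removes the collinearity among correlated atmospheric variables, which is the motivation for the combined approach.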
How Accurate Is the Prediction of Maximal Oxygen Uptake with Treadmill Testing?
Wicks, John R.; Oldridge, Neil B.
2016-01-01
Background Cardiorespiratory fitness measured by treadmill testing has prognostic significance in determining mortality with cardiovascular and other chronic disease states. The accuracy of a recently developed method for estimating maximal oxygen uptake (VO2peak), the heart rate index (HRI), which depends only on heart rate (HR), was tested against oxygen uptake (VO2) either measured or predicted from conventional treadmill parameters (speed, incline, protocol time). Methods The HRI equation, METs = 6 × HRI − 5, where HRI = maximal HR/resting HR, provides a surrogate measure of VO2peak. Forty large-scale treadmill studies were identified through a systematic search using MEDLINE, Google Scholar and Web of Science in which VO2peak was either measured (TM-VO2meas; n = 20) or predicted (TM-VO2pred; n = 20) from treadmill parameters. All studies were required to have reported group mean data for both resting and maximal HRs for determination of HR index-derived oxygen uptake (HRI-VO2). Results The 20 studies with measured VO2 (TM-VO2meas) involved 11,477 participants (median 337); the 20 studies with predicted VO2 (TM-VO2pred) involved a total of 105,044 participants (median 3,736). A difference of only 0.4% was seen between mean (±SD) VO2peak for TM-VO2meas and HRI-VO2 (6.51±2.25 METs and 6.54±2.28, respectively; p = 0.84). In contrast, there was a highly significant 21.1% difference between mean (±SD) TM-VO2pred and HRI-VO2 (8.12±1.85 METs and 6.71±1.92, respectively; p<0.001). Conclusion Although mean TM-VO2meas and HRI-VO2 were almost identical, mean TM-VO2pred was more than 20% greater than mean HRI-VO2. PMID:27875547
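The HRI surrogate given in the abstract is a one-line formula; a minimal sketch follows. The conversion 1 MET = 3.5 ml O2/kg/min is the standard convention and is our addition, not stated in the abstract.

```python
def hri_mets(hr_max, hr_rest):
    """Heart rate index surrogate: METs = 6 * HRI - 5, with HRI = HRmax / HRrest."""
    return 6.0 * (hr_max / hr_rest) - 5.0

def hri_vo2peak_ml_kg_min(hr_max, hr_rest, ml_per_met=3.5):
    """Convert the MET estimate to VO2peak, assuming 1 MET = 3.5 ml/kg/min."""
    return hri_mets(hr_max, hr_rest) * ml_per_met

# Hypothetical subject: maximal HR 180 bpm, resting HR 60 bpm -> HRI = 3.0
mets = hri_mets(180.0, 60.0)   # 6 * 3.0 - 5 = 13.0 METs
```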
Prediction of ozone concentrations using nonlinear prediction method
NASA Astrophysics Data System (ADS)
Abd Hamid, Nor Zila; Md Noorani, Mohd Salmi; Juneng, Liew; Latif, Mohd Talib
2013-04-01
Prediction of ozone (O3) is very important because O3 affects human health, human activities and more. The nonlinear prediction method, developed from ideas in chaos theory, is used to predict O3 concentrations. There are two steps in the method. First, the observed one-dimensional data are reconstructed into a multi-dimensional phase space. Second, predictions are made in the reconstructed phase space through a local linear approximation method. Hourly O3 concentrations observed in Shah Alam city, located in the state of Selangor, Malaysia, were studied. Predictions were found to be in close agreement with observations; the correlation coefficient obtained in this study is 0.9097. This demonstrates the suitability of the nonlinear prediction method for predicting hourly O3 concentrations. At the end of this paper, suggestions are made for better prediction in the future.
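The two steps can be sketched as a delay-coordinate embedding followed by a nearest-neighbour forecast. All parameter values (embedding dimension 3, delay 1, 5 neighbours) are illustrative assumptions, a logistic-map series stands in for the hourly O3 record, and a neighbour average stands in for the full local linear approximation.

```python
import numpy as np

def embed(x, m, tau):
    """Delay-coordinate embedding: row t is [x_t, x_{t-tau}, ..., x_{t-(m-1)tau}]."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[(m - 1 - j) * tau : (m - 1 - j) * tau + n]
                            for j in range(m)])

def predict_next(x, m=3, tau=1, k=5):
    """Predict x[t+1] by averaging the successors of the k nearest phase-space neighbours."""
    Y = embed(x, m, tau)
    query, history = Y[-1], Y[:-1]          # last state vs. earlier states
    d = np.linalg.norm(history - query, axis=1)
    idx = np.argsort(d)[:k]                 # indices of the k nearest neighbours
    return float(x[idx + (m - 1) * tau + 1].mean())

# Synthetic chaotic series (logistic map) stands in for the observed O3 data.
x = np.empty(2000)
x[0] = 0.4
for t in range(1999):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
pred = predict_next(x)
```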
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation time. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented with focus on two aspects: firstly, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in calculation accuracy; secondly, a variety of MC acceleration methods were applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
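The variance-reduction idea invoked above can be illustrated generically (this is not SuperMC code): estimate I = ∫₀¹ x e⁻¹⁰ˣ dx both by plain Monte Carlo and by importance sampling from a density shaped like the integrand, so that samples concentrate where the integrand matters.

```python
import math
import random

random.seed(0)
N = 20000
exact = 0.01 - 0.11 * math.exp(-10.0)        # closed form by integration by parts

# Plain Monte Carlo: uniform samples, most landing where the integrand is ~ 0.
plain = sum(u * math.exp(-10.0 * u) for u in (random.random() for _ in range(N))) / N

# Importance sampling from p(x) = 10 e^{-10x} / (1 - e^{-10}) on [0, 1].
Z = (1.0 - math.exp(-10.0)) / 10.0           # so the weight f(x)/p(x) = x * Z

def sample_p():
    """Inverse-CDF draw from the truncated exponential density p(x)."""
    u = random.random()
    return -math.log(1.0 - u * (1.0 - math.exp(-10.0))) / 10.0

weighted = sum(sample_p() * Z for _ in range(N)) / N
```

With the same sample budget, the importance-sampled estimator has a far smaller variance because the weight x·Z varies much less than the raw integrand does under uniform sampling.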
A new method to synthesize competitor RNAs for accurate analyses by competitive RT-PCR.
Ishibashi, O
1997-12-03
A method to synthesize competitor RNAs as internal standards for competitive RT-PCR is improved by using the long accurate PCR (LA-PCR) technique. Competitor templates synthesized by the new method are almost the same in length, and possibly in secondary structure, as target mRNAs to be quantified except that they include the short deletion within the segments to be amplified. This allows the reverse transcription to be achieved with almost the same efficiency from both target mRNAs and competitor RNAs. Therefore, more accurate quantification can be accomplished by using such competitor RNAs.
Predictive sensor method and apparatus
NASA Technical Reports Server (NTRS)
Nail, William L. (Inventor); Koger, Thomas L. (Inventor); Cambridge, Vivien (Inventor)
1990-01-01
A predictive algorithm is used to determine, in near real time, the steady state response of a slow-responding sensor, such as a hydrogen gas sensor of the type that produces an output current proportional to the partial pressure of the hydrogen present. A microprocessor connected to the sensor samples the sensor output at small regular time intervals and predicts the steady state response of the sensor to a perturbation in the parameter being sensed, based on the beginning and end samples of the sensor output for the current sample time interval.
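The abstract does not give the patented algorithm itself; a common closed form for this task, assuming a first-order (exponential) sensor response y(t) = y_ss + (y0 − y_ss)·exp(−t/τ), recovers the steady-state value from three equally spaced early samples. The assumed model and numbers are ours.

```python
import math

def predict_steady_state(y1, y2, y3):
    """Steady-state estimate from samples at t, t+dt, t+2*dt of a
    first-order exponential response (geometric-progression identity)."""
    denom = y1 + y3 - 2.0 * y2
    if abs(denom) < 1e-12:
        raise ValueError("samples are collinear; response appears already settled")
    return (y1 * y3 - y2 * y2) / denom

# Simulated slow hydrogen-sensor step response: y_ss = 5.0, tau = 30 s.
def response(t):
    return 5.0 + (0.0 - 5.0) * math.exp(-t / 30.0)

est = predict_steady_state(response(2.0), response(4.0), response(6.0))
```

For an exact exponential the three-sample formula returns y_ss identically, so the steady state of a 30-second sensor is known after only 6 seconds of data.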
Towards more accurate wind and solar power prediction by improving NWP model physics
NASA Astrophysics Data System (ADS)
Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo
2014-05-01
The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of the remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts; consequently, well-timed energy trading on the stock market is possible and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m above ground are used to estimate the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during
An accurate method of extracting fat droplets in liver images for quantitative evaluation
NASA Astrophysics Data System (ADS)
Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie
2015-03-01
Steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the presence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of color, shape and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.
Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S
2009-04-01
The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin.
A machine learning approach to the accurate prediction of multi-leaf collimator positional errors
NASA Astrophysics Data System (ADS)
Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon
2016-03-01
Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose-volume histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose-volume parameters were in closer agreement with the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be
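The feature-to-error regression workflow can be sketched as follows. Everything here is a synthetic stand-in: made-up leaf features, a fabricated error signal, and a plain least-squares fit in place of the paper's learning algorithm and DynaLog-derived targets.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
position = rng.uniform(-200.0, 200.0, n)    # leaf position, mm (hypothetical)
velocity = rng.uniform(0.0, 40.0, n)        # leaf speed, mm/s (hypothetical)
toward_iso = rng.integers(0, 2, n)          # 1 if moving toward the isocenter

# Synthetic "delivered minus planned" leaf error: grows with speed and flips
# sign with travel direction, plus measurement noise. Entirely fabricated.
error = 0.02 * velocity * (1 - 2 * toward_iso) + rng.normal(0.0, 0.05, n)

# Least-squares model with an interaction term as the "learned" predictor.
X = np.column_stack([np.ones(n), position, velocity, toward_iso,
                     velocity * toward_iso])
w, *_ = np.linalg.lstsq(X, error, rcond=None)
rmse = float(np.sqrt(np.mean((error - X @ w) ** 2)))
```

The fitted model recovers the velocity- and direction-dependent structure, leaving a residual RMSE at the level of the injected noise, which mirrors the paper's finding that plan-derived features carry most of the predictable error.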
The U.S. Department of Agriculture Automated Multiple-Pass Method accurately assesses sodium intakes
Technology Transfer Automated Retrieval System (TEKTRAN)
Accurate and practical methods to monitor sodium intake of the U.S. population are critical given current sodium reduction strategies. While the gold standard for estimating sodium intake is the 24 hour urine collection, few studies have used this biomarker to evaluate the accuracy of a dietary ins...
LSimpute: accurate estimation of missing values in microarray data with least squares methods.
Bø, Trond Hellem; Dysvik, Bjarte; Jonassen, Inge
2004-02-20
Microarray experiments generate data sets with information on the expression levels of thousands of genes in a set of biological samples. Unfortunately, such experiments often produce multiple missing expression values, normally due to various experimental problems. As many algorithms for gene expression analysis require a complete data matrix as input, the missing values have to be estimated in order to analyze the available data. Alternatively, genes and arrays can be removed until no missing values remain. However, for genes or arrays with only a small number of missing values, it is desirable to impute those values. For the subsequent analysis to be as informative as possible, it is essential that the estimates for the missing gene expression values are accurate. A small amount of badly estimated missing values in the data might be enough for clustering methods, such as hierarchical clustering or K-means clustering, to produce misleading results. Thus, accurate methods for missing value estimation are needed. We present novel methods for estimation of missing values in microarray data sets that are based on the least squares principle, and that utilize correlations between both genes and arrays. For this set of methods, we use the common reference name LSimpute. We compare the estimation accuracy of our methods with the widely used KNNimpute on three complete data matrices from public data sets by randomly knocking out data (labeling as missing). From these tests, we conclude that our LSimpute methods produce estimates that consistently are more accurate than those obtained using KNNimpute. Additionally, we examine a more classic approach to missing value estimation based on expectation maximization (EM). We refer to our EM implementations as EMimpute, and the estimate errors using the EMimpute methods are compared with those produced by our novel methods. The results indicate that on average, the estimates from our best performing LSimpute method are at least as
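A single-entry sketch in the spirit of the gene-based least-squares estimate: regress the gene with the missing value on its best-correlated complete gene, then predict the missing entry. The real LSimpute combines many such regressions (and an array-based estimate); the data and function names here are illustrative.

```python
import numpy as np

def ls_impute_one(data, gene, array):
    """Impute data[gene, array] (NaN) by least-squares regression on the
    other gene most correlated with it over the observed arrays."""
    mask = ~np.isnan(data[gene])            # arrays where the target is observed
    best_r, best_fit, best_g = 0.0, None, None
    for g in range(data.shape[0]):
        if g == gene or np.isnan(data[g]).any():
            continue
        r = np.corrcoef(data[g, mask], data[gene, mask])[0, 1]
        if abs(r) > abs(best_r):
            slope, intercept = np.polyfit(data[g, mask], data[gene, mask], 1)
            best_r, best_fit, best_g = r, (slope, intercept), g
    slope, intercept = best_fit
    return slope * data[best_g, array] + intercept

rng = np.random.default_rng(7)
expr = rng.normal(size=(6, 10))
expr[1] = 2.0 * expr[0] + 0.1 * rng.normal(size=10)   # gene 1 tracks gene 0
truth = expr[1, 3]
expr[1, 3] = np.nan                                    # knock out one value
estimate = ls_impute_one(expr, 1, 3)
```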
On an efficient and accurate method to integrate restricted three-body orbits
NASA Technical Reports Server (NTRS)
Murison, Marc A.
1989-01-01
This work is a quantitative analysis of the advantages of the Bulirsch-Stoer (1966) method, demonstrating that this method is certainly worth considering when working with small N dynamical systems. The results, qualitatively suspected by many users, are quantitatively confirmed as follows: (1) the Bulirsch-Stoer extrapolation method is very fast and moderately accurate; (2) regularization of the equations of motion stabilizes the error behavior of the method and is, of course, essential during close approaches; and (3) when applicable, a manifold-correction algorithm reduces numerical errors to the limits of machine accuracy. In addition, for the specific case of the restricted three-body problem, even a small eccentricity for the orbit of the primaries drastically affects the accuracy of integrations, whether regularized or not; the circular restricted problem integrates much more accurately.
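A miniature of the Bulirsch-Stoer idea discussed above: the modified-midpoint method with n substeps has an error expansion in even powers of the step size, so results for increasing n can be Richardson-extrapolated to h → 0. The demonstration problem y′ = −y is ours; the restricted three-body equations need the full machinery (step control, regularization).

```python
import math

def modified_midpoint(f, y0, t0, H, n):
    """Advance y' = f(t, y) over one big step H using n midpoint substeps."""
    h = H / n
    z0, z1 = y0, y0 + h * f(t0, y0)
    for i in range(1, n):
        z0, z1 = z1, z0 + 2.0 * h * f(t0 + i * h, z1)
    return 0.5 * (z0 + z1 + h * f(t0 + H, z1))

def bulirsch_stoer_step(f, y0, t0, H, kmax=5):
    """Polynomial (Neville) extrapolation in h^2 of modified-midpoint results."""
    ns = [2 * (k + 1) for k in range(kmax)]               # substep counts 2, 4, 6, ...
    T = [[modified_midpoint(f, y0, t0, H, n)] for n in ns]
    for k in range(1, kmax):
        for j in range(1, k + 1):
            r = (ns[k] / ns[k - j]) ** 2
            T[k].append(T[k][j - 1] + (T[k][j - 1] - T[k - 1][j - 1]) / (r - 1.0))
    return T[kmax - 1][kmax - 1]

y1 = bulirsch_stoer_step(lambda t, y: -y, 1.0, 0.0, 1.0)  # exact answer: e^{-1}
```

A handful of low-order midpoint passes plus extrapolation reaches near machine-level accuracy over the whole unit step, which is why the method is fast for smooth few-body problems.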
Accurate determination of specific heat at high temperatures using the flash diffusivity method
NASA Technical Reports Server (NTRS)
Vandersande, J. W.; Zoltan, A.; Wood, C.
1989-01-01
The flash diffusivity method of Parker et al. (1961) was used to accurately measure the specific heat of test samples simultaneously with thermal diffusivity, thus obtaining the thermal conductivity of these materials directly. The accuracy of data obtained on two types of materials (n-type silicon-germanium alloys and niobium) was ±3 percent. It is shown that the method is applicable up to at least 1300 K.
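The underlying flash-method relations of Parker et al. can be sketched directly: the half-rise time of the rear-face temperature gives the diffusivity, the absorbed pulse energy gives the specific heat, and their product with the density gives the conductivity. The numeric inputs below are invented for illustration, not data from the paper.

```python
import math

def flash_diffusivity(thickness_m, t_half_s):
    """Parker relation: alpha = 1.38 * L^2 / (pi^2 * t_half)  [m^2/s]."""
    return 1.38 * thickness_m ** 2 / (math.pi ** 2 * t_half_s)

def flash_specific_heat(q_abs_j, mass_kg, dT_max_k):
    """c_p = Q / (m * dT_max)  [J/(kg K)], from the adiabatic temperature rise."""
    return q_abs_j / (mass_kg * dT_max_k)

def conductivity(alpha, rho, c_p):
    """k = alpha * rho * c_p  [W/(m K)]."""
    return alpha * rho * c_p

alpha = flash_diffusivity(2.0e-3, 0.050)     # 2 mm sample, 50 ms half-rise
cp = flash_specific_heat(1.2, 0.5e-3, 3.4)   # 1.2 J absorbed, 0.5 g, 3.4 K rise
k = conductivity(alpha, 5300.0, cp)          # assumed density 5300 kg/m^3
```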
Sensor data fusion for accurate cloud presence prediction using Dempster-Shafer evidence theory.
Li, Jiaming; Luo, Suhuai; Jin, Jesse S
2010-01-01
Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. Potential application areas of the algorithm include renewable power for virtual power stations, where predicting cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data recorded as the benchmark. Our experiments indicate that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.
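The core of the evidence-theory fusion is Dempster's rule of combination, sketched here over the frame {cloud, clear} with an explicit "unknown" focal element. The sensor mass assignments are hypothetical; the paper does not publish its mass functions.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions keyed by frozenset focal
    elements, redistributing (normalizing away) the conflicting mass."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict; the sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

CLOUD, CLEAR = frozenset({"cloud"}), frozenset({"clear"})
UNKNOWN = CLOUD | CLEAR
sensor1 = {CLOUD: 0.6, CLEAR: 0.1, UNKNOWN: 0.3}   # hypothetical radiation sensor
sensor2 = {CLOUD: 0.5, CLEAR: 0.2, UNKNOWN: 0.3}
fused = dempster_combine(sensor1, sensor2)
```

Two weakly agreeing sensors yield a fused belief in "cloud" higher than either sensor alone, while shrinking the "unknown" mass, which is exactly the behaviour the abstract reports.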
Interim prediction method for jet noise
NASA Technical Reports Server (NTRS)
Stone, J. R.
1974-01-01
A method is provided for predicting jet noise for a wide range of nozzle geometries and operating conditions of interest for aircraft engines. Jet noise theory, data and existing prediction methods were reviewed, and based on this information an interim method of jet noise prediction is proposed. Problem areas are identified where further research is needed to improve the prediction method. The method predicts only the noise generated by the exhaust jets mixing with the surrounding air and does not include other noises emanating from the engine exhaust, such as combustion and machinery noise generated inside the engine (i.e., core noise). It does, however, include thrust reverser noise. Prediction relations are provided for conical nozzles, plug nozzles, coaxial nozzles and slot nozzles.
Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke
2015-11-15
Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
Margot Gerritsen
2008-10-31
Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline based solver for gas injection processes that is computationally very attractive: as compared to traditional Eulerian solvers in use by industry it computes solutions with a computational speed orders of magnitude higher and a comparable accuracy provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly reduces wall-clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We designed several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids
Kim, Minseung; Rai, Navneet; Zorraquino, Violeta; Tagkopoulos, Ilias
2016-01-01
A significant obstacle in training predictive cell models is the lack of integrated data sources. We develop semi-supervised normalization pipelines and perform experimental characterization (growth, transcriptional, proteome) to create Ecomics, a consistent, quality-controlled multi-omics compendium for Escherichia coli with cohesive meta-data information. We then use this resource to train a multi-scale model that integrates four omics layers to predict genome-wide concentrations and growth dynamics. The genetic and environmental ontology reconstructed from the omics data is substantially different and complementary to the genetic and chemical ontologies. The integration of different layers confers an incremental increase in the prediction performance, as does the information about the known gene regulatory and protein-protein interactions. The predictive performance of the model ranges from 0.54 to 0.87 for the various omics layers, which far exceeds various baselines. This work provides an integrative framework of omics-driven predictive modelling that is broadly applicable to guide biological discovery. PMID:27713404
NASA Astrophysics Data System (ADS)
Lau, K.-C.; Ng, C. Y.
2006-01-01
The ionization energies (IEs) for the 2-propyl (2-C3H7), phenyl (C6H5), and benzyl (C6H5CH2) radicals have been calculated by the wave-function-based ab initio CCSD(T)/CBS approach, which involves the approximation to the complete basis set (CBS) limit at the coupled cluster level with single and double excitations plus quasiperturbative triple excitation [CCSD(T)]. The zero-point vibrational energy correction, the core-valence electronic correction, and the scalar relativistic effect correction have been also made in these calculations. Although a precise IE value for the 2-C3H7 radical has not been directly determined before due to the poor Franck-Condon factor for the photoionization transition at the ionization threshold, the experimental value deduced indirectly using other known energetic data is found to be in good accord with the present CCSD(T)/CBS prediction. The comparison between the predicted value through the focal-point analysis and the highly precise experimental value for the IE(C6H5CH2) determined in the previous pulsed field ionization photoelectron (PFI-PE) study shows that the CCSD(T)/CBS method is capable of providing an accurate IE prediction for C6H5CH2, achieving an error limit of 35 meV. The benchmarking of the CCSD(T)/CBS IE(C6H5CH2) prediction suggests that the CCSD(T)/CBS IE(C6H5) prediction obtained here has a similar accuracy of 35 meV. Taking into account this error limit for the CCSD(T)/CBS prediction and the experimental uncertainty, the CCSD(T)/CBS IE(C6H5) value is also consistent with the IE(C6H5) reported in the previous HeI photoelectron measurement. Furthermore, the present study provides support for the conclusion that the CCSD(T)/CBS approach with high-level energy corrections can be used to provide reliable IE predictions for C3-C7 hydrocarbon radicals with an uncertainty of +/-35 meV. Employing the atomization scheme, we have also computed the 0 K (298 K) heats of formation in kJ/mol at the CCSD(T)/CBS level for 2-C3H7
NASA Astrophysics Data System (ADS)
Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus
2016-04-01
The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?
Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.
2016-01-01
The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81), we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF-associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing interventions. PMID:26797113
NASA Astrophysics Data System (ADS)
Ban, Yunyun; Chen, Tianqin; Yan, Jun; Lei, Tingwu
2017-04-01
The measurement of sediment concentration in water is of great importance in soil erosion research and soil and water loss monitoring systems. The traditional weighing method has long been the foundation of all the other measuring methods and instrument calibration. The development of a new method to replace the traditional oven-drying method is of interest in research and practice for the quick and efficient measurement of sediment concentration, especially field measurements. A new method is advanced in this study for accurately measuring the sediment concentration based on the accurate measurement of the mass of the sediment-water mixture in a confined constant volume container (CVC). A sediment-laden water sample is put into the CVC to determine its mass before the CVC is filled with water and weighed again for the total mass of the water and sediments in the container. The known volume of the CVC, the mass of the sediment-laden water, and the sediment particle density are used to calculate the mass of water that is replaced by sediments, and hence the sediment concentration of the sample. The influence of water temperature was corrected for by measuring the water temperature before each measurement to determine the water density. The CVC was used to eliminate the surface tension effect so as to obtain the accurate volume of the water and sediment mixture. Experimental results showed that the method was capable of measuring sediment concentrations from 0.5 up to 1200 kg m⁻³. A good linear relationship existed between the designed and measured sediment concentrations, with all coefficients of determination greater than 0.999 and an averaged relative error of less than 0.2%. These results indicate that the new method is capable of measuring the full range of sediment concentrations above 0.5 kg m⁻³ and can replace the traditional oven-drying method as a standard method for evaluating and calibrating other methods.
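The mass balance behind the CVC method can be sketched as follows. The mass excess of the filled container over a pure-water fill is due to sediment occupying volume that water would otherwise fill, which yields the sediment mass and, with the sample volume, the concentration. Function names, the densities used, and the example values are illustrative assumptions, not the paper's calibration data.

```python
def sediment_concentration(m_sample, m_total, v_cvc,
                           rho_w=998.2, rho_s=2650.0):
    """Sediment concentration (kg/m^3) from two CVC weighings.

    m_sample : mass of the sediment-laden sample placed in the CVC (kg)
    m_total  : total mass after topping the CVC up with water (kg)
    v_cvc    : calibrated CVC volume (m^3)
    rho_w    : water density at the measured temperature (kg/m^3)
    rho_s    : sediment particle density (kg/m^3, quartz assumed)
    """
    # Mass excess over a pure-water fill comes from sediment that
    # displaces an equal volume of water:
    #   delta_m = m_s * (1 - rho_w / rho_s)
    delta_m = m_total - rho_w * v_cvc
    m_s = delta_m / (1.0 - rho_w / rho_s)
    # Volume of the original sample = its water volume + sediment volume
    v_sample = (m_sample - m_s) / rho_w + m_s / rho_s
    return m_s / v_sample

# Illustrative: 1 L container, 0.5 kg sample containing 0.05 kg sediment
c = sediment_concentration(m_sample=0.5, m_total=1.029366, v_cvc=1e-3)
```

A pure-water sample gives zero excess mass and hence zero concentration, which is a useful sanity check on the arithmetic.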
Dynamics of Flexible MLI-type Debris for Accurate Orbit Prediction
2014-09-01
SUBJECT TERMS: EOARD, orbital debris, HAMR objects, multi-layered insulation, orbital dynamics, orbit predictions, orbital propagation
Stempler, Shiri; Waldman, Yedael Y; Wolf, Lior; Ruppin, Eytan
2012-09-01
Numerous metabolic alterations are associated with the impairment of brain cells in Alzheimer's disease (AD). Here we use gene expression microarrays of both whole hippocampus tissue and hippocampal neurons of AD patients to investigate the ability of metabolic gene expression to predict AD progression and its cognitive decline. We find that the prediction accuracy of different AD stages is markedly higher when using neuronal expression data (0.9) than when using whole tissue expression (0.76). Furthermore, the metabolic genes' expression is shown to be as effective in predicting AD severity as the entire gene list. Remarkably, a regression model from hippocampal metabolic gene expression leads to a marked correlation of 0.57 with the Mini-Mental State Examination cognitive score. Notably, the expression of top predictive neuronal genes in AD is significantly higher than that of other metabolic genes in the brains of healthy subjects. Altogether, the analyses point to a subset of metabolic genes that is strongly associated with normal brain functioning and whose disruption plays a major role in AD.
Predicting repeat self-harm in children--how accurate can we expect to be?
Chitsabesan, Prathiba; Harrington, Richard; Harrington, Valerie; Tomenson, Barbara
2003-01-01
The main objective of the study was to find which variables predict repetition of deliberate self-harm in children. The study is based on a group of children who took part in a randomized controlled trial investigating the effects of a home-based family intervention for children who had deliberately poisoned themselves. These children had a range of baseline and outcome measures collected on two occasions (two- and six-month follow-up). Outcome data were collected from 149 (92%) of the initial 162 children over the six months. Twenty-three children made a further deliberate self-harm attempt within the follow-up period. A number of variables at baseline were found to be significantly associated with repeat self-harm. Parental mental health and a history of previous attempts were the strongest predictors. A model of prediction of further deliberate self-harm combining these significant individual variables produced a high positive predictive value (86%) but low sensitivity (28%). Predicting repeat self-harm in children is difficult, even with a comprehensive series of assessments over multiple time points, and we need to adapt services with this in mind. We propose a model of service provision which takes these findings into account.
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents an over 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.
Accuracy of Wind Prediction Methods in the California Sea Breeze
NASA Astrophysics Data System (ADS)
Sumers, B. D.; Dvorak, M. J.; Ten Hoeve, J. E.; Jacobson, M. Z.
2010-12-01
In this study, we investigate the accuracy of measure-correlate-predict (MCP) algorithms and log law/power law scaling using data from two tall towers in coastal environments. We find that MCP algorithms accurately predict sea breeze winds and that log law/power law scaling methods struggle to predict 50-meter wind speeds. MCP methods have received significant attention as the wind industry has grown and the ability to accurately characterize the wind resource has become valuable. These methods are used to produce longer-term wind speed records from short-term measurement campaigns. A correlation is developed between the “target site,” where the developer is interested in building wind turbines, and a “reference site,” where long-term wind data is available. Up to twenty years of prior wind speeds are then predicted. In this study, two existing MCP methods - linear regression and Mortimer’s method - are applied to predict 50-meter wind speeds at sites in the Salinas Valley and Redwood City, CA. The predictions are then verified with tall tower data. It is found that linear regression is poorly suited to MCP applications because the process produces inaccurate estimates of the cube of the wind speed at 50 meters. Meanwhile, Mortimer’s method, which bins data by direction and speed, is found to accurately predict the cube of the wind speed in both sea breeze and non-sea breeze conditions. We also find that log law and power law scaling are unstable predictors of wind speeds. While these methods produced accurate estimates of the average 50-meter wind speed at both sites, they predicted an average cube of the wind speed that was between 1.18 and 1.3 times the observed value. Inspection of the time-series error reveals increased error in the mid-afternoon of the summer. This suggests that the cold sea breeze may disrupt the vertical temperature profile, create a stable atmosphere, and violate the assumptions that allow log law scaling to work.
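The two baseline techniques compared above can be sketched briefly: power-law vertical extrapolation and the simplest linear-regression MCP variant. The 1/7 exponent and the example numbers are common textbook defaults, not values from the study, and Mortimer's direction/speed binning is not shown.

```python
import numpy as np

def power_law_extrapolate(v_ref, z_ref, z_target, alpha=1.0 / 7.0):
    """Extrapolate wind speed with the power law
    v(z) = v_ref * (z / z_ref)**alpha.  alpha = 1/7 is a common
    neutral-stability default; sea-breeze conditions can violate
    the assumptions behind it, as the abstract notes."""
    return v_ref * (z_target / z_ref) ** alpha

def mcp_linear_regression(v_reference, v_target):
    """Fit v_target ~ a * v_reference + b over the concurrent
    measurement period; the returned function then predicts
    target-site speeds from the long-term reference record."""
    a, b = np.polyfit(v_reference, v_target, 1)
    return lambda v: a * v + b

# Illustrative: scale a 10 m observation of 6.0 m/s up to 50 m.
v50 = power_law_extrapolate(v_ref=6.0, z_ref=10.0, z_target=50.0)
```

Note that even when such a fit predicts the mean speed well, errors in individual speeds are cubed when estimating power density, which is why the abstract evaluates the cube of the wind speed.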
A safe and accurate method to perform esthetic mandibular contouring surgery for Far Eastern Asians.
Hsieh, A M-C; Huon, L-K; Jiang, H-R; Liu, S Y-C
2017-05-01
A tapered mandibular contour is popular with Far Eastern Asians. This study describes a safe and accurate method of using preoperative virtual surgical planning (VSP) and an intraoperative ostectomy guide to maximize the esthetic outcomes of mandibular symmetry and tapering while mitigating injury to the inferior alveolar nerve (IAN). Twelve subjects with chief complaints of a wide and square lower face underwent this protocol from January to June 2015. VSP was used to confirm symmetry and preserve the IAN while maximizing the surgeon's ability to taper the lower face via mandibular inferior border ostectomy. The accuracy of this method was confirmed by superimposition of the perioperative computed tomography scans in all subjects. No subjects complained of prolonged paresthesia after 3 months. A safe and accurate protocol for achieving an esthetic lower face in indicated Far Eastern individuals is described.
FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.
El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant
2016-01-01
A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational efforts needed for generating PSSMs severely limit the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile and the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, as well as the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence-based predictors of protein-protein and protein
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.
Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L
2015-06-30
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction and cannot be relied on to advance the intake of drugs early enough to be effective and neutralize the pain. To address this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives.
Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens
Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn
2013-11-01
Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor controlling the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we went a step further with a blind test using a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide was predicted to point laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.
Accurate prediction of drug-induced liver injury using stem cell-derived populations.
Szkolnicka, Dagmara; Farnworth, Sarah L; Lucendo-Villarin, Baltasar; Storck, Christopher; Zhou, Wenli; Iredale, John P; Flint, Oliver; Hay, David C
2014-02-01
Despite major progress in the knowledge and management of human liver injury, there are millions of people suffering from chronic liver disease. Currently, the only cure for end-stage liver disease is orthotopic liver transplantation; however, this approach is severely limited by organ donation. Alternative approaches to restoring liver function have therefore been pursued, including the use of somatic and stem cell populations. Although such approaches are essential in developing scalable treatments, there is also an imperative to develop predictive human systems that more effectively study and/or prevent the onset of liver disease and decompensated organ function. We used a renewable human stem cell resource, from defined genetic backgrounds, and drove them through developmental intermediates to yield highly active, drug-inducible, and predictive human hepatocyte populations. Most importantly, stem cell-derived hepatocytes displayed equivalence to primary adult hepatocytes, following incubation with known hepatotoxins. In summary, we have developed a serum-free, scalable, and shippable cell-based model that faithfully predicts the potential for human liver injury. Such a resource has direct application in human modeling and, in the future, could play an important role in developing renewable cell-based therapies.
An accurate and practical method for inference of weak gravitational lensing from galaxy images
NASA Astrophysics Data System (ADS)
Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.
2016-07-01
We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.
Design of accurate predictors for DNA-binding sites in proteins using hybrid SVM-PSSM method.
Ho, Shinn-Ying; Yu, Fu-Chieh; Chang, Chia-Yun; Huang, Hui-Ling
2007-01-01
In this paper, we investigate the design of accurate predictors for DNA-binding sites in proteins from amino acid sequences. As a result, we propose a hybrid method using a support vector machine (SVM) in conjunction with evolutionary information of amino acid sequences, in terms of their position-specific scoring matrices (PSSMs), for the prediction of DNA-binding sites. Considering that the numbers of binding and non-binding residues in proteins are significantly unequal, two additional weights as well as SVM parameters are analyzed and adopted to maximize net prediction (NP, an average of sensitivity and specificity) accuracy. To evaluate the generalization ability of the proposed method SVM-PSSM, a DNA-binding dataset PDC-59 consisting of 59 protein chains with low sequence identity to each other is additionally established. The SVM-based method using the same six-fold cross-validation procedure and PSSM features has NP=80.15% for the training dataset PDNA-62 and NP=69.54% for the test dataset PDC-59, much better than the existing neural network-based method, improving the NP values for the training and test datasets by up to 13.45% and 16.53%, respectively. Simulation results reveal that SVM-PSSM performs well in predicting DNA-binding sites of novel proteins from amino acid sequences.
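The net prediction metric used above is the plain average of sensitivity and specificity, which is why it suits heavily imbalanced residue classes better than raw accuracy. A minimal sketch from confusion-matrix counts (the counts shown are hypothetical):

```python
def net_prediction(tp, fn, tn, fp):
    """Net prediction (NP): the average of sensitivity and
    specificity, robust to class imbalance between binding
    and non-binding residues."""
    sensitivity = tp / (tp + fn)   # fraction of binding residues found
    specificity = tn / (tn + fp)   # fraction of non-binders rejected
    return 0.5 * (sensitivity + specificity)

# Hypothetical counts for an imbalanced residue classifier:
# 100 binding residues (80 found), 1000 non-binding (700 rejected).
np_acc = net_prediction(tp=80, fn=20, tn=700, fp=300)
```

Here raw accuracy would be (80 + 700) / 1100 ≈ 0.71, dominated by the majority class, while NP weights both classes equally at 0.75.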
NASA Astrophysics Data System (ADS)
Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.
2016-02-01
The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally--a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process.
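The kinetic scheme described above can be sketched as a two-state master equation integrated codon by codon: folding is allowed only once the domain has emerged from the exit tunnel, and the folded-state probability relaxes toward equilibrium during each codon's dwell time. The emergence-codon gating and all rate values below are illustrative assumptions, not parameters from the paper.

```python
import math

def cotranslational_folding_curve(k_trans, k_fold, k_unfold, emerge_codon):
    """Probability that the domain is folded after each codon.

    k_trans      : per-codon translation rates (1/s)
    k_fold/k_unfold : bulk folding/unfolding rates of the domain (1/s)
    emerge_codon : first codon index at which folding is possible
    """
    p_folded = 0.0
    k_sum = k_fold + k_unfold
    p_eq = k_fold / k_sum                  # folding equilibrium
    curve = []
    for i, k in enumerate(k_trans):
        dwell = 1.0 / k                    # mean time spent on codon i
        if i >= emerge_codon:
            # Two-state relaxation toward equilibrium over the dwell:
            # dP/dt = k_fold*(1-P) - k_unfold*P
            p_folded = p_eq + (p_folded - p_eq) * math.exp(-k_sum * dwell)
        curve.append(p_folded)
    return curve

# Illustrative: 300 codons at 10 codons/s, domain emerges at codon 150.
curve = cotranslational_folding_curve([10.0] * 300, k_fold=2.0,
                                      k_unfold=0.02, emerge_codon=150)
```

Slowing translation at codons after emergence (e.g. via synonymous substitutions) lengthens the dwell times and pushes the curve toward complete co-translational folding, which is the effect the abstract describes.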
Plasma disruption prediction using machine learning methods: DIII-D
NASA Astrophysics Data System (ADS)
Lupin-Jimenez, L.; Kolemen, E.; Eldon, D.; Eidietis, N.
2016-10-01
Plasma disruption prediction is becoming more important with the development of larger tokamaks, due to the larger amount of thermal and magnetic energy that can be stored. By accurately predicting an impending disruption, the disruption's impact can be mitigated or, better, prevented. Recent approaches to disruption prediction have been through the implementation of machine learning methods, which characterize raw and processed diagnostic data to develop accurate prediction models. Using disruption trials from the DIII-D database, the effectiveness of different machine learning methods is characterized. The real-time disruption prediction approaches developed here focus on tearing and locking modes. The machine learning methods used include random forests, multilayer perceptrons, and traditional regression analysis. The algorithms are trained on data within short time frames, together with whether or not a disruption occurs within the time window after the end of the frame. Initial results from the machine learning algorithms will be presented. Work supported by US DOE under the Science Undergraduate Laboratory Internship (SULI) program, DE-FC02-04ER54698, and DE-AC02-09CH11466.
Barron, M R; Roch, A M; Waters, J A; Parikh, J A; DeWitt, J M; Al-Haddad, M A; Ceppa, E P; House, M G; Zyromski, N J; Nakeeb, A; Pitt, H A; Schmidt, C Max
2014-03-01
Main pancreatic duct (MPD) involvement is a well-demonstrated risk factor for malignancy in intraductal papillary mucinous neoplasm (IPMN). Preoperative radiographic determination of IPMN type is heavily relied upon in oncologic risk stratification. We hypothesized that radiographic assessment of MPD involvement in IPMN is an accurate predictor of pathological MPD involvement. Data regarding all patients undergoing resection for IPMN at a single academic institution between 1992 and 2012 were gathered prospectively. Retrospective analysis of imaging and pathologic data was undertaken. Preoperative classification of IPMN type was based on cross-sectional imaging (MRI/magnetic resonance cholangiopancreatography (MRCP) and/or CT). Three hundred sixty-two patients underwent resection for IPMN. Of these, 334 had complete data for analysis. Of 164 suspected branch duct (BD) IPMN, 34 (20.7%) demonstrated MPD involvement on final pathology. Of 170 patients with suspicion of MPD involvement, 50 (29.4%) demonstrated no MPD involvement. Of 34 patients with suspected BD-IPMN who were found to have MPD involvement on pathology, 10 (29.4%) had invasive carcinoma. Alternatively, 2/50 (4%) of the patients with suspected MPD involvement who ultimately had isolated BD-IPMN demonstrated invasive carcinoma. Preoperative radiographic IPMN type did not correlate with final pathology in 25% of the patients. In addition, risk of invasive carcinoma correlates with pathologic presence of MPD involvement.
Wang, Zhiheng; Yang, Qianqian; Li, Tonghua; Cong, Peisheng
2015-01-01
The precise prediction of protein intrinsically disordered regions, which play a crucial role in biological processes, is a necessary prerequisite to furthering the understanding of the principles and mechanisms of protein function. Here, we propose a novel predictor, DisoMCS, which is a more accurate predictor of protein intrinsically disordered regions. DisoMCS is based on an original multi-class conservative score (MCS) obtained by sequence-order/disorder alignment. Initially, near-disorder regions are defined on fragments located at the termini of ordered regions that connect to disordered regions. Then the multi-class conservative score is generated by sequence alignment against a known structure database and represented as order, near-disorder and disorder conservative scores. The MCS of each amino acid has three elements: order, near-disorder and disorder profiles. Finally, the MCS is exploited as features to identify disordered regions in sequences. DisoMCS utilizes a non-redundant data set as the training set, MCS and predicted secondary structure as features, and a conditional random field as the classification algorithm. In predicted near-disorder regions, a residue is classified as ordered or disordered according to the optimized decision threshold. DisoMCS was evaluated by cross-validation, large-scale prediction, independent tests and CASP (Critical Assessment of Techniques for Protein Structure Prediction) tests. All results confirmed that DisoMCS was very competitive in terms of prediction accuracy when compared with well-established publicly available disordered region predictors. The results also indicated that our approach is more accurate when a query has higher homology with the knowledge database. Availability: DisoMCS is available at http://cal.tongji.edu.cn/disorder/. PMID:26090958
A high-order accurate embedded boundary method for first order hyperbolic equations
NASA Astrophysics Data System (ADS)
Mattsson, Ken; Almquist, Martin
2017-04-01
A stable and high-order accurate embedded boundary method for first order hyperbolic equations is derived. Where the grid-boundaries and the physical boundaries do not coincide, high order interpolation is used. The boundary stencils are based on a summation-by-parts framework, and the boundary conditions are imposed by the SAT penalty method, which guarantees linear stability for one-dimensional problems. Second-, fourth-, and sixth-order finite difference schemes are considered. The resulting schemes are fully explicit. Accuracy and numerical stability of the proposed schemes are demonstrated for both linear and nonlinear hyperbolic systems in one and two spatial dimensions.
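A minimal sketch of the SBP-SAT machinery the abstract builds on: a second-order summation-by-parts first-derivative operator with a SAT penalty enforcing the inflow condition for scalar advection. The embedded-boundary interpolation the paper adds is not shown, and the grid size, penalty strength, and test problem are illustrative choices.

```python
import numpy as np

def sbp_sat_advection(n=201, t_final=0.5, cfl=0.5):
    """Solve u_t + u_x = 0 on [0,1] with a 2nd-order SBP operator
    and a SAT penalty imposing u(0,t) = g(t) at the inflow."""
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)

    # 2nd-order SBP first derivative: central interior, one-sided ends
    def D(u):
        du = np.empty_like(u)
        du[1:-1] = (u[2:] - u[:-2]) / (2 * h)
        du[0] = (u[1] - u[0]) / h
        du[-1] = (u[-1] - u[-2]) / h
        return du

    g = lambda t: np.sin(2 * np.pi * (0.0 - t))   # exact inflow data

    # SBP norm H = h*diag(1/2, 1, ..., 1, 1/2); the SAT term scales
    # the boundary mismatch by H^{-1} at node 0 (penalty sigma = 1,
    # which satisfies the sigma >= 1/2 energy-stability condition).
    def rhs(u, t):
        r = -D(u)
        r[0] -= (1.0 / (0.5 * h)) * (u[0] - g(t))
        return r

    u = np.sin(2 * np.pi * x)
    t, dt = 0.0, cfl * h
    while t < t_final - 1e-12:
        dt = min(dt, t_final - t)
        k1 = rhs(u, t)                             # classical RK4
        k2 = rhs(u + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = rhs(u + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = rhs(u + dt * k3, t + dt)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return x, u
```

Because the boundary condition enters weakly through the penalty rather than by overwriting the boundary value, the discrete energy estimate survives, which is the property the paper's higher-order schemes also rely on.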
NASA Astrophysics Data System (ADS)
Godin, T. J.; Haydock, Roger
1991-04-01
The Block Recursion Library, a collection of FORTRAN subroutines, calculates submatrices of the resolvent of a linear operator. The resolvent, in matrix theory, is a powerful tool for extracting information about solutions of linear systems. The routines use the block recursion method and achieve high accuracy for very large systems of coupled equations. This technique is a generalization of the scalar recursion method, an accurate technique for finding the local density of states. A sample program uses these routines to find the quantum mechanical transmittance of a randomly disordered two-dimensional cluster of atoms.
NASA Astrophysics Data System (ADS)
Ho, Kung-Chu; Su, Vin-Cent; Huang, Da-Yo; Lee, Ming-Lun; Chou, Nai-Kuan; Kuan, Chieh-Hsiung
2017-01-01
This paper reports an investigation of strong electrolytic solutions operated in the low frequency regime through an accurate electrical impedance method realized with a specific microfluidic device and high-resolution instruments. Experimental results show the improved repeatability and accuracy of the proposed impedance method. Moreover, all electrolytic solutions exhibit a so-called relaxation frequency at the peak value of dielectric loss, due to relaxation of the total polarization inside the device. The relaxation frequency of concentrated electrolytes is higher owing to the stronger total polarization arising from the higher conductivity (lower resistance) of the electrolytic solutions.
Wang, Xue-Yong; Liao, Cai-Li; Liu, Si-Qi; Liu, Chun-Sheng; Shao, Ai-Juan; Huang, Lu-Qi
2013-05-01
This paper puts forward a more accurate method for the identification of Chinese materia medica (CMM), the systematic identification of Chinese materia medica (SICMM), which may resolve difficulties in CMM identification that ordinary traditional methods cannot. Concepts, mechanisms and methods of SICMM were systematically introduced, and its feasibility was demonstrated by experiments. The establishment of SICMM will solve problems in the identification of CMM not only in phenotypic characters such as morphology, microstructure and chemical constituents, but will also further the discovery of the evolution and classification of species, subspecies and populations in medicinal plants. The establishment of SICMM will improve the development of CMM identification and open a more extensive space for study.
Hwang, Beomsoo; Jeon, Doyoung
2015-04-09
In exoskeletal robots, quantifying the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we estimate users' muscular effort using a joint torque sensor whose measurements include the dynamic effects of the human body, such as inertial, Coriolis, and gravitational torques, in addition to the torque produced by active muscular effort. It is therefore important to accurately separate the dynamic effects of the user's limb from the measured torque. The user's limb dynamics are formulated, and a convenient method for identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower-limb exoskeleton, EXOwheel, equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated on ten healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions.
Deng, Yan; Zhou, Bin; Xing, Chao; Zhang, Rong
2014-10-17
A novel multifrequency excitation (MFE) method is proposed to realize rapid and accurate dynamic testing of micromachined gyroscope chips. Compared with the traditional sweep-frequency excitation (SFE) method, the computational time for testing one chip under four modes at a 1-Hz frequency resolution and 600-Hz bandwidth was dramatically reduced from 10 min to 6 s. A multifrequency signal with an equal amplitude and initial linear-phase-difference distribution was generated to ensure test repeatability and accuracy. The current test system based on LabVIEW using the SFE method was modified to use the MFE method without any hardware changes. The experimental results verified that the MFE method can be an ideal solution for large-scale dynamic testing of gyroscope chips and gyroscopes.
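A minimal sketch of the excitation signal described above: equal-amplitude tones whose initial phases increase by a constant step, i.e. a linear phase-difference distribution. The actual phase schedule and tone set of the MFE test system are not given in the abstract, so the constant step used here is an assumption.

```python
import numpy as np

def multitone(freqs, fs, duration, phase_step=np.pi / 4):
    """Equal-amplitude multifrequency excitation signal.

    Tones share one amplitude and their initial phases increase by a
    constant step (a linear phase-difference distribution), so the
    peaks of the individual tones do not all coincide. The phase
    schedule is an illustrative assumption, not the paper's.
    """
    t = np.arange(int(fs * duration)) / fs
    phases = phase_step * np.arange(len(freqs))
    return sum(np.cos(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
```

Exciting all frequency bins of interest at once is what replaces the sweep: one FFT of the response yields the transfer function over the whole band in a single acquisition.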
NASA Astrophysics Data System (ADS)
Sudhakar, Y.; Moitinho de Almeida, J. P.; Wall, Wolfgang A.
2014-09-01
We present an accurate method for the numerical integration of polynomials over arbitrary polyhedra. Using the divergence theorem, the method transforms the domain integral into integrals evaluated over the facets of the polyhedra. The need for symbolic computation during this transformation is eliminated by using a one-dimensional Gauss quadrature rule. The facet integrals are computed with the help of quadratures available for triangles and quadrilaterals. Numerical examples, in which the proposed method is used to integrate the weak form of the Navier-Stokes equations in an embedded interface method (EIM), are presented. The results show that our method is as accurate and general as the most widely used volume-decomposition-based methods. Moreover, since the method involves neither volume decomposition nor symbolic computation, it is much easier to implement. The present method is also more efficient than other available integration methods based on the divergence theorem. Its efficiency is further compared with that of volume-decomposition-based methods and moment fitting methods. To our knowledge, this is the first article that compares both the accuracy and the computational efficiency of methods relying on volume decomposition and those based on the divergence theorem.
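The divergence-theorem reduction above can be illustrated in 2D, where Green's theorem turns an area integral of f into edge integrals of an antiderivative P (with ∂P/∂x = f), each evaluated by one-dimensional Gauss quadrature. This is a hypothetical sketch of the idea, not the authors' implementation.

```python
import numpy as np

def integrate_over_polygon(P, verts, order=4):
    """Integrate f over a polygon using Green's theorem.

    P must satisfy dP/dx = f; then  int_A f dA = oint P dy  for a
    counterclockwise boundary. Each edge integral is evaluated with a
    one-dimensional Gauss-Legendre rule, mirroring in 2D the
    facet-integral idea of the divergence-theorem method.
    """
    xi, w = np.polynomial.legendre.leggauss(order)
    total = 0.0
    n = len(verts)
    for i in range(n):
        (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % n]
        # map the reference interval [-1, 1] onto the edge
        xs = 0.5 * (x0 + x1) + 0.5 * (x1 - x0) * xi
        ys = 0.5 * (y0 + y1) + 0.5 * (y1 - y0) * xi
        dy_ds = 0.5 * (y1 - y0)  # dy/ds along the edge
        total += np.sum(w * P(xs, ys) * dy_ds)
    return total
```

For f(x, y) = x·y one may take P = x²y/2; over the unit square the edge integrals sum to exactly 1/4, the analytic value.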
Efficient Methods to Compute Genomic Predictions
Technology Transfer Automated Retrieval System (TEKTRAN)
Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...
Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.
2008-10-20
One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission
NASA Technical Reports Server (NTRS)
Imaoka, Atsushi; Kihara, Masami
1996-01-01
An accurate time transfer method is proposed that uses bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. The method is intended for digital telecommunication networks and yields a time synchronization accuracy of better than 1 ns over transmission lines of several tens of kilometers. It accurately measures the difference in delay between the two wavelength signals caused by the chromatic dispersion of the fiber, which limits conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this delay difference and then show that delay measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 micrometers and 1.55 micrometers along a 50 km fiber using the proposed method. Sub-nanosecond delay measurement using simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 micrometer range is also shown.
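The delay difference at the heart of the method follows the usual chromatic-dispersion relation Δτ = D·L·Δλ. A minimal sketch, assuming a dispersion coefficient D that is constant over the wavelength spacing (adequate for the 1 nm spacing case; the 1.31/1.55 µm pair would need the full dispersion curve):

```python
def dispersion_delay_ns(D_ps_per_nm_km, length_km, dlambda_nm):
    """Differential delay between two wavelengths in a fiber.

    delta_tau = D * L * delta_lambda, with D the chromatic dispersion
    coefficient in ps/(nm km). Assumes D is constant across the
    wavelength spacing. Returns the delay difference in nanoseconds.
    """
    return D_ps_per_nm_km * length_km * dlambda_nm / 1000.0  # ps -> ns
```

With a typical D ≈ 17 ps/(nm·km) for standard single-mode fiber near 1.55 µm, a 1 nm spacing over 100 km accumulates about 1.7 ns of differential delay, which is why this term must be measured and calibrated out to reach sub-nanosecond time transfer.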
Comparison of prediction performance using statistical postprocessing methods
NASA Astrophysics Data System (ADS)
Han, Keunhee; Choi, JunTae; Kim, Chansoo
2016-11-01
As the 2018 Winter Olympics are to be held in Pyeongchang, both general weather information for Pyeongchang and specific weather information for the region, which can affect game operation and athletic performance, are required. An ensemble prediction system has been applied to provide more accurate weather information, but it suffers from bias and dispersion errors due to the limitations and uncertainty of the model. In this study, homogeneous and non-homogeneous regression models as well as Bayesian model averaging (BMA) were used to reduce the bias and dispersion in the ensemble prediction and to provide probabilistic forecasts. Before applying the prediction methods, the reliability of the ensemble forecasts was tested using a rank histogram and a residual quantile-quantile plot comparing the ensemble forecasts with the corresponding verifications. The ensemble forecasts had a consistent positive bias, indicating over-forecasting, and were under-dispersed. To correct these biases, statistical post-processing methods were applied using fixed and sliding windows. The prediction skill of the methods was compared using the mean absolute error, root mean square error, continuous ranked probability score, and continuous ranked probability skill score. Under the fixed window, BMA exhibited better prediction skill than the other methods at most observation stations. Under the sliding window, on the other hand, homogeneous and non-homogeneous regression models with positive regression coefficients exhibited better prediction skill than BMA. In particular, the homogeneous regression model with positive regression coefficients exhibited the best prediction skill.
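One of the comparison metrics above, the continuous ranked probability score, can be computed for a raw or postprocessed ensemble with the standard kernel form. A small illustrative helper, not the authors' code:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Continuous ranked probability score of an ensemble forecast.

    Kernel form: CRPS = E|X - y| - 0.5 * E|X - X'|, with X, X' drawn
    from the ensemble and y the verifying observation. Lower is
    better; for a one-member ensemble it reduces to absolute error.
    """
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2
```

Averaging this score over stations and lead times gives the CRPS values on which the fixed-window versus sliding-window comparison rests; the skill score then normalizes against a reference forecast.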
Fast and Accurate Accessible Surface Area Prediction Without a Sequence Profile.
Faraggi, Eshel; Kouza, Maksim; Zhou, Yaoqi; Kloczkowski, Andrzej
2017-01-01
A fast accessible surface area (ASA) predictor is presented. In this new approach, no residue mutation profiles generated by multiple sequence alignments are used as inputs. Instead, we use only single-sequence information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence-alignment-based predictors and of comparable accuracy to them. Introducing the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is found to perform similarly well on so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for ASAquick are available from Research and Information Systems at http://mamiris.com and from the Battelle Center for Mathematical Medicine at http://mathmed.org.
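The global inputs mentioned above, single-residue and two-residue compositions of the chain, can be sketched as follows. The exact feature encoding used by ASAquick is not given in the abstract, so this representation is an assumption.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_features(seq):
    """Single-residue and two-residue compositions of a protein chain.

    Whole-chain features of the kind ASAquick uses in place of a
    multiple-sequence-alignment profile; the exact encoding in
    ASAquick is an assumption here.
    """
    n = len(seq)
    mono = {a: seq.count(a) / n for a in AMINO_ACIDS}
    pairs = [seq[i:i + 2] for i in range(n - 1)]
    di = {a + b: pairs.count(a + b) / len(pairs)
          for a, b in product(AMINO_ACIDS, repeat=2)}
    return mono, di
```

Because these features need only the query sequence itself, the whole pipeline avoids the database searches that dominate the runtime of profile-based predictors.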
Sequence features accurately predict genome-wide MeCP2 binding in vivo
Rube, H. Tomas; Lee, Wooje; Hejna, Miroslav; Chen, Huaiyang; Yasui, Dag H.; Hess, John F.; LaSalle, Janine M.; Song, Jun S.; Gong, Qizhi
2016-01-01
Methyl-CpG binding protein 2 (MeCP2) is critical for proper brain development and expressed at near-histone levels in neurons, but the mechanism of its genomic localization remains poorly understood. Using high-resolution MeCP2-binding data, we show that DNA sequence features alone can predict binding with 88% accuracy. Integrating MeCP2 binding and DNA methylation in a probabilistic graphical model, we demonstrate that previously reported genome-wide association with methylation is in part due to MeCP2's affinity to GC-rich chromatin, a result replicated using published data. Furthermore, MeCP2 co-localizes with nucleosomes. Finally, MeCP2 binding downstream of promoters correlates with increased expression in Mecp2-deficient neurons. PMID:27008915
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod-with-insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model, including representations of the metatarsals based on simple geometric shapes embedded within a contoured soft tissue block whose outer geometry was acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9), and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model could be produced in <1 h compared to >3 h for the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may provide a simulation approach with improved clinical utility; however, further validity testing across a range of therapeutic footwear types is required.
Resnic, F S; Ohno-Machado, L; Selwyn, A; Simon, D I; Popma, J J
2001-07-01
The objectives of this analysis were to develop and validate simplified risk score models for predicting the risk of major in-hospital complications after percutaneous coronary intervention (PCI) in the era of widespread stenting and use of glycoprotein IIb/IIIa antagonists. We then sought to compare the performance of these simplified models with those of full logistic regression and neural network models. From January 1, 1997 to December 31, 1999, data were collected on 4,264 consecutive interventional procedures at a single center. Risk score models were derived from multiple logistic regression models using the first 2,804 cases and then validated on the final 1,460 cases. The area under the receiver operating characteristic (ROC) curve for the risk score model that predicted death was 0.86 compared with 0.85 for the multiple logistic model and 0.83 for the neural network model (validation set). For the combined end points of death, myocardial infarction, or bypass surgery, the corresponding areas under the ROC curves were 0.74, 0.78, and 0.81, respectively. Previously identified risk factors were confirmed in this analysis. The use of stents was associated with a decreased risk of in-hospital complications. Thus, risk score models can accurately predict the risk of major in-hospital complications after PCI. Their discriminatory power is comparable to those of logistic models and neural network models. Accurate bedside risk stratification may be achieved with these simple models.
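The discriminatory power reported above is the area under the ROC curve, which for any scoring model (additive risk score, logistic regression, or neural network) can be computed with the rank (Mann-Whitney) statistic. An illustrative helper, not the study's code:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic.

    Equals the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative one (ties count half) --
    the quantity reported for the risk-score, logistic, and
    neural-network models in the abstract.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A bedside risk score is just such a scoring model: integer points summed per risk factor, then ranked against outcomes, which is why its AUC can be compared directly with that of the full regression.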
Accurate method for including solid-fluid boundary interactions in mesoscopic model fluids
Berkenbos, A.; Lowe, C.P.
2008-04-20
Particle models are attractive methods for simulating the dynamics of complex mesoscopic fluids. Many practical applications of this methodology involve flow through a solid geometry. As the system is modeled using particles whose positions move continuously in space, one might expect that implementing the correct stick boundary condition exactly at the solid-fluid interface is straightforward. After all, unlike discrete methods there is no mapping onto a grid to contend with. In this article we describe a method that, for axisymmetric flows, imposes both the no-slip condition and continuity of stress at the interface. We show that the new method then accurately reproduces correct hydrodynamic behavior right up to the location of the interface. As such, computed flow profiles are correct even using a relatively small number of particles to model the fluid.
Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.
Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M
2016-06-21
We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy.
Accurate Wind Characterization in Complex Terrain Using the Immersed Boundary Method
Lundquist, K A; Chow, F K; Lundquist, J K; Kosovic, B
2009-09-30
This paper describes an immersed boundary method (IBM) that facilitates the explicit resolution of complex terrain within the Weather Research and Forecasting (WRF) model. Two different interpolation methods, trilinear and inverse distance weighting, are used at the core of the IBM algorithm. Functional aspects of the algorithm's implementation and the accuracy of results are considered. Simulations of flow over a three-dimensional hill with shallow terrain slopes are performed both with WRF's native terrain-following coordinate and with the two IB methods. Comparisons of flow fields from the three simulations show excellent agreement, indicating that both IB methods produce accurate results. However, when ease of implementation is considered, inverse distance weighting is superior. Furthermore, inverse distance weighting is shown to be more adept at handling highly complex urban terrain, where the trilinear interpolation algorithm breaks down. This capability is demonstrated by using the inverse distance weighting core of the IBM to model atmospheric flow in downtown Oklahoma City.
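The inverse distance weighting at the core of the IBM algorithm can be sketched in its generic (Shepard) form. Stencil selection, cutoff radii, and other details of the actual WRF implementation are not modeled here.

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse distance weighted (Shepard) interpolation.

    Weights decay as 1/d^power, and a query that coincides with a
    data point returns that point's value exactly. This is the
    generic form of the interpolation used to reconstruct field
    values near the immersed boundary.
    """
    pts = np.asarray(points, dtype=float)
    vals = np.asarray(values, dtype=float)
    d = np.linalg.norm(pts - np.asarray(query, dtype=float), axis=1)
    if np.any(d < eps):
        return float(vals[d < eps][0])
    w = 1.0 / d ** power
    return float(np.sum(w * vals) / np.sum(w))
```

Unlike trilinear interpolation, this form needs no hexahedral stencil around the query point, which is one plausible reason it copes better with the irregular cut cells of urban terrain.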
Integrative subcellular proteomic analysis allows accurate prediction of human disease-causing genes
Zhao, Li; Chen, Yiyun; Bajaj, Amol Onkar; Eblimit, Aiden; Xu, Mingchu; Soens, Zachry T.; Wang, Feng; Ge, Zhongqi; Jung, Sung Yun; He, Feng; Li, Yumei; Wensel, Theodore G.; Qin, Jun; Chen, Rui
2016-01-01
Proteomic profiling on subcellular fractions provides invaluable information regarding both protein abundance and subcellular localization. When integrated with other data sets, it can greatly enhance our ability to predict gene function genome-wide. In this study, we performed a comprehensive proteomic analysis on the light-sensing compartment of photoreceptors called the outer segment (OS). By comparing with the protein profile obtained from the retina tissue depleted of OS, an enrichment score for each protein is calculated to quantify protein subcellular localization, and 84% accuracy is achieved compared with experimental data. By integrating the protein OS enrichment score, the protein abundance, and the retina transcriptome, the probability of a gene playing an essential function in photoreceptor cells is derived with high specificity and sensitivity. As a result, a list of genes that will likely result in human retinal disease when mutated was identified and validated by previous literature and/or animal model studies. Therefore, this new methodology demonstrates the synergy of combining subcellular fractionation proteomics with other omics data sets and is generally applicable to other tissues and diseases. PMID:26912414
Accurate prediction of the refractive index of polymers using first principles and data modeling
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes
Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
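The Lorentz-Lorenz relation underlying the model can be sketched directly: given a polarizability and a number density it yields the refractive index. The unit choice and the per-repeat-unit convention here are illustrative assumptions.

```python
import math

def refractive_index(alpha_cm3, n_density_per_cm3):
    """Refractive index from the Lorentz-Lorenz equation.

    (n^2 - 1) / (n^2 + 2) = (4*pi/3) * N * alpha, solved for n.
    alpha is the polarizability (cm^3) and N the number density
    (cm^-3); treating both per polymer repeat unit is an
    illustrative convention, not necessarily the authors'.
    """
    A = (4.0 * math.pi / 3.0) * n_density_per_cm3 * alpha_cm3
    if not 0.0 <= A < 1.0:
        raise ValueError("unphysical polarizability-density product")
    return math.sqrt((1.0 + 2.0 * A) / (1.0 - A))
```

The two inputs map onto the two halves of the pipeline described above: quantum chemistry supplies the polarizability in the polymer limit, and the machine-learned packing fraction supplies the number density.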
Towards Relaxing the Spherical Solar Radiation Pressure Model for Accurate Orbit Predictions
NASA Astrophysics Data System (ADS)
Lachut, M.; Bennett, J.
2016-09-01
The well-known cannonball model has been used ubiquitously to capture the effects of atmospheric drag and solar radiation pressure on satellites and/or space debris for decades. While it lends itself naturally to spherical objects, its validity in the case of non-spherical objects has been debated heavily for years throughout the space situational awareness community. One of the leading motivations to improve orbit predictions by relaxing the spherical assumption, is the ongoing demand for more robust and reliable conjunction assessments. In this study, we explore the orbit propagation of a flat plate in a near-GEO orbit under the influence of solar radiation pressure, using a Lambertian BRDF model. Consequently, this approach will account for the spin rate and orientation of the object, which is typically determined in practice using a light curve analysis. Here, simulations will be performed which systematically reduces the spin rate to demonstrate the point at which the spherical model no longer describes the orbital elements of the spinning plate. Further understanding of this threshold would provide insight into when a higher fidelity model should be used, thus resulting in improved orbit propagations. Therefore, the work presented here is of particular interest to organizations and researchers that maintain their own catalog, and/or perform conjunction analyses.
Towards Accurate Prediction of Turbulent, Three-Dimensional, Recirculating Flows with the NCC
NASA Technical Reports Server (NTRS)
Iannetti, A.; Tacina, R.; Jeng, S.-M.; Cai, J.
2001-01-01
The National Combustion Code (NCC) was used to calculate the steady state, nonreacting flow field of a prototype Lean Direct Injection (LDI) swirler. This configuration used nine groups of eight holes drilled at a thirty-five degree angle to induce swirl. These nine groups created swirl in the same direction, or a corotating pattern. The static pressure drop across the holes was fixed at approximately four percent. Computations were performed on one quarter of the geometry, because the geometry is considered rotationally periodic every ninety degrees. The final computational grid used was approximately 2.26 million tetrahedral cells, and a cubic nonlinear k-epsilon model was used to model turbulence. The NCC results were then compared to time-averaged Laser Doppler Velocimetry (LDV) data. The LDV measurements were performed on the full geometry, but only four ninths of the geometry was measured. One-, two-, and three-dimensional representations of both flow fields are presented. The NCC computations compare both qualitatively and quantitatively well to the LDV data, but differences exist downstream. The comparison is encouraging, and shows that NCC can be used for future injector design studies. To improve the flow prediction accuracy of turbulent, three-dimensional, recirculating flow fields with the NCC, recommendations are given.
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1999-01-01
The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made
A new class of accurate, mesh-free hydrodynamic simulation methods
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2015-06-01
We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume `overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO; this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require `artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced `particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of `grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize `grid noise'. These differences are important for a range of astrophysical problems.
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes’ principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of ‘unellipticity’ introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
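The sphere-in-cylinder relationship above reduces to a one-line biovolume formula. This is a sketch under the assumption V ≈ (2/3)·A·W·c for convex, approximately rotationally symmetric shapes, with the 'unellipticity' coefficient c left to the paper's procedure.

```python
import math

def biovolume(area, width, unellipticity=1.0):
    """Biovolume of a convex, approximately rotationally symmetric cell.

    Archimedes' sphere-in-cylinder result generalizes (as assumed
    here) to  V ~ (2/3) * A * W * c,  where A is the cross-sectional
    area measured in the 2D image, W the width perpendicular to the
    axis of rotation, and c the coefficient of 'unellipticity'
    (1 for elliptical profiles; its estimation is not shown here).
    """
    return (2.0 / 3.0) * area * width * unellipticity

# sanity check: a sphere of radius r appears as a circle of area pi*r^2
# and width 2r, and the formula recovers (4/3)*pi*r^3 exactly
r = 3.0
assert abs(biovolume(math.pi * r**2, 2 * r) - 4.0 / 3.0 * math.pi * r**3) < 1e-9
```

Both inputs, cross-sectional area and width, are exactly the linear and area measurements that conventional microscopy or flow-through imaging systems already provide, which is what makes the method easy to automate.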
NASA Astrophysics Data System (ADS)
Simmons, Daniel; Cools, Kristof; Sewell, Phillip
2016-11-01
Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.
Method for accurate optical alignment using diffraction rings from lenses with spherical aberration.
Gwynn, R B; Christensen, D A
1993-03-01
A useful alignment method is presented that exploits the closely spaced concentric fringes that form in the longitudinal spherical aberration region of positive spherical lenses imaging a point source. To align one or more elements to a common axis, spherical lenses are attached precisely to the elements and the resulting diffraction rings are made to coincide. We modeled the spherical aberration of the lenses by calculating the diffraction patterns of converging plane waves passing through concentric narrow annular apertures. The validity of the model is supported by experimental data and is determined to be accurate for a prototype penumbral imaging alignment system developed at Lawrence Livermore National Laboratory.
Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.
2008-01-01
This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.
A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns
NASA Astrophysics Data System (ADS)
Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae
2004-05-01
Due to the polarization effect of high-NA lithography, consideration of the resist effect in lithography simulation becomes increasingly important. In spite of this importance, many process engineers are reluctant to include the resist effect in lithography simulation, owing to the time-consuming procedure for extracting the required resist parameters and the uncertainty in measuring some of them. Weiss suggested a simplified development model that does not require the complex kinetic parameters. For device fabrication engineers, there is a simple and accurate parameter extraction and optimization method using the Weiss model. This method needs the refractive index, Dill's parameters, and development rate monitoring (DRM) data for parameter extraction. The parameters extracted using the referred sequence are not accurate, so we have to optimize the parameters to fit the critical dimension scanning electron microscopy (CD SEM) data of line and space patterns. Hence, FiRM from Sigma-C is utilized as the resist parameter-optimizing program. According to our study, the illumination shape, the aberration, and the pupil mesh points have a large effect on the accuracy of the resist parameters in optimization. To obtain the optimum parameters, we need to find the saturated mesh points in terms of normalized intensity log slope (NILS) prior to optimization. Simulation results using the parameters optimized by this method show good agreement with experiments for iso-dense bias, focus-exposure matrix data, and sub-80-nm device pattern simulation.
Joint iris boundary detection and fit: a real-time method for accurate pupil tracking.
Barbosa, Marconi; James, Andrew C
2014-08-01
A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature on independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which it is assumed that the pupil has a given geometric shape, has been largely overlooked. We present here a global method for the simultaneous finding and fitting of an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eyelids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground-truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and is shown to be robust.
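As a loose illustration of contour fitting with closed-form quantities (this is not the authors' joint objective, whose analytic gradient and Hessian are derived in the paper), here is the classic algebraic (Kåsa) circle fit, which reduces the whole problem to a single linear least-squares solve:

```python
import numpy as np

# Kasa algebraic circle fit: rewrite x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
# and solve the resulting linear system in the unknowns (2*cx, 2*cy, const).
def fit_circle(x, y):
    """Least-squares circle through points (x, y); returns (cx, cy, r)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    (cx2, cy2, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = cx2 / 2.0, cy2 / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# noisy points on a circle of radius 3 centred at (1, -2)
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x = 1 + 3 * np.cos(t) + 0.01 * rng.standard_normal(t.size)
y = -2 + 3 * np.sin(t) + 0.01 * rng.standard_normal(t.size)
cx, cy, r = fit_circle(x, y)
```

A pupil tracker in the spirit of the paper would replace the geometric residual with an image-based objective over the dark interior, but the closed-form flavor is the same.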
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage between the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in literature for standard test problems. In order to further improve accuracy especially near thin filaments, we suggest an artificial sharpening method, which is in a similar form with the conventional re-initialization method but utilizes sign of curvature instead of sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems
Grassi, Lorenzo; Väänänen, Sami P; Ristinmaa, Matti; Jurvelin, Jukka S; Isaksson, Hanna
2016-03-21
Subject-specific finite element models have been proposed as a tool to improve fracture risk assessment in individuals. A thorough laboratory validation against experimental data is required before introducing such models into clinical practice. Digital image correlation can provide the full-field strain distribution over the specimen surface during an in vitro test, instead of at a few pre-defined locations as with strain gauges. The aim of this study was to validate finite element models of human femora against experimental data from three cadaver femora, both in terms of femoral strength and of the full-field strain distribution collected with digital image correlation. The results showed a high accuracy between predicted and measured principal strains (R² = 0.93, RMSE = 10%, 1600 validated data points per specimen). Femoral strength was predicted using a rate dependent material model with specific strain limit values for yield and failure. This provided an accurate prediction (<2% error) for two of the three specimens. In the third specimen, an accidental change in the boundary conditions occurred during the experiment, which compromised the femoral strength validation. The achieved strain accuracy was comparable to that obtained in state-of-the-art studies which validated their prediction accuracy against 10-16 strain gauge measurements. Fracture force was accurately predicted, with the predicted failure location very close to the experimental fracture rim. Despite the low sample size and the single loading condition tested, the present combined numerical-experimental method showed that finite element models can predict femoral strength while providing a thorough description of the local bone mechanical response.
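The accuracy metrics quoted above (R² between predicted and measured strains, RMSE expressed as a percentage) can be reproduced mechanically. This sketch uses synthetic strain data in place of the digital image correlation measurements:

```python
import numpy as np

# Coefficient of determination and RMSE as a percentage of the measured range,
# the two figures used to summarize strain prediction accuracy.
def r_squared(measured, predicted):
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - np.mean(measured)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse_percent(measured, predicted):
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / (measured.max() - measured.min())

# synthetic stand-in for 1600 validated strain points per specimen
rng = np.random.default_rng(1)
measured = rng.uniform(-2000, 2000, 1600)          # microstrain
predicted = measured + rng.normal(0, 100, 1600)    # model with small error
r2 = r_squared(measured, predicted)
err = rmse_percent(measured, predicted)
```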
Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions
NASA Technical Reports Server (NTRS)
Pilon, Anthony R.; Lyrintzis, Anastasios S.
1997-01-01
The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. Additionally, the enhancements are generalized so that
NASA Astrophysics Data System (ADS)
Emmrich, Etienne; Thalhammer, Mechthild
2010-04-01
Stiffly accurate implicit Runge-Kutta methods are studied for the time discretisation of nonlinear first-order evolution equations. The equation is supposed to be governed by a time-dependent hemicontinuous operator that is (up to a shift) monotone and coercive, and fulfills a certain growth condition. It is proven that the piecewise constant as well as the piecewise linear interpolant of the time-discrete solution converges towards the exact weak solution, provided the Runge-Kutta method is consistent and satisfies a stability criterion that implies algebraic stability; examples are the Radau IIA and Lobatto IIIC methods. The convergence analysis is also extended to problems involving a strongly continuous perturbation of the monotone main part.
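A minimal concrete instance of such a scheme: the two-stage Radau IIA method (order 3, stiffly accurate since the last row of A equals b) applied to the scalar monotone problem u' = -u³. The fixed-point stage solver is an illustrative simplification; genuinely stiff problems would use Newton iteration.

```python
import numpy as np

# Butcher tableau of the two-stage Radau IIA method.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])   # equals the last row of A: "stiffly accurate"

def f(u):
    return -u ** 3          # monotone right-hand side

def radau_iia_step(u, dt):
    k = np.array([f(u), f(u)])                       # stage-slope initial guess
    for _ in range(50):                              # fixed-point iteration
        k = np.array([f(u + dt * (A[i] @ k)) for i in range(2)])
    return u + dt * (b @ k)

u, dt, t_end = 1.0, 0.05, 10.0
for _ in range(int(t_end / dt)):
    u = radau_iia_step(u, dt)

exact = 1 / np.sqrt(1 + 2 * t_end)   # closed-form solution of u' = -u^3, u(0) = 1
```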
Predicting recreational water quality advisories: A comparison of statistical methods
Brooks, Wesley R.; Corsi, Steven R.; Fienen, Michael N.; Carvin, Rebecca B.
2016-01-01
Epidemiological studies indicate that fecal indicator bacteria (FIB) in beach water are associated with illnesses among people having contact with the water. In order to mitigate public health impacts, many beaches are posted with an advisory when the concentration of FIB exceeds a beach action value. The most commonly used method of measuring FIB concentration takes 18–24 h before returning a result. In order to avoid the 24 h lag, it has become common to "nowcast" the FIB concentration using statistical regressions on environmental surrogate variables. Most commonly, nowcast models are estimated using ordinary least squares regression, but other regression methods from the statistical and machine learning literature are sometimes used. This study compares 14 regression methods across 7 Wisconsin beaches to identify which consistently produces the most accurate predictions. A random forest model is identified as the most accurate, followed by multiple regression fit using the adaptive LASSO.
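The nowcasting idea reduces to fitting a regression of FIB concentration on surrogate variables. The sketch below uses invented surrogates and plain least squares rather than the 14 methods compared in the study; the 235 CFU/100 mL action value is a common E. coli threshold used here purely for illustration.

```python
import numpy as np

# Toy nowcast: regress log10 FIB concentration on environmental surrogates.
rng = np.random.default_rng(42)
n = 500
turbidity = rng.uniform(0, 50, n)
rainfall_24h = rng.exponential(5, n)
water_temp = rng.uniform(10, 25, n)
log_fib = (1.0 + 0.04 * turbidity + 0.08 * rainfall_24h - 0.02 * water_temp
           + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), turbidity, rainfall_24h, water_temp])
train, test = slice(0, 400), slice(400, None)
beta, *_ = np.linalg.lstsq(X[train], log_fib[train], rcond=None)
pred = X[test] @ beta
rmse = np.sqrt(np.mean((pred - log_fib[test]) ** 2))

# advisory posted when the predicted concentration exceeds a beach action value
advisory = pred > np.log10(235)
```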
Zhang, Hui; Li, Wei; Xie, Yang; Wang, Wen-Jing; Li, Lin-Li; Yang, Sheng-Yong
2011-12-01
Drug-induced seizures are a serious adverse effect, and assessment of seizure risk usually takes place at a late stage of the drug discovery process, which does not allow sufficient time to reduce the risk by chemical modification. Thus, early identification of chemicals with seizure liability using rapid and cheaper approaches would be preferable. In this study, an optimal support vector machine (SVM) modeling method has been employed to develop a prediction model of the seizure liability of chemicals. A set of 680 compounds was used to train the SVM model. The established SVM model was then validated on an independent test set comprising 175 compounds, which gave a prediction accuracy of 86.9%. Further, the SVM-based prediction model of seizure liability was compared with various preclinical seizure assays, including the in vitro rat hippocampal brain slice, in vivo zebrafish larvae assay, mouse spontaneous seizure model, and mouse EEG model. In terms of predictability, the SVM model ranked just behind the mouse EEG model but better than the rat brain slice and zebrafish models. Nevertheless, the SVM model has considerable advantages over the preclinical seizure assays in speed and cost. In summary, the SVM-based prediction model of seizure liability established here offers potential as a cheaper, rapid, and accurate assessment of the seizure liability of drugs, which could be used in seizure risk assessment at the early stage of drug discovery. The prediction model is freely available online at http://www.sklb.scu.edu.cn/lab/yangsy/download/ADMET/seizure_pred.tar.
Stable and accurate difference methods for seismic wave propagation on locally refined meshes
NASA Astrophysics Data System (ADS)
Petersson, A.; Rodgers, A.; Nilsson, S.; Sjogreen, B.; McCandless, K.
2006-12-01
To overcome some of the shortcomings of previous numerical methods for the elastic wave equation subject to stress-free boundary conditions, we are incorporating recent results from numerical analysis to develop a new finite difference method which discretizes the governing equations in second order displacement formulation. The most challenging aspect of finite difference methods for time-dependent hyperbolic problems is clearly stability, and some previous methods are known to be unstable when the material has a compressional velocity exceeding about three times the shear velocity. Since the material properties in seismic applications often vary rapidly on the computational grid, the most straightforward approach for guaranteeing stability is through an energy estimate. For a hyperbolic system in second order formulation, the key to an energy estimate is a spatial discretization which is self-adjoint, i.e., corresponds to a symmetric or symmetrizable matrix. At the same time, we want the scheme to be efficient and fully explicit, so only local operations are necessary to evolve the solution in the interior of the domain as well as on the free-surface boundary. Furthermore, we want the solution to be accurate when the data is smooth. Using these specifications, we developed an explicit second order accurate discretization where stability is guaranteed through an energy estimate for all ratios Cp/Cs. An implementation of our finite difference method was used to simulate ground motions during the 1906 San Francisco earthquake on a uniform grid with grid sizes down to 100 meters, corresponding to over 4 billion grid points. These simulations were run on 1024 processors of one of the supercomputers at Lawrence Livermore National Lab. To reduce the computational requirements for these simulations, we are currently extending the numerical method to use a locally refined mesh where the mesh size approximately follows the velocity structure in the domain. Some
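The second order displacement formulation, in its simplest 1D constant-coefficient form, uses the self-adjoint three-point operator shown below; the CFL restriction plays the role that the energy estimate plays in the full method. This toy version omits the free-surface boundary and variable material properties.

```python
import numpy as np

# Explicit leapfrog scheme for u_tt = c^2 u_xx in second order displacement
# form, with the standard (self-adjoint) three-point spatial operator.
nx = 201
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
c = 1.0
dt = 0.9 * dx / c                           # CFL condition: c*dt/dx <= 1

u_prev = np.exp(-200.0 * (x - 0.5) ** 2)    # Gaussian displacement pulse
u = u_prev.copy()                           # zero initial velocity
for _ in range(400):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0            # fixed (Dirichlet) ends
    u_prev, u = u, u_next
```

With the CFL condition satisfied, the solution stays bounded for arbitrarily long runs; violating it makes the pulse blow up within a few dozen steps.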
Ballester, Pedro J; Schreyer, Adrian; Blundell, Tom L
2014-03-24
Predicting the binding affinities of large sets of diverse molecules against a range of macromolecular targets is an extremely challenging task. The scoring functions that attempt such computational prediction are essential for exploiting and analyzing the outputs of docking, which is in turn an important tool in problems such as structure-based drug design. Classical scoring functions assume a predetermined theory-inspired functional form for the relationship between the variables that describe an experimentally determined or modeled structure of a protein-ligand complex and its binding affinity. The inherent problem of this approach is in the difficulty of explicitly modeling the various contributions of intermolecular interactions to binding affinity. New scoring functions based on machine-learning regression models, which are able to exploit effectively much larger amounts of experimental data and circumvent the need for a predetermined functional form, have already been shown to outperform a broad range of state-of-the-art scoring functions in a widely used benchmark. Here, we investigate the impact of the chemical description of the complex on the predictive power of the resulting scoring function using a systematic battery of numerical experiments. The latter resulted in the most accurate scoring function to date on the benchmark. Strikingly, we also found that a more precise chemical description of the protein-ligand complex does not generally lead to a more accurate prediction of binding affinity. We discuss four factors that may contribute to this result: modeling assumptions, codependence of representation and regression, data restricted to the bound state, and conformational heterogeneity in data.
Hierarchical Ensemble Methods for Protein Function Prediction
2014-01-01
Protein function prediction is a complex multiclass multilabel classification problem, characterized by multiple issues such as the incompleteness of the available annotations, the integration of multiple sources of high dimensional biomolecular data, the unbalance of several functional classes, and the difficulty of univocally determining negative examples. Moreover, the hierarchical relationships between functional classes that characterize both the Gene Ontology and FunCat taxonomies motivate the development of hierarchy-aware prediction methods that showed significantly better performances than hierarchical-unaware “flat” prediction methods. In this paper, we provide a comprehensive review of hierarchical methods for protein function prediction based on ensembles of learning machines. According to this general approach, a separate learning machine is trained to learn a specific functional term and then the resulting predictions are assembled in a “consensus” ensemble decision, taking into account the hierarchical relationships between classes. The main hierarchical ensemble methods proposed in the literature are discussed in the context of existing computational methods for protein function prediction, highlighting their characteristics, advantages, and limitations. Open problems of this exciting research area of computational biology are finally considered, outlining novel perspectives for future research. PMID:25937954
High accuracy operon prediction method based on STRING database scores.
Taboada, Blanca; Verde, Cristina; Merino, Enrique
2010-07-01
We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the sets of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for training and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even in these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.
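The two-feature classifier idea can be sketched with a logistic regression trained by gradient descent, standing in for the paper's neural network; the gene-pair features and labels below are entirely synthetic.

```python
import numpy as np

# Synthetic gene pairs: intergenic distance (bp) and a STRING-like score in [0, 1].
rng = np.random.default_rng(7)
n = 1000
dist = rng.normal(loc=np.where(rng.random(n) < 0.5, 30.0, 200.0), scale=40.0)
string_score = np.clip(rng.random(n) + 0.4 * (dist < 100), 0.0, 1.0)
y = (dist < 100).astype(float)          # toy label: "same operon" pairs are close

X = np.column_stack([np.ones(n), dist / 100.0, string_score])
w = np.zeros(3)
for _ in range(2000):                   # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / n

pred = (1.0 / (1.0 + np.exp(-X @ w))) > 0.5
accuracy = np.mean(pred == (y == 1.0))
```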
Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan
2014-08-14
In this study, we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation for a range of biomolecules: 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and readily usable for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The heats of formation calculated herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods for the amino acids and with G3 results for the barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.
Hourihan, Kathleen L.; Benjamin, Aaron S.; Liu, Xiping
2012-01-01
The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness’s claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness. PMID:23162788
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the substrate type, enzymes, and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed.
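A hedged sketch of the fitting step: a Weibull-type saturation curve y(t) = y_max·(1 − exp(−(t/λ)^n)) fitted to synthetic saccharification data with scipy. The parameter names follow the abstract; the exact functional form is an assumption here, since the abstract does not write it out.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_curve(t, ymax, lam, n):
    """Weibull-type saturation curve; lam is the characteristic time."""
    return ymax * (1.0 - np.exp(-(t / lam) ** n))

t = np.linspace(0.5, 72.0, 40)                        # sampling times, h
rng = np.random.default_rng(3)
y = weibull_curve(t, 0.85, 12.0, 0.9) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(weibull_curve, t, y, p0=[1.0, 10.0, 1.0],
                    bounds=([0.0, 0.1, 0.1], [2.0, 100.0, 5.0]))
ymax_fit, lam_fit, n_fit = popt
```

The recovered `lam_fit` is the λ the abstract proposes as the single-number summary of saccharification performance.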
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model from the manufacturer's datasheet, for performing MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance, when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day and under realistic ambient conditions.
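The kind of simulation such a methodology enables can be sketched as a simplified single-diode panel model (made-up, datasheet-like parameters, not from any real panel) driven by the classic perturb-and-observe MPPT algorithm:

```python
import numpy as np

Isc, I0, nVt = 8.2, 1e-9, 1.9   # short-circuit current (A) and diode parameters

def panel_power(v):
    """Power of the simplified single-diode model at panel voltage v."""
    return v * (Isc - I0 * (np.exp(v / nVt) - 1.0))

# reference maximum power point by brute force over the P-V curve
v_grid = np.linspace(0.0, 40.0, 4000)
v_mpp = v_grid[np.argmax(panel_power(v_grid))]

# perturb and observe: step the operating voltage, reverse on a power drop
v, dv, p_prev = 20.0, 0.1, 0.0
for _ in range(500):
    p = panel_power(v)
    if p < p_prev:
        dv = -dv
    p_prev = p
    v += dv
```

In a full comparison, `Isc` and `I0` would be updated with irradiation and cell temperature, and several MPPT algorithms would be run against the same model.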
A stationary wavelet entropy-based clustering approach accurately predicts gene expression.
Nguyen, Nha; Vo, An; Choi, Inchan; Won, Kyoung-Jae
2015-03-01
Studying epigenetic landscapes is important for understanding the conditions for gene regulation. Clustering is a useful approach to studying epigenetic landscapes, grouping genes by their epigenetic conditions. However, classical clustering approaches, which often use a representative value of the signals in a fixed-size window, do not fully use the information written in the epigenetic landscapes. Clustering approaches that maximize the information in the epigenetic signals are necessary for a better understanding of gene regulatory environments. For effective clustering of multidimensional epigenetic signals, we developed a method called Dewer, which uses the entropy of the stationary wavelet of epigenetic signals inside enriched regions for gene clustering. Interestingly, the gene expression levels were highly correlated with the entropy levels of epigenetic signals. Dewer separates genes better than a window-based approach in an assessment using gene expression, and achieved a correlation coefficient above 0.9 without any training procedure. Our results show that changes in the epigenetic signals are useful for studying gene regulation.
Wills, John M; Mattsson, Ann E
2012-06-06
Brooks, Johansson, and Skriver, using the LMTO-ASA method and considerable insight, were able to explain many of the ground state properties of the actinides. In the many years since this work was done, electronic structure calculations of increasing sophistication have been applied to actinide elements and compounds, attempting to quantify the applicability of DFT to actinides and actinide compounds and to incorporate other methodologies (e.g., DMFT) into DFT calculations. Through these calculations, the limits of both the available density functionals and the ad hoc methodologies are starting to become clear. However, it has also become clear that the approximations used to incorporate relativity are not adequate to provide rigorous tests of the underlying equations of DFT, not to mention the ad hoc additions. In this talk, we describe the results of full-potential LMTO calculations for the elemental actinides, comparing results obtained with a full Dirac basis with those obtained from scalar-relativistic bases, with and without variational spin-orbit. This comparison shows that the scalar relativistic treatment of actinides does not have sufficient accuracy to provide a rigorous test of theory, and that variational spin-orbit introduces uncontrolled errors in the results of electronic structure calculations on actinide elements.
Correa da Rosa, Joel; Kim, Jaehwan; Tian, Suyan; Tomalin, Lewis E; Krueger, James G; Suárez-Fariñas, Mayte
2017-02-01
There is an "assessment gap" between the moment a patient's response to treatment is biologically determined and when a response can actually be determined clinically. Patients' biochemical profiles are a major determinant of clinical outcome for a given treatment. It is therefore feasible that molecular-level patient information could be used to decrease the assessment gap. Thanks to clinically accessible biopsy samples, high-quality molecular data for psoriasis patients are widely available. Psoriasis is therefore an excellent disease for testing the prospect of predicting treatment outcome from molecular data. Our study shows that gene-expression profiles of psoriasis skin lesions, taken in the first 4 weeks of treatment, can be used to accurately predict (>80% area under the receiver operating characteristic curve) the clinical endpoint at 12 weeks. This could decrease the psoriasis assessment gap by 2 months. We present two distinct prediction modes: a universal predictor, aimed at forecasting the efficacy of untested drugs, and specific predictors aimed at forecasting clinical response to treatment with four specific drugs: etanercept, ustekinumab, adalimumab, and methotrexate. We also develop two forms of prediction: one from detailed, platform-specific data and one from platform-independent, pathway-based data. We show that key biomarkers are associated with responses to drugs and doses and thus provide insight into the biology of pathogenesis reversion.
Novel method for accurate g measurements in electron-spin resonance
NASA Astrophysics Data System (ADS)
Stesmans, A.; Van Gorp, G.
1989-09-01
In high-accuracy work, electron-spin-resonance (ESR) g values are generally determined by calibrating against the accurately known proton nuclear magnetic resonance (NMR). For that method—based on leakage of microwave energy out of the ESR cavity—a convenient technique is presented to obtain accurate g values without the need for conscientious precalibration procedures or cumbersome constructions. As its main advantage, the method allows easy monitoring of the positioning of the ESR and NMR samples, which are mounted as close together as physically realizable at all times during their simultaneous resonances. Relative accuracies on g of ≊2×10⁻⁶ are easily achieved for ESR signals of peak-to-peak width ΔBpp≲0.3 G. The method has been applied to calibrate the g value of conduction electrons of small Li particles embedded in LiF—a frequently used g marker—resulting in g(LiF:Li) = 2.002293 ± 0.000002.
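The underlying relation is simple enough to sketch: g = h·f_ESR/(μ_B·B), with the field B inferred from the simultaneously measured proton NMR frequency. A minimal Python illustration (the frequencies below are invented for illustration, not values from the paper):

```python
# Sketch of NMR-calibrated g-value determination: g = h*f_ESR / (mu_B * B),
# with the field B inferred from the proton NMR frequency measured at the
# same time. Frequencies below are illustrative, not values from the paper.
H = 6.62607015e-34        # Planck constant, J s
MU_B = 9.2740100783e-24   # Bohr magneton, J/T
GAMMA_P = 42.577478518e6  # proton gyromagnetic ratio, Hz/T (free proton)

def g_factor(f_esr_hz, f_nmr_hz):
    b_tesla = f_nmr_hz / GAMMA_P          # field from the NMR resonance
    return H * f_esr_hz / (MU_B * b_tesla)

# X-band example: ~9.4 GHz ESR resonance in a field of about 0.335 T
g = g_factor(9.4e9, 42.577478518e6 * 0.335)
```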
Vent-Schmidt, Jens; Waltz, Xavier; Pichon, Aurélien; Hardy-Dessources, Marie-Dominique; Romana, Marc; Connes, Philippe
2015-01-01
The aim of this study was to test the accuracy of the viscosimetric method for estimating red blood cell (RBC) deformability. Thirty-three subjects were enrolled in this study: 6 healthy subjects (AA), 11 patients with sickle cell-hemoglobin C disease (SC) and 16 patients with sickle cell anemia (SS). Two methods were used to assess RBC deformability: 1) the indirect viscosimetric method and 2) ektacytometry. The indirect viscosimetric method was based on the Dintenfass equation, in which blood viscosity, plasma viscosity and hematocrit are measured and used to calculate an index of RBC rigidity (Tk index). The RBC deformability/rigidity of the three groups was compared using the two methods. The Tk index was not different between SS and SC patients, and the two groups had higher values than the AA group. When ektacytometry was used, RBC deformability was lower in the SS and SC groups than in the AA group, and the SS and SC groups differed from each other. Although the two measures of RBC deformability were correlated, the association was not very strong. Bland and Altman analysis demonstrated a bias of 3.25, suggesting a slight difference between the two methods. In addition, the limits of agreement represented 28% (>15%) of the mean values of RBC deformability, showing no interchangeability between the two methods. In conclusion, measuring RBC deformability by indirect viscosimetry is less accurate than by ektacytometry, which is considered the gold standard.
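The Tk rigidity index mentioned above can be sketched in a few lines. The formula below is the commonly quoted Dintenfass form, Tk = (η_r^0.4 − 1)/(η_r^0.4·Hct); check the paper for the exact form used, and note the input values are illustrative, not patient data from the study:

```python
# Dintenfass Tk rigidity index as commonly quoted (check the paper for the
# exact form used): Tk = (eta_r**0.4 - 1) / (eta_r**0.4 * Hct), with eta_r
# the ratio of whole-blood to plasma viscosity, Hct the hematocrit fraction.
def tk_index(blood_viscosity, plasma_viscosity, hct):
    eta_r04 = (blood_viscosity / plasma_viscosity) ** 0.4
    return (eta_r04 - 1.0) / (eta_r04 * hct)

# illustrative values (not patient data): 4.0 and 1.3 mPa.s, Hct 0.40
tk = tk_index(4.0, 1.3, 0.40)
```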
An adaptive, formally second order accurate version of the immersed boundary method
NASA Astrophysics Data System (ADS)
Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.
2007-04-01
Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves
NIBBS-search for fast and accurate prediction of phenotype-biased metabolic systems.
Schmidt, Matthew C; Rocha, Andrea M; Padmanabhan, Kanchana; Shpanskaya, Yekaterina; Banfield, Jill; Scott, Kathleen; Mihelcic, James R; Samatova, Nagiza F
2012-01-01
Understanding of genotype-phenotype associations is important not only for furthering our knowledge on internal cellular processes, but also essential for providing the foundation necessary for genetic engineering of microorganisms for industrial use (e.g., production of bioenergy or biofuels). However, genotype-phenotype associations alone do not provide enough information to alter an organism's genome to either suppress or exhibit a phenotype. It is important to look at the phenotype-related genes in the context of the genome-scale network to understand how the genes interact with other genes in the organism. Identification of metabolic subsystems involved in the expression of the phenotype is one way of placing the phenotype-related genes in the context of the entire network. A metabolic system refers to a metabolic network subgraph; nodes are compounds and edge labels are the enzymes that catalyze the reactions. The metabolic subsystem could be part of a single metabolic pathway or span parts of multiple pathways. Arguably, comparative genome-scale metabolic network analysis is a promising strategy to identify these phenotype-related metabolic subsystems. Network Instance-Based Biased Subgraph Search (NIBBS) is a graph-theoretic method for genome-scale metabolic network comparative analysis that can identify metabolic systems that are statistically biased toward phenotype-expressing organismal networks. We set up experiments with target phenotypes like hydrogen production, TCA expression, and acid-tolerance. We show via extensive literature search that some of the resulting metabolic subsystems are indeed phenotype-related and formulate hypotheses for other systems in terms of their role in phenotype expression. NIBBS is also orders of magnitude faster than MULE, one of the most efficient maximal frequent subgraph mining algorithms that could be adjusted for this problem. Also, the set of phenotype-biased metabolic systems output by NIBBS comes very close to
Gupta, Divya; Kagemann, Larry; Schuman, Joel S.; SundarRaj, Nirmala
2012-01-01
Purpose. This study explored the efficacy of optical coherence tomography (OCT) as a high-resolution, noncontact method for imaging the palisades of Vogt by correlating OCT and confocal microscopy images. Methods. Human limbal rims were acquired and imaged with OCT and confocal microscopy. The area of the epithelial basement membrane in each of these sets was digitally reconstructed, and the models were compared. Results. OCT identified the palisades within the limbus and exhibited excellent structural correlation with immunostained tissue imaged by confocal microscopy. Conclusions. OCT successfully identified the limbal palisades of Vogt that constitute the corneal epithelial stem cell niche. These findings offer the exciting potential to characterize the architecture of the palisades in vivo, to harvest stem cells for transplantation more accurately, to track palisade structure for better diagnosis, follow-up and staging of treatment, and to assess and intervene in the progression of stem cell depletion by monitoring changes in the structure of the palisades. PMID:22266521
NASA Technical Reports Server (NTRS)
Yungster, Shaye; Radhakrishnan, Krishnan
1994-01-01
A new fully implicit, time accurate algorithm suitable for chemically reacting, viscous flows in the transonic-to-hypersonic regime is described. The method is based on a class of Total Variation Diminishing (TVD) schemes and uses successive Gauss-Seidel relaxation sweeps. The inversion of large matrices is avoided by partitioning the system into reacting and nonreacting parts, while still maintaining a fully coupled interaction. As a result, the matrices that have to be inverted are of the same size as those obtained with the commonly used point implicit methods. In this paper we illustrate the applicability of the new algorithm to hypervelocity unsteady combustion applications. We present a series of numerical simulations of the periodic combustion instabilities observed in ballistic-range experiments of blunt projectiles flying at subdetonative speeds through hydrogen-air mixtures. The computed frequencies of oscillation are in excellent agreement with experimental data.
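As a side illustration of the relaxation strategy named above, here is the textbook Gauss-Seidel sweep for a generic linear system (a minimal sketch, not the paper's coupled implicit TVD solver):

```python
import numpy as np

# Textbook Gauss-Seidel relaxation sweeps for A x = b (the paper's solver
# applies such sweeps to a far larger coupled implicit system).
def gauss_seidel(A, b, sweeps=50):
    x = np.zeros(len(b))
    for _ in range(sweeps):
        for i in range(len(b)):
            # use already-updated components of x within the same sweep
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# small diagonally dominant test system
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```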
Troeltzsch, Matthias; Liedtke, Jan; Troeltzsch, Volker; Frankenberger, Roland; Steiner, Timm; Troeltzsch, Markus
2012-10-01
Odontomas account for the largest fraction of odontogenic tumors and are frequent causes of tooth impaction. A case of a 13-year-old female patient with an odontoma-associated impaction of a mandibular molar is presented with a review of the literature. Preoperative planning involved simple and convenient methods such as clinical examination and panoramic radiography, which led to a diagnosis of complex odontoma and warranted surgical removal. The clinical diagnosis was confirmed histologically. Multidisciplinary consultation may enable the clinician to find the accurate diagnosis and appropriate therapy based on the clinical and radiographic appearance. Modern radiologic methods such as cone-beam computed tomography or computed tomography should be applied only for special cases, to decrease radiation.
Methods for Predicting Submersible Hydrodynamic Characteristics
1978-07-01
TO PREDICT COMPLETE CONFIGURATION CHARACTERISTICS In this section the previously developed methods are combined to determine... characteristics of individual vehicle components (bodies, tails), and with their mutual interactions when combined into complete configurations. Each method is... approach used successfully in missile aerodynamics, a set of models was built and tested to obtain systematic data over relevant ranges of geometry
NASA Astrophysics Data System (ADS)
Schiavon, Ricardo P.
2007-07-01
We present a new set of model predictions for 16 Lick absorption line indices from Hδ through Fe5335 and UBV colors for single stellar populations with ages ranging between 1 and 15 Gyr, [Fe/H] ranging from -1.3 to +0.3, and variable abundance ratios. The models are based on accurate stellar parameters for the Jones library stars and a new set of fitting functions describing the behavior of line indices as a function of effective temperature, surface gravity, and iron abundance. The abundances of several key elements in the library stars have been obtained from the literature in order to characterize the abundance pattern of the stellar library, thus allowing us to produce model predictions for any set of abundance ratios desired. We develop a method to estimate mean ages and abundances of iron, carbon, nitrogen, magnesium, and calcium that explores the sensitivity of the various indices modeled to those parameters. The models are compared to high-S/N data for Galactic clusters spanning the range of ages, metallicities, and abundance patterns of interest. Essentially all line indices are matched when the known cluster parameters are adopted as input. Comparing the models to high-quality data for galaxies in the nearby universe, we reproduce previous results regarding the enhancement of light elements and the spread in the mean luminosity-weighted ages of early-type galaxies. When the results from the analysis of blue and red indices are contrasted, we find good consistency in the [Fe/H] that is inferred from different Fe indices. Applying our method to estimate mean ages and abundances from stacked SDSS spectra of early-type galaxies brighter than L*, we find mean luminosity-weighted ages of the order of ~8 Gyr and iron abundances slightly below solar. Abundance ratios, [X/Fe], tend to be higher than solar and are positively correlated with galaxy luminosity. Of all elements, nitrogen is the most strongly correlated with galaxy luminosity, which seems to indicate
Prediction methods of spudcan penetration for jack-up units
NASA Astrophysics Data System (ADS)
Zhang, Ai-xia; Duan, Meng-lan; Li, Hai-ming; Zhao, Jun; Wang, Jian-jun
2012-12-01
Jack-up units are used extensively and successfully in drilling engineering around the world, and their safety and efficiency attract increasing attention in both research and engineering practice. An accurate prediction of the spudcan penetration depth is instrumental in deciding whether a jack-up unit can feasibly operate at a site. Predicting too large a penetration depth may lead to hesitation over, or even rejection of, a site due to potential difficulties in the subsequent extraction process; the same is true of predicting too small a depth, due to possible instability during operation. However, a deviation between predicted results and final field data usually exists, especially when strong-over-soft soil is included in the strata. The ultimate decision then often depends to a great extent on practical experience rather than on the predictions given by the guideline. This is somewhat risky, but there is frequently no alternative. Therefore, a feasible method for predicting spudcan penetration depth, especially in strata with a strong-over-soft soil profile, is urgently needed by the jack-up industry. In view of this, a comprehensive investigation of methods for predicting spudcan penetration has been carried out. Predictive methods for spudcan penetration depth are proposed for different types of soil profiles, and corresponding experiments were conducted to validate these methods. In addition, to further verify the feasibility of the proposed methods, a practical engineering case from the South China Sea is presented, and the corresponding numerical and experimental results are discussed.
ESG: extended similarity group method for automated protein function prediction
Chitale, Meghana; Hawkins, Troy; Park, Changsoon; Kihara, Daisuke
2009-01-01
Motivation: Importance of accurate automatic protein function prediction is ever increasing in the face of a large number of newly sequenced genomes and proteomics data that are awaiting biological interpretation. Conventional methods have focused on high sequence similarity-based annotation transfer, which relies on the concept of homology. However, many cases have been reported in which simple transfer of function from the top hits of a homology search causes erroneous annotation. New methods are required to handle sequence similarity in a more robust way, combining signals from strongly and weakly similar proteins to effectively predict function for unknown proteins with high reliability. Results: We present the extended similarity group (ESG) method, which performs iterative sequence database searches and annotates a query sequence with Gene Ontology terms. Each annotation is assigned a probability based on its relative similarity score with the multiple-level neighbors in the protein similarity graph. We depict how the statistical framework of ESG improves the prediction accuracy by iteratively taking into account the neighborhood of the query protein in the sequence similarity space. ESG outperforms conventional PSI-BLAST and the protein function prediction (PFP) algorithm. It is found that the iterative search is effective in capturing multiple domains in a query protein, enabling accurate prediction of several functions that originate from different domains. Availability: ESG web server is available for automated protein function prediction at http://dragon.bio.purdue.edu/ESG/ Contact: cspark@cau.ac.kr; dkihara@purdue.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19435743
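The flavor of the multiple-level scoring can be caricatured as follows. This toy is not the authors' algorithm or probability model; it only illustrates how annotations from direct hits and from each hit's own neighbors might be combined with similarity-normalized weights:

```python
# Toy two-level annotation scoring in the spirit of ESG (NOT the authors'
# algorithm or exact probability model). level1: (score, GO terms) of the
# query's direct hits; level2[i]: hits of the i-th level-1 protein.
def esg_like_scores(level1, level2):
    def normalize(hits):
        total = sum(s for s, _ in hits) or 1.0
        return [(s / total, terms) for s, terms in hits]

    probs = {}
    for (w1, terms1), nbrs in zip(normalize(level1), level2):
        contrib = {t: 1.0 for t in terms1}      # the hit's own annotation
        for w2, terms2 in normalize(nbrs):      # refined by its neighborhood
            for t in terms2:
                contrib[t] = contrib.get(t, 0.0) + w2
        for t, c in contrib.items():
            probs[t] = probs.get(t, 0.0) + w1 * c / 2.0
    return probs

# two direct hits; the first has one neighbor, the second has two
probs = esg_like_scores(
    [(10.0, {"GO:A"}), (5.0, {"GO:A", "GO:B"})],
    [[(4.0, {"GO:A"})], [(2.0, {"GO:B"}), (2.0, {"GO:C"})]])
```

A term supported at both levels by the strongest hits ends up with the highest probability-like score.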
NASA Astrophysics Data System (ADS)
Stukel, Michael R.; Landry, Michael R.; Ohman, Mark D.; Goericke, Ralf; Samo, Ty; Benitez-Nelson, Claudia R.
2012-03-01
Despite the increasing use of linear inverse modeling techniques to elucidate fluxes in undersampled marine ecosystems, the accuracy with which they estimate food web flows has not been resolved. New Markov Chain Monte Carlo (MCMC) solution methods have also called into question the biases of the commonly used L2 minimum norm (L2MN) solution technique. Here, we test the abilities of MCMC and L2MN methods to recover field-measured ecosystem rates that are sequentially excluded from the model input. For data, we use experimental measurements from process cruises of the California Current Ecosystem (CCE-LTER) Program that include rate estimates of phytoplankton and bacterial production, micro- and mesozooplankton grazing, and carbon export from eight study sites varying from rich coastal upwelling to offshore oligotrophic conditions. Both the MCMC and L2MN methods predicted well-constrained rates of protozoan and mesozooplankton grazing with reasonable accuracy, but the MCMC method overestimated primary production. The MCMC method more accurately predicted the poorly constrained rate of vertical carbon export than the L2MN method, which consistently overestimated export. Results involving DOC and bacterial production were equivocal. Overall, when primary production is provided as model input, the MCMC method gives a robust depiction of ecosystem processes. Uncertainty in inverse ecosystem models is large and arises primarily from solution under-determinacy. We thus suggest that experimental programs focusing on food web fluxes expand the range of experimental measurements to include the nature and fate of detrital pools, which play large roles in the model.
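For readers unfamiliar with the L2MN idea, a toy version: among all flow vectors consistent with an underdetermined set of mass-balance equations, take the one of minimum Euclidean norm, which the pseudoinverse delivers directly. Real inverse ecosystem models add inequality constraints (e.g., non-negative flows) that this sketch omits:

```python
import numpy as np

# Toy L2 minimum-norm (L2MN) solution of an underdetermined balance system
# A x = b: the pseudoinverse picks the feasible flow vector of smallest
# Euclidean norm. Real inverse ecosystem models add inequality constraints
# (e.g., non-negative flows) that this sketch omits.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])    # two balance equations, three unknown flows
b = np.array([3.0, 2.0])

x_mn = np.linalg.pinv(A) @ b       # minimum-norm solution
```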
A highly accurate method for the determination of mass and center of mass of a spacecraft
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Trubert, M. R.; Egwuatu, A.
1978-01-01
An extremely accurate method for the measurement of mass and the lateral center of mass of a spacecraft has been developed. The method was needed to meet a Voyager mission requirement that limited the uncertainty in the knowledge of the lateral center of mass of the spacecraft system, weighing 750 kg, to less than 1.0 mm (0.04 in.). The method consists of using three load cells symmetrically located 120 deg apart on a turntable with respect to the vertical axis of the spacecraft and making six measurements for each load cell. These six measurements are taken by cyclic rotations of the load cell turntable and of the spacecraft about the vertical axis of the measurement fixture. This method eliminates all alignment, leveling, and load cell calibration errors for the lateral center of mass determination, and permits a statistical best fit of the measurement data. An associated data reduction computer program called MASCM has been written to implement this method and has been used for the Voyager spacecraft.
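The basic moment balance behind the lateral center-of-mass computation can be sketched as below. The geometry (three cells at 0°, 120°, 240° on a known radius) follows the description above, but the six-measurement rotation scheme that cancels alignment, leveling, and calibration errors is omitted:

```python
import math

# Lateral center of mass from three load cells at 0, 120 and 240 degrees on
# a turntable of radius R: the CM is the force-weighted mean of the cell
# positions. The error-cancelling cyclic-rotation averaging is omitted.
def lateral_cm(forces, radius):
    angles = (0.0, 120.0, 240.0)
    total = sum(forces)
    x = sum(f * radius * math.cos(math.radians(a))
            for f, a in zip(forces, angles)) / total
    y = sum(f * radius * math.sin(math.radians(a))
            for f, a in zip(forces, angles)) / total
    return x, y

# equal readings place the center of mass at the geometric center
x_cm, y_cm = lateral_cm((2452.5, 2452.5, 2452.5), radius=1.0)
```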
Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations
Bao, Weizhu (bao@math.nus.edu.sg); Yang, Li (yangli@nus.edu.sg)
2007-08-10
In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are based on: (i) the application of a time-splitting spectral discretization for a Schroedinger-type equation in KGS (ii) the utilization of Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS (iii) the adoption of solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals or applying Crank-Nicolson/leap-frog for linear/nonlinear terms for time derivatives. The numerical methods are either explicit or implicit but can be solved explicitly; they are unconditionally stable and of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as that in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as dynamics of a 2D problem in KGS.
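The time-splitting spectral building block for the Schroedinger part can be illustrated on the free 1D equation i·u_t = −½·u_xx, for which each Fourier mode evolves by an exact phase factor (a minimal sketch, not the authors' KGS code):

```python
import numpy as np

# Spectral treatment of the free 1D Schroedinger equation i u_t = -0.5 u_xx
# on a periodic domain: every Fourier mode is advanced by its exact phase
# factor, so a plane wave is propagated exactly (up to roundoff).
n, L, dt, steps = 256, 2.0 * np.pi, 1e-3, 100
x = np.linspace(0.0, L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers for L = 2*pi
u = np.exp(1j * x)                              # plane-wave initial data

for _ in range(steps):
    u = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(u))

# exact solution of the free equation: exp(i*(x - 0.5*t)) for wavenumber 1
t = dt * steps
err = np.max(np.abs(u - np.exp(1j * (x - 0.5 * t))))
```

The same phase-factor step, combined with a pointwise potential/nonlinearity step, is what time-splitting spectral schemes alternate.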
An accurate and efficient bayesian method for automatic segmentation of brain MRI.
Marroquin, J L; Vemuri, B C; Botello, S; Calderon, F; Fernandez-Bouzas, A
2002-08-01
Automatic three-dimensional (3-D) segmentation of the brain from magnetic resonance (MR) scans is a challenging problem that has received an enormous amount of attention lately. Of the techniques reported in the literature, very few are fully automatic. In this paper, we present an efficient and accurate, fully automatic 3-D segmentation procedure for brain MR scans. It has several salient features; namely, the following. 1) Instead of a single multiplicative bias field that affects all tissue intensities, separate parametric smooth models are used for the intensity of each class. 2) A brain atlas is used in conjunction with a robust registration procedure to find a nonrigid transformation that maps the standard brain to the specimen to be segmented. This transformation is then used to: segment the brain from nonbrain tissue; compute prior probabilities for each class at each voxel location and find an appropriate automatic initialization. 3) Finally, a novel algorithm is presented which is a variant of the expectation-maximization procedure, that incorporates a fast and accurate way to find optimal segmentations, given the intensity models along with the spatial coherence assumption. Experimental results with both synthetic and real data are included, as well as comparisons of the performance of our algorithm with that of other published methods.
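A stripped-down analogue of the intensity-model estimation, a plain two-class EM on 1D intensities with no bias field, atlas prior, or spatial coherence term, looks like this (synthetic data and illustrative parameters, not the paper's algorithm):

```python
import numpy as np

# Two-class EM on synthetic 1D intensities: a stripped-down analogue of the
# tissue-intensity estimation (no bias field, atlas prior, or spatial term).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(30.0, 4.0, 500), rng.normal(80.0, 6.0, 500)])

mu = np.array([20.0, 90.0])        # deliberately poor initial means
sig = np.array([10.0, 10.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibilities (the Gaussian constant cancels in the ratio)
    lik = pi * np.exp(-0.5 * ((data[:, None] - mu) / sig) ** 2) / sig
    r = lik / lik.sum(axis=1, keepdims=True)
    # M-step: update means, spreads, and mixing proportions
    nk = r.sum(axis=0)
    mu = (r * data[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(data)
```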
NASA Astrophysics Data System (ADS)
He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu
2014-11-01
Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors each have their own pros and cons, and no single sensor can handle complex inspection tasks accurately and effectively on its own. The prevailing solution is to integrate multiple sensors and take advantage of their individual strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features for which the ICP registration method becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system that integrates different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, so that the sensors can be optimally moved to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation achieves a rough alignment of the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. Experiments on the measurement of a blade, in which several sampled patches are merged into one point cloud, verify the performance of the proposed method.
Random vibration ESS adequacy prediction method
NASA Astrophysics Data System (ADS)
Lambert, Ronald G.
Closed form analytical expressions have been derived and are used as part of the proposed method to quantitatively predict the adequacy of the random vibration portion of an Environmental Stress Screen (ESS) to meet its main objective for screening typical avionics electronic assemblies for workmanship defects without consuming excessive useful life. This method is limited to fatigue related defects (including initial damage/Fracture Mechanics effects) and requires defect fatigue and service environment parameter values. Examples are given to illustrate the method.
Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.
2016-01-01
The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally—a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592
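The kinetic idea can be caricatured with a toy calculation: while each codon is translated, a domain that has already emerged from the ribosome relaxes toward its folding equilibrium at rate k_f + k_u. All rates and the emergence codon below are invented for illustration; the authors' model is more detailed:

```python
import math

# Toy co-translational folding curve (illustrative rates, NOT the paper's
# model): while codon i is translated (mean dwell 1/k_trans[i]), an emerged
# domain relaxes exactly toward its equilibrium folded fraction kf/(kf+ku).
def folding_curve(kf, ku, k_trans, first_foldable):
    p, curve = 0.0, []
    for i, kt in enumerate(k_trans):
        if i >= first_foldable:                # domain fully outside the tunnel
            k, p_eq = kf + ku, kf / (kf + ku)
            p = p_eq + (p - p_eq) * math.exp(-k / kt)  # relax over one dwell
        curve.append(p)
    return curve

# 50 codons at a uniform translation rate; domain emerges at codon 30
curve = folding_curve(kf=2.0, ku=0.5, k_trans=[10.0] * 50, first_foldable=30)
```

Slowing the rates after the emergence point (as synonymous substitutions can) raises the folded fraction reached before release, which is the switch the abstract describes.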
Swamidass, S. Joshua; Azencott, Chloé-Agathe; Lin, Ting-Wan; Gramajo, Hugo; Tsai, Sheryl; Baldi, Pierre
2009-01-01
Given activity training data from High-Throughput Screening (HTS) experiments, virtual High-Throughput Screening (vHTS) methods aim to predict in silico the activity of untested chemicals. We present a novel method, the Influence Relevance Voter (IRV), specifically tailored for the vHTS task. The IRV is a low-parameter neural network which refines a k-nearest neighbor classifier by non-linearly combining the influences of a chemical's neighbors in the training set. Influences are decomposed, also non-linearly, into a relevance component and a vote component. The IRV is benchmarked using the data and rules of two large, open competitions, and its performance compared to the performance of other participating methods, as well as of an in-house Support Vector Machine (SVM) method. On these benchmark datasets, IRV achieves state-of-the-art results, comparable to the SVM in one case, and significantly better than the SVM in the other, retrieving three times as many actives in the top 1% of its prediction-sorted list. The IRV presents several other important advantages over SVMs and other methods: (1) the output predictions have probabilistic semantics; (2) the underlying inferences are interpretable; (3) the training time is very short, on the order of minutes even for very large data sets; (4) the risk of overfitting is minimal, due to the small number of free parameters; and (5) additional information can easily be incorporated into the IRV architecture. Combined with its performance, these qualities make the IRV particularly well suited for vHTS. PMID:19391629
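A schematic of the influence-decomposition idea (not the authors' trained network): each of the k nearest neighbors contributes a signed vote weighted by a relevance derived from its similarity; here the relevance is a fixed logistic function rather than learned parameters:

```python
import math

# Schematic IRV-style scorer (NOT the trained network from the paper): each
# of the k most similar training compounds casts a vote equal to its label
# (+1 active / -1 inactive) weighted by a relevance; here the relevance is a
# fixed logistic squashing of similarity instead of learned parameters.
def irv_like_score(similarities, labels, k=5):
    nbrs = sorted(zip(similarities, labels), reverse=True)[:k]
    z = sum(y / (1.0 + math.exp(-4.0 * (s - 0.5))) for s, y in nbrs)
    return 1.0 / (1.0 + math.exp(-z))          # probability-like output

# three actives and two inactives among the neighbors; the actives are,
# on balance, the more similar ones, so the score leans toward "active"
score = irv_like_score([0.9, 0.8, 0.2, 0.7, 0.1], [+1, +1, -1, -1, +1])
```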
Thompson, A.P.; Swiler, L.P.; Trott, C.R.; Foiles, S.M.; Tucker, G.J.
2015-03-15
We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
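The fitting step described above, weighted linear least squares of energies against descriptor components, can be sketched with random stand-ins for the bispectrum components (noise-free toy data, not a real training set):

```python
import numpy as np

# Weighted linear least-squares fit of configuration energies to summed
# descriptor components, the shape of the SNAP fitting problem. The
# "bispectrum" columns here are random stand-ins; energies are noise-free.
rng = np.random.default_rng(1)
B = rng.normal(size=(40, 6))               # 40 configurations x 6 descriptors
beta_true = np.array([0.5, -1.0, 2.0, 0.0, 0.3, -0.7])
E = B @ beta_true                          # synthetic "QM" training energies
w = np.full(40, 1.0)                       # per-configuration fitting weights

# minimize || sqrt(w) * (B beta - E) ||_2
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(sw[:, None] * B, sw * E, rcond=None)
```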
Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning
2016-01-01
Strong demand for accurate non-cooperative target measurement has arisen recently for assembly and capture tasks. Spherical objects are among the most common targets in these applications. However, the performance of traditional vision-based reconstruction methods is limited in practice when handling poorly-textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework for estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D positions of the laser spots on the target surface and refine the results via an optimization scheme. The experimental results show that our proposed calibration method obtains a fine calibration result, comparable to the state-of-the-art LRF-based methods, and that our calibrated system can estimate the geometric parameters with high accuracy in real time. PMID:27941705
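One sub-task above, recovering a sphere's center and radius from a handful of reconstructed laser-spot positions, has a standard algebraic least-squares form, |p|² = 2c·p + (r² − |c|²), which is linear in the unknowns (a generic sketch, not the authors' optimization scheme):

```python
import numpy as np

# Algebraic least-squares sphere fit: points p on a sphere satisfy
# |p|^2 = 2*c.p + (r^2 - |c|^2), which is linear in the center c and the
# scalar term, so one lstsq call recovers both.
def fit_sphere(pts):
    pts = np.asarray(pts, dtype=float)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    rhs = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# synthetic laser spots on a sphere of radius 2 centered at (1, -1, 3)
rng = np.random.default_rng(2)
d = rng.normal(size=(20, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
center, radius = fit_sphere(np.array([1.0, -1.0, 3.0]) + 2.0 * d)
```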
Kliment, Corrine R; Englert, Judson M; Crum, Lauren P; Oury, Tim D
2011-01-01
Aim: The purpose of this study was to develop an improved method for collagen and protein assessment of fibrotic lungs while decreasing animal use. Methods: 8-10-week-old male C57BL/6 mice were given a single intratracheal instillation of crocidolite asbestos or control titanium dioxide. Lungs were collected on day 14 and either dried as whole lung or homogenized in CHAPS buffer for hydroxyproline analysis. Insoluble and salt-soluble collagen content was also determined in lung homogenates using a modified Sirius red colorimetric 96-well plate assay. Results: The hydroxyproline assay showed significant increases in collagen content in the lungs of asbestos-treated mice. Identical results were obtained whether collagen content was determined on dried whole lung or on whole-lung homogenates. The Sirius red plate assay showed a significant increase in collagen content in lung homogenates; however, this assay grossly overestimated the total amount of collagen and underestimated the changes between control and fibrotic lungs. Conclusions: The proposed method provides accurate quantification of collagen content in whole lungs and yields additional homogenate samples for biochemical analysis from a single animal. The Sirius red colorimetric plate assay provides a complementary method for determining relative changes in lung collagen, but its values tend to overestimate the absolute values obtained by the gold-standard hydroxyproline assay and to underestimate the overall fibrotic injury. PMID:21577320
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for such a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file defining the radiation source and the exposed person, in order to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined interactively on an ordinary personal computer. The tools prepare human-body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure.
Methods to achieve accurate projection of regional and global raster databases
Usery, E. Lynn; Seong, Jeong Chang; Steinwand, Dan
2002-01-01
Modeling regional and global activities of climatic and human-induced change requires accurate geographic data from which we can develop mathematical and statistical tabulations of attributes and properties of the environment. Many of these models depend on data formatted as raster cells or matrices of pixel values. Recently, it has been demonstrated that regional and global raster datasets are subject to significant error from mathematical projection and that these errors are of such magnitude that model results may be jeopardized (Steinwand, et al., 1995; Yang, et al., 1996; Usery and Seong, 2001; Seong and Usery, 2001). There is a need to develop methods of projection that maintain the accuracy of these datasets to support regional and global analyses and modeling.
Evaluation of ride quality prediction methods for operational military helicopters
NASA Technical Reports Server (NTRS)
Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.
1984-01-01
The results of a simulator study conducted to compare and validate various ride quality prediction methods for use in assessing passenger/crew ride comfort within helicopters are presented. Included are results quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, assessment of various ride quality metrics including the NASA ride comfort model, and examination of possible criteria approaches. Results of the study indicated that crew discomfort results from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. Accurate prediction required a metric incorporating the interactive effects of both noise and vibration. The best metric for predicting crew comfort to the combined noise and vibration environment was the NASA discomfort index.
Methods for accurate analysis of galaxy clustering on non-linear scales
NASA Astrophysics Data System (ADS)
Vakili, Mohammadjavad
2017-01-01
Measurements of galaxy clustering with low-redshift galaxy surveys provide a sensitive probe of cosmology and the growth of structure. Parameter inference with galaxy clustering relies on the computation of likelihood functions, which requires estimating the covariance matrix of the observables used in our analyses. Accurate estimation of covariance matrices is therefore one of the key ingredients in precise cosmological parameter inference. This in turn requires generating a large number of independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies over a wide range of physical scales. We present a fast method, based on low-resolution N-body simulations and an approximate galaxy biasing technique, for generating mock catalogs. Using a reference catalog created from the high-resolution Big-MultiDark N-body simulation, we show that our method produces catalogs that describe galaxy clustering at percent-level accuracy down to highly non-linear scales in both real space and redshift space. In most large-scale structure analyses, modeling of galaxy bias on non-linear scales is performed assuming a halo model. Clustering of dark matter halos has been shown to depend on halo properties beyond mass, such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies assume that halo mass alone is sufficient to characterize the connection between galaxies and halos. However, modeling of galaxy bias can suffer systematic effects if the number of galaxies is correlated with other halo properties. Using the Small MultiDark-Planck high-resolution N-body simulation and the clustering measurements of the Sloan Digital Sky Survey DR7 main galaxy sample, we investigate the extent to which the dependence of galaxy bias on halo concentration can improve our modeling of galaxy clustering.
Improving the full spectrum fitting method: accurate convolution with Gauss-Hermite functions
NASA Astrophysics Data System (ADS)
Cappellari, Michele
2017-04-01
I start by providing an updated summary of the penalized pixel-fitting (pPXF) method that is used to extract the stellar and gas kinematics, as well as the stellar population of galaxies, via full spectrum fitting. I then focus on the problem of extracting the kinematics when the velocity dispersion σ is smaller than the velocity sampling ΔV, which is generally, by design, close to the instrumental dispersion σ_inst. The standard approach consists of convolving templates with a discretized kernel while fitting for its parameters. This is obviously very inaccurate when σ ≲ ΔV/2, due to undersampling. Oversampling can prevent this, but it has drawbacks. Here I present a more accurate and efficient alternative. It avoids the evaluation of the undersampled kernel and instead directly computes its well-sampled analytic Fourier transform, for use with the convolution theorem. A simple analytic transform exists when the kernel is described by the popular Gauss-Hermite parametrization (which includes the Gaussian as a special case) for the line-of-sight velocity distribution. I describe how this idea was implemented in a significant upgrade to the publicly available pPXF software. The key advantage of the new approach is that it provides accurate velocities regardless of σ. This is important, e.g., for spectroscopic surveys targeting galaxies with σ ≪ σ_inst, for galaxy redshift determinations, or for measuring line-of-sight velocities of individual stars. The proposed method could also be used to fix Gaussian convolution algorithms used in today's popular software packages.
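The core idea, convolving with the kernel's analytic Fourier transform rather than a discretized kernel, can be sketched for the pure-Gaussian case. This is a minimal illustration, not the pPXF implementation; the grid spacing of one pixel and the function name are assumptions:

```python
import numpy as np

def gauss_convolve_fft(template, sigma):
    """Convolve a sampled template with a unit-area Gaussian of width
    sigma (in pixels) using the kernel's analytic Fourier transform."""
    n = len(template)
    f = np.fft.rfftfreq(n, d=1.0)                  # frequencies in cycles per pixel
    G = np.exp(-2.0 * np.pi**2 * sigma**2 * f**2)  # analytic FT of the Gaussian kernel
    return np.fft.irfft(np.fft.rfft(template) * G, n)

# Sanity check: convolving a delta function yields a Gaussian of the
# requested width, with no discretized kernel ever evaluated.
n = 256
delta = np.zeros(n); delta[n // 2] = 1.0
out = gauss_convolve_fft(delta, sigma=3.0)
```

Because the transform is evaluated analytically, the same code path remains well defined even when sigma falls below the pixel sampling, which is where a discretized kernel becomes badly undersampled.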
An Inexpensive, Accurate, and Precise Wet-Mount Method for Enumerating Aquatic Viruses
Cunningham, Brady R.; Brum, Jennifer R.; Schwenck, Sarah M.; Sullivan, Matthew B.
2015-01-01
Viruses affect biogeochemical cycling, microbial mortality, gene flow, and metabolic functions in diverse environments through infection and lysis of microorganisms. Fundamental to quantitatively investigating these roles is the determination of viral abundance in both field and laboratory samples. One current, widely used method to accomplish this with aquatic samples is the “filter mount” method, in which samples are filtered onto costly 0.02-μm-pore-size ceramic filters for enumeration of viruses by epifluorescence microscopy. Here we describe a cost-effective (ca. 500-fold-lower materials cost) alternative virus enumeration method in which fluorescently stained samples are wet mounted directly onto slides, after optional chemical flocculation of viruses in samples with viral concentrations of <5 × 107 viruses ml−1. The concentration of viruses in the sample is then determined from the ratio of viruses to a known concentration of added microsphere beads via epifluorescence microscopy. Virus concentrations obtained by using this wet-mount method, with and without chemical flocculation, were significantly correlated with, and had precision equivalent to, those obtained by the filter mount method across concentrations ranging from 2.17 × 106 to 1.37 × 108 viruses ml−1 when tested by using cultivated viral isolates and natural samples from marine and freshwater environments. In summary, the wet-mount method is significantly less expensive than the filter mount method and is appropriate for rapid, precise, and accurate enumeration of aquatic viruses over a wide range of viral concentrations (≥1 × 106 viruses ml−1) encountered in field and laboratory samples. PMID:25710369
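The conversion from counts to concentration described above is a simple ratio against the added beads. A sketch follows (the function and variable names are illustrative, not from the paper):

```python
def virus_concentration(virus_count, bead_count, bead_conc_per_ml):
    """Estimate viruses per ml from virus and bead counts made in the
    same microscope fields, given the known bead concentration added."""
    return (virus_count / bead_count) * bead_conc_per_ml

# Example: 450 viruses and 90 beads counted, with beads added at 1e7 per ml.
estimate = virus_concentration(450, 90, 1e7)
```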
Soft Computing Methods for Disulfide Connectivity Prediction
Márquez-Chamorro, Alfonso E.; Aguilar-Ruiz, Jesús S.
2015-01-01
The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems, one of which is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists of identifying which nonadjacent cysteines are cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a prior step to 3D PSP, since it greatly reduces the protein conformational search space. The most representative soft computing approaches to the disulfide bond connectivity prediction problem from the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural networks or support vector machines) or features of the algorithms, are used to classify these methods. PMID:26523116
TIMP2•IGFBP7 biomarker panel accurately predicts acute kidney injury in high-risk surgical patients
Gunnerson, Kyle J.; Shaw, Andrew D.; Chawla, Lakhmir S.; Bihorac, Azra; Al-Khafaji, Ali; Kashani, Kianoush; Lissauer, Matthew; Shi, Jing; Walker, Michael G.; Kellum, John A.
2016-01-01
BACKGROUND Acute kidney injury (AKI) is an important complication in surgical patients. Existing biomarkers and clinical prediction models underestimate the risk for developing AKI. We recently reported data from two trials of 728 and 408 critically ill adult patients in whom urinary TIMP2•IGFBP7 (NephroCheck, Astute Medical) was used to identify patients at risk of developing AKI. Here we report a preplanned analysis of surgical patients from both trials to assess whether urinary tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor–binding protein 7 (IGFBP7) accurately identify surgical patients at risk of developing AKI. STUDY DESIGN We enrolled adult surgical patients at risk for AKI who were admitted to one of 39 intensive care units across Europe and North America. The primary end point was moderate-severe AKI (equivalent to KDIGO [Kidney Disease Improving Global Outcomes] stages 2–3) within 12 hours of enrollment. Biomarker performance was assessed using the area under the receiver operating characteristic curve, integrated discrimination improvement, and category-free net reclassification improvement. RESULTS A total of 375 patients were included in the final analysis of whom 35 (9%) developed moderate-severe AKI within 12 hours. The area under the receiver operating characteristic curve for [TIMP-2]•[IGFBP7] alone was 0.84 (95% confidence interval, 0.76–0.90; p < 0.0001). Biomarker performance was robust in sensitivity analysis across predefined subgroups (urgency and type of surgery). CONCLUSION For postoperative surgical intensive care unit patients, a single urinary TIMP2•IGFBP7 test accurately identified patients at risk for developing AKI within the ensuing 12 hours and its inclusion in clinical risk prediction models significantly enhances their performance. LEVEL OF EVIDENCE Prognostic study, level I. PMID:26816218
Bozkaya, Uğur
2013-10-21
The extended Koopmans' theorem (EKT) provides a straightforward way to compute ionization potentials (IPs) from any level of theory, in principle. However, for non-variational methods, such as Møller-Plesset perturbation and coupled-cluster theories, the EKT computations can only be performed as by-products of analytic gradients as the relaxed generalized Fock matrix (GFM) and one- and two-particle density matrices (OPDM and TPDM, respectively) are required [J. Cioslowski, P. Piskorz, and G. Liu, J. Chem. Phys. 107, 6804 (1997)]. However, for the orbital-optimized methods both the GFM and OPDM are readily available and symmetric, as opposed to the standard post Hartree-Fock (HF) methods. Further, the orbital optimized methods solve the N-representability problem, which may arise when the relaxed particle density matrices are employed for the standard methods, by disregarding the orbital Z-vector contributions for the OPDM. Moreover, for challenging chemical systems, where spin or spatial symmetry-breaking problems are observed, the abnormal orbital response contributions arising from the numerical instabilities in the HF molecular orbital Hessian can be avoided by the orbital-optimization. Hence, it appears that the orbital-optimized methods are the most natural choice for the study of the EKT. In this research, the EKT for the orbital-optimized methods, such as orbital-optimized second- and third-order Møller-Plesset perturbation [U. Bozkaya, J. Chem. Phys. 135, 224103 (2011)] and coupled-electron pair theories [OCEPA(0)] [U. Bozkaya and C. D. Sherrill, J. Chem. Phys. 139, 054104 (2013)], are presented. The presented methods are applied to IPs of the second- and third-row atoms, and closed- and open-shell molecules. Performances of the orbital-optimized methods are compared with those of the counterpart standard methods. Especially, results of the OCEPA(0) method (with the aug-cc-pVTZ basis set) for the lowest IPs of the considered atoms and closed
Conservative high-order-accurate finite-difference methods for curvilinear grids
NASA Technical Reports Server (NTRS)
Rai, Man M.; Chakravarthy, Sukumar
1993-01-01
Two fourth-order-accurate finite-difference methods for numerically solving hyperbolic systems of conservation equations on smooth curvilinear grids are presented. The first method uses the differential form of the conservation equations; the second method uses the integral form. Modifications to these schemes, which are required near boundaries to maintain overall high-order accuracy, are discussed. An analysis that demonstrates the stability of the modified schemes is also provided. Modifications to one of the schemes to make it total variation diminishing (TVD) are also discussed. Results that demonstrate the high-order accuracy of both schemes are included in the paper. In particular, a Ringleb-flow computation demonstrates the high-order accuracy and the stability of the boundary and near-boundary procedures. A second computation of supersonic flow over a cylinder demonstrates the shock-capturing capability of the TVD methodology. An important contribution of this paper is the clear demonstration that higher order accuracy leads to increased computational efficiency.
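The payoff of fourth-order accuracy can be seen with the standard fourth-order central-difference stencil (a generic illustration, not the paper's curvilinear-grid scheme): halving the step size reduces the error by roughly a factor of 2⁴ = 16.

```python
import numpy as np

def d4(f, x, h):
    """Fourth-order central-difference approximation to f'(x):
    (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# Error at step h, and at h/2; the ratio should be close to 16.
err1 = abs(d4(np.sin, 1.0, 1e-2) - np.cos(1.0))
err2 = abs(d4(np.sin, 1.0, 5e-3) - np.cos(1.0))
```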
A Method for Accurate Reconstructions of the Upper Airway Using Magnetic Resonance Images
Xiong, Huahui; Huang, Xiaoqing; Li, Yong; Li, Jianhong; Xian, Junfang; Huang, Yaqi
2015-01-01
Objective The purpose of this study is to provide an optimized method to reconstruct the structure of the upper airway (UA) based on magnetic resonance imaging (MRI) that can faithfully show the anatomical structure with a smooth surface without artificial modifications. Methods MRI was performed on the head and neck of a healthy young male participant in the axial, coronal and sagittal planes to acquire images of the UA. The level set method was used to segment the boundary of the UA. The boundaries in the three scanning planes were registered according to the positions of crossing points and anatomical characteristics using a Matlab program. Finally, the three-dimensional (3D) NURBS (Non-Uniform Rational B-Splines) surface of the UA was constructed using the registered boundaries in all three different planes. Results A smooth 3D structure of the UA was constructed, which captured the anatomical features from the three anatomical planes, particularly the location of the anterior wall of the nasopharynx. The volume and area of every cross section of the UA can be calculated from the constructed 3D model of UA. Conclusions A complete scheme of reconstruction of the UA was proposed, which can be used to measure and evaluate the 3D upper airway accurately. PMID:26066461
Keeping the edge: an accurate numerical method to solve the stream power law
NASA Astrophysics Data System (ADS)
Campforts, B.; Govers, G.
2015-12-01
Bedrock rivers set the base level of surrounding hillslopes and mediate the dynamic interplay between mountain building and denudation. The propensity of rivers to preserve pulses of increased tectonic uplift also makes it possible to reconstruct long-term uplift histories from longitudinal river profiles. An accurate reconstruction of river profile development at different timescales is therefore essential. Long-term river development is typically modeled by means of the stream power law. Under specific conditions this equation can be solved analytically, but numerical finite difference methods (FDMs) are most frequently used. Nonetheless, FDMs suffer from numerical smearing, especially at knickpoint zones, which are key to understanding transient landscapes. Here, we solve the stream power law by means of a finite volume method (FVM) which is total variation diminishing (TVD). TVD finite volume methods are designed to capture sharp discontinuities, making them very suitable for modeling river incision. In contrast to FDMs, the TVD_FVM is well capable of preserving knickpoints, as illustrated for the fast-propagating Niagara Falls knickpoint. Moreover, we show that the TVD_FVM performs much better when reconstructing uplift at timescales exceeding 100 Myr, using Eastern Australia as an example. Finally, uncertainty associated with parameter calibration is dramatically reduced when the TVD_FVM is applied. The TVD_FVM is therefore an important addition to the toolbox at the disposal of geomorphologists for understanding long-term landscape evolution.
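For notation, a first-order explicit upwind step for the stream power law, dz/dt = U − K·Aᵐ·(dz/dx)ⁿ, is sketched below. This is the kind of simple finite-difference scheme that smears knickpoints, which the TVD finite-volume scheme is designed to avoid; the parameter values and names are illustrative:

```python
import numpy as np

def stream_power_step(z, A, dx, dt, U=1e-3, K=1e-5, m=0.5, n=1.0):
    """One explicit upwind update of elevations z (node 0 is baselevel).
    Slopes are taken toward baselevel so erosion propagates upstream."""
    znew = z.copy()
    slope = np.maximum((z[1:] - z[:-1]) / dx, 0.0)
    znew[1:] += dt * (U - K * A[1:]**m * slope**n)   # uplift minus incision
    return znew
```

A quick consistency check: a profile at the analytic steady-state slope S* = (U / (K·Aᵐ))^(1/n) should be left unchanged by the update.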
Simons, Craig J; Cobb, Loren; Davidson, Bradley S
2014-04-01
In vivo measurement of lumbar spine configuration is useful for constructing quantitative biomechanical models. Positional magnetic resonance imaging (MRI) accommodates a larger range of movement in most joints than conventional MRI and does not require a supine position. However, this is achieved at the expense of image resolution and contrast. As a result, quantitative research using positional MRI has required long reconstruction times and is sensitive to incorrectly identifying the vertebral boundary due to low contrast between bone and surrounding tissue in the images. We present a semi-automated method used to obtain digitized reconstructions of lumbar vertebrae in any posture of interest. This method combines a high-resolution reference scan with a low-resolution postural scan to provide a detailed and accurate representation of the vertebrae in the posture of interest. Compared to a criterion standard, translational reconstruction error ranged from 0.7 to 1.6 mm and rotational reconstruction error ranged from 0.3 to 2.6°. Intraclass correlation coefficients indicated high interrater reliability for measurements within the imaging plane (ICC 0.97-0.99). Computational efficiency indicates that this method may be used to compile data sets large enough to account for population variance, and potentially expand the use of positional MRI as a quantitative biomechanics research tool.
Oyedepo, Gbenga A; Wilson, Angela K
2010-08-26
The correlation consistent Composite Approach, ccCA [Deyonker, N. J.; Cundari, T. R.; Wilson, A. K. J. Chem. Phys. 2006, 124, 114104], has been demonstrated to predict accurate thermochemical properties of chemical species that can be described by a single configurational reference state, and at reduced computational cost compared with ab initio methods such as CCSD(T) used in combination with large basis sets. We have developed three variants of a multireference equivalent of this successful theoretical model. The method, called the multireference correlation consistent composite approach (MR-ccCA), is designed to predict the thermochemical properties of reactive intermediates, excited-state species, and transition states to within chemical accuracy (e.g., 1 kcal/mol for enthalpies of formation) of reliable experimental values. In this study, we have demonstrated the utility of MR-ccCA: (1) in the determination of the adiabatic singlet-triplet energy separations and ground-state enthalpies of formation for a set of diradicals and unsaturated compounds, and (2) in the prediction of energetic barriers to internal rotation in ethylene and its heavier congener, disilene. Additionally, we have utilized MR-ccCA to predict the enthalpies of formation of the low-lying excited states of all the species considered. MR-ccCA is shown to give quantitative results without reliance upon empirically derived parameters, making it suitable for the study of novel chemical systems with significant nondynamical correlation effects.
NASA Astrophysics Data System (ADS)
Lee, Jeongjin; Kim, Namkug; Lee, Ho; Seo, Joon Beom; Won, Hyung Jin; Shin, Yong Moon; Shin, Yeong Gil
2007-03-01
Automatic liver segmentation is still a challenging task due to the ambiguity of the liver boundary and the complex context of nearby organs. In this paper, we propose a faster and more accurate way of segmenting the liver in CT images with an enhanced level set method. The speed image for level-set propagation is smoothly generated by increasing the number of iterations in anisotropic diffusion filtering. This prevents the level-set propagation from stopping in front of local minima, which prevail in liver CT images due to irregular intensity distributions of the interior liver region. The curvature term of the shape-modeling level-set method captures well the shape variations of the liver along the slice. Finally, a rolling-ball algorithm is applied to include enhanced vessels near the liver boundary. Our approach is tested and compared to manual segmentation results of eight CT scans with 5 mm slice distance using the average distance and volume error. The average distance error between corresponding liver boundaries is 1.58 mm and the average volume error is 2.2%. The average processing time for the segmentation of each slice is 5.2 seconds, which is much faster than conventional methods. The accurate and fast results of our method will expedite the next stage of liver volume quantification for liver transplantation.
Novel hyperspectral prediction method and apparatus
NASA Astrophysics Data System (ADS)
Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf
2009-05-01
Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data-analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed, practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science-based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-Cal™ software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction Engine™ module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing, and remote sensing.
Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei
2015-01-13
A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, it is challenging to do accurate, precise, and efficient calculations of this interaction, which are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed from the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functional). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than the LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta
A Monte Carlo Method for Making the SDSS u-Band Magnitude More Accurate
NASA Astrophysics Data System (ADS)
Gu, Jiayin; Du, Cuihua; Zuo, Wenbo; Jing, Yingjie; Wu, Zhenyu; Ma, Jun; Zhou, Xu
2016-10-01
We develop a new Monte Carlo-based method to convert the Sloan Digital Sky Survey (SDSS) u-band magnitude to the South Galactic Cap u-band Sky Survey (SCUSS) u-band magnitude. Due to the increased accuracy of SCUSS u-band measurements, the converted u-band magnitude becomes more accurate than the original SDSS u-band magnitude, in particular at the faint end. The average u-magnitude error (for both SDSS and SCUSS) of numerous main-sequence stars with 0.2 < g - r < 0.8 increases as the g-band magnitude becomes fainter. When g = 19.5, the average magnitude error of the SDSS u is 0.11. When g = 20.5, the average SDSS u error rises to 0.22. However, at this magnitude, the average magnitude error of the SCUSS u is just half that of the SDSS u. The SDSS u-band magnitudes of main-sequence stars with 0.2 < g - r < 0.8 and 18.5 < g < 20.5 are converted, so the maximum average error of the converted u-band magnitudes is 0.11. A potential application of this conversion is to derive a more accurate photometric metallicity calibration from SDSS observations, especially for more distant stars. Thus, we can explore stellar metallicity distributions in the Galactic halo or in stream stars.
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
NASA Astrophysics Data System (ADS)
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward-facing step, all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. It has the
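The node-averaging rule described in this abstract is simple enough to sketch directly. The following toy sketch (illustrative function and variable names, not the author's code) estimates a macroscopic property at a grid node as the average of molecular values weighted by the inverse of the linear distance to the node:

```python
import numpy as np

def node_average(node, positions, values, eps=1e-12):
    """Weight-averaged interpolation: each simulated molecule contributes
    to the node estimate with weight 1/d, where d is the linear distance
    between the molecule and the node (eps guards against d = 0)."""
    d = np.linalg.norm(positions - node, axis=1)
    w = 1.0 / (d + eps)                      # inverse linear distance weights
    return np.sum(w * values) / np.sum(w)

# Two molecules at equal distance on either side of the node contribute equally.
pos = np.array([[1.0, 0.0], [-1.0, 0.0]])
vel = np.array([300.0, 100.0])
print(node_average(np.zeros(2), pos, vel))   # 200.0
```

Because each weight depends only on a molecule-node distance, the rule is independent of the grid topology, which is consistent with the grid-independence claim in the abstract.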
Adaptive and accurate color edge extraction method for one-shot shape acquisition
NASA Astrophysics Data System (ADS)
Yin, Wei; Cheng, Xiaosheng; Cui, Haihua; Li, Dawei; Zhou, Lei
2016-09-01
This paper presents an approach to extract accurate color edge information using encoded patterns in hue, saturation, and intensity (HSI) color space. The method is applied to one-shot shape acquisition. Theoretical analysis shows that the hue transition between primary and secondary colors in a color edge is based on light interference and diffraction. We set up a color transition model to illustrate the hue transition on an edge and then define the segmenting position of two stripes. By setting up an adaptive HSI color space, the colors of the stripes and subpixel edges are obtained precisely, with a low-cost processing algorithm and without requiring a dark laboratory environment. Since this method does not impose any constraints on the colors of neighboring stripes, the encoding is an easy procedure. The experimental results show that the edges of dense modulation patterns can be obtained under complex ambient illumination, and the precision ensures that the three-dimensional shape of the object is obtained reliably with only one image.
Joo, Jong Wha J; Kang, Eun Yong; Org, Elin; Furlotte, Nick; Parks, Brian; Hormozdiari, Farhad; Lusis, Aldons J; Eskin, Eleazar
2016-12-01
A typical genome-wide association study tests correlation between a single phenotype and each genotype one at a time. However, single-phenotype analysis might miss unmeasured aspects of complex biological networks. Analyzing many phenotypes simultaneously may increase the power to capture these unmeasured aspects and detect more variants. Several multivariate approaches aim to detect variants related to more than one phenotype, but these approaches do not consider the effects of population structure. As a result, they may produce a significant number of false positive identifications. Here, we introduce a new methodology, referred to as GAMMA (generalized analysis of molecular variance for mixed-model analysis), which is capable of simultaneously analyzing many phenotypes and correcting for population structure. In a simulated study using data implanted with true genetic effects, GAMMA accurately identifies these true effects without producing false positives induced by population structure. In simulations with these data, GAMMA is an improvement over other methods, which either fail to detect true effects or produce many false positive identifications. We further apply our method to genetic studies of yeast and of the gut microbiome from mice and show that GAMMA identifies several variants that are likely to have true biological mechanisms.
An Accurate Method for Measuring Airplane-Borne Conformal Antenna's Radar Cross Section
NASA Astrophysics Data System (ADS)
Guo, Shuxia; Zhang, Lei; Wang, Yafeng; Hu, Chufeng
2016-09-01
The airplane-borne conformal antenna attaches tightly to the airplane skin, so conventional measurement methods cannot determine its contribution to the radar cross section (RCS). This paper uses 2D microwave imaging to isolate and extract the distribution of the reflectivity of the airplane-borne conformal antenna. It obtains the 2D spatial spectrum of the conformal antenna through the wave spectral transform between the 2D spatial image and the 2D spatial spectrum. After interpolation from the rectangular coordinate domain to the polar coordinate domain, the spectral-domain data describing the variation of the antenna's scattering with frequency and angle are obtained. The experimental results show that the measurement method proposed in this paper greatly enhances the accuracy of the airplane-borne conformal antenna's RCS measurement, essentially eliminates the influence of the airplane skin, and more accurately reveals the antenna's RCS scattering properties.
MASCG: Multi-Atlas Segmentation Constrained Graph method for accurate segmentation of hip CT images.
Chu, Chengwen; Bai, Junjie; Wu, Xiaodong; Zheng, Guoyan
2015-12-01
This paper addresses the issue of fully automatic segmentation of a hip CT image with the goal to preserve the joint structure for clinical applications in hip disease diagnosis and treatment. For this purpose, we propose a Multi-Atlas Segmentation Constrained Graph (MASCG) method. The MASCG method uses multi-atlas based mesh fusion results to initialize a bone sheetness based multi-label graph cut for an accurate hip CT segmentation which has the inherent advantage of automatic separation of the pelvic region from the bilateral proximal femoral regions. We then introduce a graph cut constrained graph search algorithm to further improve the segmentation accuracy around the bilateral hip joint regions. Taking manual segmentation as the ground truth, we evaluated the present approach on 30 hip CT images (60 hips) with a 15-fold cross validation. When the present approach was compared to manual segmentation, an average surface distance error of 0.30 mm, 0.29 mm, and 0.30 mm was found for the pelvis, the left proximal femur, and the right proximal femur, respectively. A further look at the bilateral hip joint regions demonstrated an average surface distance error of 0.16 mm, 0.21 mm and 0.20 mm for the acetabulum, the left femoral head, and the right femoral head, respectively.
Accurate computation of surface stresses and forces with immersed boundary methods
NASA Astrophysics Data System (ADS)
Goza, Andres; Liska, Sebastian; Morley, Benjamin; Colonius, Tim
2016-09-01
Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is applied as a post-processing procedure, so that the convergence of the velocity field is unaffected. We demonstrate the efficacy of the method by computing stresses and forces that converge to the physical stresses and forces for several test problems.
Methods for accurate cold-chain temperature monitoring using digital data-logger thermometers
NASA Astrophysics Data System (ADS)
Chojnacky, M. J.; Miller, W. M.; Strouse, G. F.
2013-09-01
Complete and accurate records of vaccine temperature history are vital to preserving drug potency and patient safety. However, previously published vaccine storage and handling guidelines have failed to indicate a need for continuous temperature monitoring in vaccine storage refrigerators. We evaluated the performance of seven digital data logger models as candidates for continuous temperature monitoring of refrigerated vaccines, based on the following criteria: out-of-box performance and compliance with manufacturer accuracy specifications over the range of use; measurement stability over extended, continuous use; proper setup in a vaccine storage refrigerator so that measurements reflect liquid vaccine temperatures; and practical methods for end-user validation and establishing metrological traceability. Data loggers were tested using ice melting point checks and by comparison to calibrated thermocouples to characterize performance over 0 °C to 10 °C. We also monitored logger performance in a study designed to replicate the range of vaccine storage and environmental conditions encountered at provider offices. Based on the results of this study, the Centers for Disease Control released new guidelines on proper methods for storage, handling, and temperature monitoring of vaccines for participants in its federally-funded Vaccines for Children Program. Improved temperature monitoring practices will ultimately decrease waste from damaged vaccines, improve consumer confidence, and increase effective inoculation rates.
NASA Astrophysics Data System (ADS)
Sagui, Celeste
2006-03-01
An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as this stabilizes much of the delicate 3D structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign ``partial charges'' to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate, in a physically meaningful way, the artifacts associated with the point charges used in the force fields (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules)? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles up to hexadecapoles without prohibitive extra costs. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.
Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods
Grossman, Mark W.; George, William A.
1987-01-01
A process for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. This involves dissolving, in an electrolyte solution of glacial acetic acid and H₂O, a precise amount of HgO corresponding to the pre-determined amount of Hg desired. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg₂Cl₂. This involves dissolving a precise amount of Hg₂Cl₂ in an electrolyte solution of concentrated HCl and H₂O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required pre-determined quantity of Hg.
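The first embodiment hinges on dissolving "a precise amount of HgO which corresponds to a pre-determined amount of Hg desired". That correspondence is plain stoichiometry, sketched below (standard molar masses; the function name is illustrative, not from the patent):

```python
# Stoichiometric sketch: mass of HgO that must be dissolved so that the
# plated mercury equals a desired mass m_hg.  Molar masses in g/mol.
M_HG, M_O = 200.59, 16.00

def hgo_required(m_hg):
    """Mass of HgO (grams) containing exactly m_hg grams of mercury,
    since each mole of HgO carries one mole of Hg."""
    return m_hg * (M_HG + M_O) / M_HG

print(round(hgo_required(1.000), 4))   # 1.0798 g of HgO per gram of Hg
```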
Open lung biopsy: a safe, reliable and accurate method for diagnosis in diffuse lung disease.
Shah, S S; Tsang, V; Goldstraw, P
1992-01-01
The ideal method for obtaining lung tissue for diagnosis should provide high diagnostic yield with low morbidity and mortality. We reviewed all 432 patients (mean age 55 years) who underwent an open lung biopsy at this hospital over a 10-year period. Twenty-four patients (5.5%) were immunocompromised. One hundred and twenty-five patients were on steroid therapy at the time of operation. Open lung biopsy provided a firm diagnosis in 410 cases overall (94.9%) and in 20 out of 24 patients in the immunocompromised group (83.3%). The commonest diagnosis was cryptogenic fibrosing alveolitis (173 patients). Twenty-two patients (5.1%) suffered complications following the procedure: wound infection 11 patients, pneumothorax 9 patients and haemothorax 1 patient. Thirteen patients (3.0%) died following open lung biopsy, but in only 1 patient was the death attributable to the procedure itself. We conclude that open lung biopsy is an accurate and safe method for establishing a diagnosis in diffuse lung disease with a high yield and minimal risk.
Mackie, Iain D; Dilabio, Gino A
2010-06-21
B971, PBE and PBE1 density functionals with 6-31G(d) basis sets are shown to accurately describe the binding in dispersion-bound dimers. This is achieved through the use of dispersion-correcting potentials (DCPs) in conjunction with counterpoise corrections. DCPs resemble, and are applied like, conventional effective core potentials, and can be used with most computational chemistry programs without code modification: they are implemented by simply appending them to the input files for these programs. Binding energies are predicted to within ca. 11% and monomer separations to within ca. 0.06 Å of high-level wavefunction data using B971/6-31G(d)-DCP. Similar results are obtained for PBE and PBE1 with the 6-31G(d) basis sets and DCPs. Although results found using 3-21G(d) are not as impressive, they nevertheless show promise as a means of initial study for a wide variety of dimers, including those dominated by dispersion, hydrogen bonding, and a mixture of interactions. Notable improvement is found in comparison to M06-2X/6-31G(d) data, e.g., mean absolute deviations for the S22 set of dimers of ca. 13.6 and 16.5% for B971/6-31G(d)-DCP and M06-2X, respectively. However, it should be pointed out that the latter data were obtained using a larger integration grid size, since a smaller grid results in different binding energies and geometries for simple dispersion-bound dimers such as methane and ethene.
Shu, Yu-Chen; Chern, I-Liang; Chang, Chien C.
2014-10-15
Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high-order interpolation. Such complications increase especially in three dimensions, and the solvers are usually reduced to low-order accuracy there. In this paper, we classify these exceptional points and propose two recipes to maintain the order of accuracy there, aiming at improving the previous coupling interface method [26]; the idea is also applicable to other interface solvers. The main idea is to have at least first-order approximations for second-order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second-order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by a post-processing step using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second-order accuracy of the presently proposed method in approximating the gradients of the original states for some complex interfaces which we had tested previously in two and three dimensions, and a real molecule (1D63), which is double-helix shaped and composed of hundreds of atoms.
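Recipe 1's idea, evaluating the standard centered difference for a second derivative at a neighboring interior point where the full stencil is available, can be illustrated in one dimension (a toy sketch, not the authors' solver; the shifted stencil still gives a first-order value at the original point):

```python
import numpy as np

def second_derivative_nearby(u, i, h):
    """Recipe-1-style fallback: if the centered stencil at point i is
    unavailable, use the centered second difference at the adjacent
    interior point i+1.  Since u''(x_{i+1}) = u''(x_i) + O(h), this is
    still a first-order-accurate value for u''(x_i)."""
    return (u[i] - 2.0 * u[i + 1] + u[i + 2]) / h**2

h = 1e-3
x = np.arange(0.0, 1.0, h)
u = np.sin(x)                       # u'' = -sin(x)
i = 100
approx = second_derivative_nearby(u, i, h)
exact = -np.sin(x[i])
print(abs(approx - exact) < 1e-2)   # True: first-order accurate at x_i
```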
Methods for accurate estimation of net discharge in a tidal channel
Simpson, M.R.; Bland, R.
2000-01-01
Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler discharge measurement system to calibrate the index velocity measurements. The methods used to calibrate (rate) the index velocity to the channel velocity measured with the acoustic Doppler current profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data, consisting of ultrasonic velocity meter index velocities and concurrent acoustic Doppler discharge measurements, were collected during three time periods: two sets during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
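The pipeline in this abstract — rate the index velocity to mean channel velocity, form instantaneous discharge, then low-pass filter out the tide — can be sketched on synthetic data (all numbers here are invented for illustration, not the Sacramento-San Joaquin calibration):

```python
import numpy as np

# Synthetic hourly index-velocity record with an M2-like tidal oscillation.
t = np.arange(0.0, 30.0, 1.0 / 24.0)                     # 30 days, in days
v_index = 0.10 + 0.80 * np.sin(2 * np.pi * t / 0.5175)   # m/s, 12.42 h tide

# Rating from concurrent ADCP measurements: v_mean = a * v_index + b
a, b, area = 1.05, 0.02, 500.0      # assumed calibration and channel area, m^2
q = (a * v_index + b) * area        # instantaneous discharge, m^3/s

# Low-pass filter: a simple moving average spanning ~25 h (about two tidal
# cycles) removes most of the tidal signal, leaving the net residual discharge.
width = 25
q_net = np.convolve(q, np.ones(width) / width, mode="valid")
print(round(q_net.mean(), 1))       # net (residual) discharge, m^3/s
```

A plain moving average is the crudest possible tidal filter; practitioners typically use a dedicated low-pass filter with a sharper cutoff, but the structure of the computation is the same.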
Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming
2015-12-01
Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it is still a challenge for the low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select top K features according to their importance and these features are input to a support vector machine (SVM) to conduct the prediction. Performance evaluation of the proposed method is performed using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, the overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class especially for low-similarity datasets.
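The ACC transformation of a PSSM described above has a compact form: for each lag, compute the covariance between score column j1 at position i and column j2 at position i+lag, averaged over positions. A minimal sketch (toy random PSSM; parameter names are illustrative, and the SVM-RFE/SVM stages are omitted):

```python
import numpy as np

def acc_features(pssm, max_lag=2):
    """Auto cross covariance (ACC) transform of an N x 20 PSSM: for each
    lag, the covariance between column j1 at position i and column j2 at
    position i+lag, averaged over i.  Yields a fixed-length feature vector
    (20*20*max_lag) regardless of sequence length N."""
    n, _ = pssm.shape
    mu = pssm.mean(axis=0)
    feats = []
    for lag in range(1, max_lag + 1):
        a = pssm[:-lag] - mu        # scores at positions i
        b = pssm[lag:] - mu         # scores at positions i + lag
        feats.append((a.T @ b / (n - lag)).ravel())
    return np.concatenate(feats)

rng = np.random.default_rng(1)
pssm = rng.normal(size=(60, 20))    # toy PSSM for a 60-residue sequence
f = acc_features(pssm, max_lag=2)
print(f.shape)                      # (800,)
```

The fixed output length is the point: sequences of any length map to the same feature dimension, so a standard SVM can be trained on them.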
FAMBE-pH: a fast and accurate method to compute the total solvation free energies of proteins.
Vorobjev, Yury N; Vila, Jorge A; Scheraga, Harold A
2008-09-04
A fast and accurate method to compute the total solvation free energies of proteins as a function of pH is presented. The method makes use of a combination of approaches, some of which have already appeared in the literature: (i) the Poisson equation is solved with an optimized fast adaptive multigrid boundary element (FAMBE) method; (ii) the electrostatic free energies of the ionizable sites are calculated for their neutral and charged states by using a detailed model of atomic charges; (iii) a set of optimal atomic radii is used to define a precise dielectric surface interface; (iv) a multilevel adaptive tessellation of this dielectric surface interface is achieved by using multisized boundary elements; and (v) 1:1 salt effects are included. The equilibrium proton binding/release is calculated with the Tanford-Schellman integral if the proteins contain more than approximately 20-25 ionizable groups; for a smaller number of ionizable groups, the ionization partition function is calculated directly. The FAMBE method is tested as a function of pH (FAMBE-pH) with three proteins, namely, bovine pancreatic trypsin inhibitor (BPTI), hen egg white lysozyme (HEWL), and bovine pancreatic ribonuclease A (RNaseA). The results are (a) the FAMBE-pH method reproduces the observed pKa's of the ionizable groups of these proteins within an average absolute value of 0.4 pK units and a maximum error of 1.2 pK units and (b) comparison of the calculated total pH-dependent solvation free energy for BPTI, between the exact calculation of the ionization partition function and the Tanford-Schellman integral method, shows agreement within 1.2 kcal/mol. These results indicate that calculation of total solvation free energies with the FAMBE-pH method can provide an accurate prediction of protein conformational stability at a given fixed pH and, if coupled with molecular mechanics or molecular dynamics methods, can also be used for more realistic studies of protein folding, unfolding, and
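For a protein with few ionizable groups, the "direct" route mentioned above means summing the ionization partition function over all 2^N protonation microstates. A toy sketch of that sum (independent sites with made-up free energies, not the FAMBE electrostatics, which couples the sites):

```python
import itertools
import math

def avg_protons(dG_site, pH, kT=0.593):   # kT in kcal/mol near 298 K
    """Directly sum the ionization partition function over all 2^N
    protonation microstates.  dG_site[i] is a toy protonation free energy
    (kcal/mol) of site i at pH 0; each bound proton costs ln(10)*kT*pH more
    as pH rises.  Returns the equilibrium average number of bound protons."""
    ln10 = math.log(10.0)
    z = 0.0
    n_avg = 0.0
    for state in itertools.product((0, 1), repeat=len(dG_site)):
        g = sum(s * (dg + ln10 * kT * pH) for s, dg in zip(state, dG_site))
        w = math.exp(-g / kT)             # Boltzmann weight of the microstate
        z += w
        n_avg += sum(state) * w
    return n_avg / z

# Sanity check: a single site with pKa 4.0 is half-protonated at pH 4.
kT = 0.593
print(round(avg_protons([-math.log(10.0) * kT * 4.0], 4.0), 3))   # 0.5
```

The 2^N cost is why the abstract switches to the Tanford-Schellman integral above roughly 20-25 ionizable groups.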
Discontinuous Galerkin method for predicting heat transfer in hypersonic environments
NASA Astrophysics Data System (ADS)
Ching, Eric; Lv, Yu; Ihme, Matthias
2016-11-01
This study is concerned with predicting surface heat transfer in hypersonic flows using high-order discontinuous Galerkin methods. A robust and accurate shock-capturing method designed for steady calculations, which uses smooth artificial viscosity for shock stabilization, is developed. To eliminate parametric dependence, an optimization method is formulated that results in the least amount of artificial viscosity necessary to sufficiently suppress nonlinear instabilities and achieve steady-state convergence. Performance is evaluated in two canonical hypersonic tests, namely flow over a circular half-cylinder and flow over a double cone. Results show this methodology to be significantly less sensitive than conventional finite-volume techniques to mesh topology and inviscid flux function. The method is benchmarked against state-of-the-art finite-volume solvers to quantify computational cost and accuracy. Financial support from a Stanford Graduate Fellowship and the NASA Early Career Faculty program is gratefully acknowledged.
Predictions of Thrombus Formation Using Lattice Boltzmann Method
NASA Astrophysics Data System (ADS)
Tamagawa, Masaaki; Matsuo, Sumiaki
This paper describes the prediction of an index of thrombus formation in shear blood flow by computational fluid dynamics (CFD) with the lattice Boltzmann method (LBM), applied to orifice-pipe blood flow and flow around a cylinder, which are simple models of the turbulent shear stress in high-speed rotary blood pumps and of the complicated geometry of medical fluid machines. The results for the flow field in the orifice-pipe flow using LBM are compared with experimental data and with results from a finite difference method, and it is found that the reattachment length of the backward-facing step flow is predicted as precisely as by the experiment and the finite difference method. As for thrombus formation, from the computational data of flow around a cylinder in a channel, the thrombus formation (thickness) is estimated using (1) the shear rate and the adhesion force (effective distance) to the wall independently, and (2) a shear rate function combined with the adhesion force (effective distance); it is found that the prediction method using the shear rate function with the adhesion force is the more accurate of the two.
NASA Astrophysics Data System (ADS)
Marom, Noa; Knight, Joseph; Wang, Xiaopeng; Gallandi, Lukas; Dolgounitcheva, Olga; Ren, Xinguo; Ortiz, Vincent; Rinke, Patrick; Korzdorfer, Thomas
The performance of different GW methods is assessed for a set of 24 organic acceptors. Errors are evaluated with respect to coupled cluster singles, doubles, and perturbative triples [CCSD(T)] reference data for the vertical ionization potentials (IPs) and electron affinities (EAs), extrapolated to the complete basis set limit. Additional comparisons are made to experimental data, where available. We consider fully self-consistent GW (scGW), partial self-consistency in the Green's function (scGW0), non-self-consistent G0W0 based on several mean-field starting points, and a ``beyond GW'' second-order screened exchange (SOSEX) correction to G0W0. The best performers overall are G0W0 + SOSEX and G0W0 based on an IP-tuned long-range corrected hybrid functional, with the former being more accurate for EAs and the latter for IPs. Both provide a balanced treatment of localized vs. delocalized states and valence spectra in good agreement with photoemission spectroscopy (PES) experiments.
McKellop, H; Clarke, I C; Markolf, K L; Amstutz, H C
1978-11-01
The wear of UHMW polyethylene bearing against 316 stainless steel or cobalt-chrome alloy was measured using a 12-channel wear tester especially developed for the evaluation of candidate materials for prosthetic joints. The coefficient of friction and the wear rate were determined as functions of lubricant, contact stress, and metallic surface roughness in tests lasting two to three million cycles, the equivalent of several years' use of a prosthesis. Wear was determined from the weight loss of the polyethylene specimens, corrected for the effect of fluid absorption. The friction and wear processes in blood serum differed markedly from those in saline solution or distilled water. Only serum lubrication produced wear surfaces resembling those observed on removed prostheses. The experimental method provided a very accurate, reproducible measurement of polyethylene wear. The long-term wear rates were proportional to load and sliding distance and were much lower than expected from previously published data. Although the polyethylene wear rate increased with increasing surface roughness, wear was not severe except with very coarse metal surfaces. The data obtained in these studies form a basis for the subsequent comparative evaluation of potentially superior materials for prosthetic joints.
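The fluid-absorption correction mentioned above is commonly done by soaking an identical, unloaded control specimen alongside the test specimen and adding the control's mass gain back to the measured loss. A minimal sketch with hypothetical numbers (the correction scheme is standard practice; the specific values are invented):

```python
def corrected_wear(mass_before, mass_after, soak_gain):
    """Wear measured as polyethylene weight loss, corrected for the weight
    the polymer gains by absorbing lubricant.  soak_gain is the mass gained
    by an identical, unloaded control specimen soaked for the same time."""
    return (mass_before - mass_after) + soak_gain

# Hypothetical numbers (mg): the specimen lost 12.0 mg net, but an unloaded
# control gained 3.5 mg of fluid, so the true material loss is 15.5 mg.
print(corrected_wear(5000.0, 4988.0, 3.5))   # 15.5
```

Without the correction, fluid uptake masks part of the material loss and the wear rate is underestimated.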
Method for accurate sizing of pulmonary vessels from 3D medical images
NASA Astrophysics Data System (ADS)
O'Dell, Walter G.
2015-03-01
Detailed characterization of vascular anatomy, in particular the quantification of changes in the distribution of vessel sizes and of vascular pruning, is essential for the diagnosis and management of a variety of pulmonary vascular diseases and for the care of cancer survivors who have received radiation to the thorax. Clinical estimates of vessel radii are typically based on setting a pixel intensity threshold and counting how many "On" pixels are present across the vessel cross-section. A more objective approach introduced recently involves fitting the image with a library of spherical Gaussian filters and utilizing the size of the best-matching filter as the estimate of vessel diameter. However, both these approaches have significant accuracy limitations, including the mismatch between a Gaussian intensity distribution and that of real vessels. Here we introduce and demonstrate a novel approach for accurate vessel sizing using 3D appearance models of a tubular structure along a curvilinear trajectory in 3D space. The vessel branch trajectories are represented with cubic Hermite splines and the tubular branch surfaces represented as a finite element surface mesh. An iterative parameter adjustment scheme is employed to optimally match the appearance models to a patient's chest X-ray computed tomography (CT) scan to generate estimates for branch radii and trajectories with subpixel resolution. The method is demonstrated on pulmonary vasculature in an adult human CT scan, and on 2D simulated test cases.
NASA Astrophysics Data System (ADS)
Barenbrug, Theo M. A. O. M.; Peters, E. A. J. F. (Frank); Schieber, Jay D.
2002-11-01
In Brownian Dynamics simulations, the diffusive motion of the particles is simulated by adding random displacements, proportional to the square root of the chosen time step. When computing average quantities, these Brownian contributions usually average out, and the overall simulation error becomes proportional to the time step. A special situation arises if the particles undergo hard-body interactions that instantaneously change their properties, as in absorption or association processes, chemical reactions, etc. The common "naïve simulation method" accounts for these interactions by checking for hard-body overlaps after every time step. Due to the simplification of the diffusive motion, a substantial part of the actual hard-body interactions is not detected by this method, resulting in an overall simulation error proportional to the square root of the time step. In this paper we take the hard-body interactions during the time step interval into account, using the relative positions of the particles at the beginning and at the end of the time step, as provided by the naïve method, and the analytical solution for the diffusion of a point particle around an absorbing sphere. Öttinger used a similar approach for the one-dimensional case [Stochastic Processes in Polymeric Fluids (Springer, Berlin, 1996), p. 270]. We applied the "corrected simulation method" to the case of a simple, second-order chemical reaction. The results agree with recent theoretical predictions [K. Hyojoon and Joe S. Kook, Phys. Rev. E 61, 3426 (2000)]. The obtained simulation error is proportional to the time step, instead of its square root. The new method needs substantially less simulation time to obtain the same accuracy. Finally, we briefly discuss a straightforward way to extend the method for simulations of systems with additional (deterministic) forces.
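The flavor of the correction can be shown in the one-dimensional case the abstract credits to Öttinger: even when a step starts and ends on the safe side of an absorbing wall, the Brownian bridge between the two endpoints may have touched the wall, with the known probability exp(-x0*x1/(D*dt)). A toy sketch (flat wall in 1D, not the absorbing-sphere solution used in the paper):

```python
import math
import random

def absorbed_during_step(x0, x1, D, dt, rng=random):
    """Corrected absorption check at a wall at x = 0 for one BD step.
    Even if both endpoints x0, x1 > 0, the Brownian bridge between them
    crossed the wall with probability exp(-x0*x1/(D*dt)); the naive
    end-of-step overlap check misses all such events."""
    if x0 <= 0.0 or x1 <= 0.0:
        return True                 # the naive check already catches this
    return rng.random() < math.exp(-x0 * x1 / (D * dt))

# A step that starts and ends very near the wall almost surely touched it,
# yet the naive method would count zero absorptions here.
random.seed(0)
D, dt = 1.0, 1e-2
hits = sum(absorbed_during_step(0.01, 0.01, D, dt) for _ in range(10_000))
print(hits / 10_000)                # close to exp(-1e-4 / 1e-2) ~ 0.99
```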
2013-08-01
[Report fragment] Figure 20: ABQ experiment showing five volunteers located 1.0 m from the source (upper-left panel). The fragment cites a study (Royster et al., 1996) in which users self-fit hearing protectors (ANSI S12.6-2008 method B: user fit) with no experimenter instruction, and refers to attenuation values provided by the experimenters and simulator fits for the intact and modified muffs; Figure 22 (upper panel) shows the simulator prediction.
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.
Modern Prediction Methods for Turbomachine Performance
1976-01-01
the Consultant and Exchange Programme. Propulsion system development costs may be significantly reduced by improvement of methods for prediction of...of Science and Technology, Ames, Iowa 50011, United States of America. Aircraft propulsion system development time and cost could be significantly reduced...information is required about things like start-up performance, windmilling and altitude light-up capability, rapid thrust changes, etc. The
Fast, accurate and easy-to-pipeline methods for amplicon sequence processing
NASA Astrophysics Data System (ADS)
Antonielli, Livio; Sessitsch, Angela
2016-04-01
Next generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While metagenomic studies benefit from the continuously increasing throughput of the Illumina (Solexa) technology, the spread of third-generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole-genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short-read correction. Besides (meta)genomic analysis, next-gen amplicon sequencing remains fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and the ITS (Internal Transcribed Spacer) is a well-established, widespread method for a multitude of purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the best-known and most cited. The entire process, from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction, as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and thereby apply the pipeline to targets other than the 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.
NASA Astrophysics Data System (ADS)
Westendorp, Hendrik; Nuver, Tonnis T.; Moerland, Marinus A.; Minken, André W.
2015-10-01
The geometry of a permanent prostate implant varies over time. Seeds can migrate, and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry, which relates to treatment quality. We present a method that tracks all individual seeds over time, allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure that combines both imaging modalities. The procedure was subdivided into four automatic linking steps. (I) The Hungarian Algorithm was applied to initially link seeds in CBCT and the corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and line fits: non-optimal links were removed. (III) The positions of unlinked seeds were reviewed and were linked to incomplete strands if within curvature and distance thresholds. (IV) Finally, seeds close to strands were linked, even if the curvature threshold was violated. After linking the seeds, an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, like Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure for faster and better evaluation of the quality of the permanent prostate implant.
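Step (I) above is a linear assignment problem. A minimal sketch of that step, with hypothetical seed coordinates and SciPy's implementation of the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
seeds_trus = rng.uniform(0, 50, size=(8, 3))        # hypothetical positions, mm
perm = rng.permutation(8)
# CBCT finds the same seeds in a different order, with small localization noise.
seeds_cbct = seeds_trus[perm] + rng.normal(0, 0.3, size=(8, 3))

# Cost = pairwise Euclidean distance; the Hungarian algorithm returns the
# one-to-one linking with minimum total distance.
cost = np.linalg.norm(seeds_cbct[:, None, :] - seeds_trus[None, :, :], axis=2)
row, col = linear_sum_assignment(cost)
```

Here `col[i]` is the TRUS seed linked to CBCT seed `i`; with noise far smaller than the seed spacing, the linking recovers the true permutation.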
Baldassi, Carlo; Zamparo, Marco; Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea
2014-01-01
In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to that achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.
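The core idea, replacing discrete amino-acid variables with Gaussian ones so that direct couplings become entries of an inverse covariance matrix, can be sketched on toy continuous data (the "protein" below is hypothetical, with one planted direct coupling):

```python
import numpy as np

rng = np.random.default_rng(4)
L = 6                                    # toy "protein length"
# Hypothetical precision (inverse covariance) matrix with one strong
# direct coupling between positions 2 and 5.
prec = np.eye(L)
prec[2, 5] = prec[5, 2] = 0.6
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(L), cov, size=5000)

# Gaussian DCA idea: direct couplings are read off the inverse of the
# empirical covariance matrix, not from raw correlations.
J = np.linalg.inv(np.cov(X.T))
i, j = np.unravel_index(np.argmax(np.abs(np.triu(J, 1))), J.shape)
```

The strongest off-diagonal entry of the inferred precision matrix recovers the planted pair, whereas raw correlations would also show indirect (transitive) effects.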
Assessment of a high-order accurate Discontinuous Galerkin method for turbomachinery flows
NASA Astrophysics Data System (ADS)
Bassi, F.; Botti, L.; Colombo, A.; Crivellini, A.; Franchina, N.; Ghidoni, A.
2016-04-01
In this work the capabilities of a high-order Discontinuous Galerkin (DG) method applied to the computation of turbomachinery flows are investigated. The Reynolds averaged Navier-Stokes equations coupled with the two-equation k-ω turbulence model are solved to predict the flow features, either in a fixed or rotating reference frame, to simulate the fluid flow around bodies that operate under an imposed steady rotation. To ensure, by design, the positivity of all thermodynamic variables at a discrete level, a set of primitive variables based on pressure and temperature logarithms is used. The flow fields through the MTU T106A low-pressure turbine cascade and the NASA Rotor 37 axial compressor have been computed up to fourth-order accuracy and compared to the experimental and numerical data available in the literature.
NASA Astrophysics Data System (ADS)
Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning
2016-02-01
Topography and biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
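The POD ingredient of PODMM can be illustrated in a few lines: a reduced basis is extracted from training snapshots via the SVD, and new fields are represented in that basis. This sketch shows only the basis construction and reconstruction step, on synthetic 1D fields, not the coarse-to-fine mapping or the error estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
n_fine, n_snap, r = 200, 30, 5

# Hypothetical training snapshots: smooth fields spanned by a few sine modes.
grid = np.linspace(0, 1, n_fine)
modes = np.array([np.sin((k + 1)*np.pi*grid) for k in range(r)]).T   # (n_fine, r)
snapshots = modes @ rng.normal(size=(r, n_snap))                     # (n_fine, n_snap)

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]

# Represent a new field in the reduced basis and reconstruct it.
x_new = modes @ rng.normal(size=r)
x_rec = basis @ (basis.T @ x_new)
err = np.linalg.norm(x_rec - x_new) / np.linalg.norm(x_new)
```

Because the new field lies in the span of the training snapshots, the r-dimensional reconstruction is essentially exact; in PODMM the fine-resolution coefficients are instead predicted from coarse-resolution solutions.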
NASA Astrophysics Data System (ADS)
Gu, F.; Wang, T.; Alwodai, A.; Tian, X.; Shao, Y.; Ball, A. D.
2015-01-01
Motor current signature analysis (MCSA) has been an effective way of monitoring electrical machines for many years. However, inadequate accuracy in diagnosing incipient broken rotor bars (BRB) has motivated many studies into improving this method. In this paper a modulation signal bispectrum (MSB) analysis is applied to motor currents from different broken bar cases, and a new MSB-based sideband estimator (MSB-SE) and sideband amplitude estimator are introduced for obtaining the amplitude at (1 ± 2s)fs (s is the rotor slip and fs is the fundamental supply frequency) with high accuracy. As the MSB-SE has good noise-suppression performance, the new estimator produces more accurate results in predicting the number of BRB, compared with conventional power spectrum analysis. Moreover, the paper also develops an improved model for motor current signals under rotor fault conditions and an effective method to decouple the BRB current component from that of the speed oscillations associated with BRB. These provide theoretical support for the new estimators and clarify the issues in using conventional bispectrum analysis.
Arcon, Juan Pablo; Defelipe, Lucas A; Modenutti, Carlos P; López, Elias D; Alvarez-Garcia, Daniel; Barril, Xavier; Turjanski, Adrián G; Martí, Marcelo A
2017-03-31
One of the most important biological processes at the molecular level is the formation of protein-ligand complexes. Therefore, determining their structure and underlying key interactions is of paramount relevance and has direct applications in drug development. Because of its low cost relative to its experimental sibling, molecular dynamics (MD) simulations in the presence of different solvent probes mimicking specific types of interactions have been increasingly used to analyze protein binding sites and reveal protein-ligand interaction hot spots. However, a systematic comparison of different probes and their real predictive power from a quantitative and thermodynamic point of view is still missing. In the present work, we have performed MD simulations of 18 different proteins in pure water as well as water mixtures of ethanol, acetamide, acetonitrile and methylammonium acetate, leading to a total of 5.4 μs simulation time. For each system, we determined the corresponding solvent sites, defined as space regions adjacent to the protein surface where the probability of finding a probe atom is higher than that in the bulk solvent. Finally, we compared the identified solvent sites with 121 different protein-ligand complexes and used them to perform molecular docking and ligand binding free energy estimates. Our results show that combining solely water and ethanol sites allows sampling over 70% of all possible protein-ligand interactions, especially those that coincide with ligand-based pharmacophoric points. Most important, we also show how the solvent sites can be used to significantly improve ligand docking in terms of both accuracy and precision, and that accurate predictions of ligand binding free energies, along with relative ranking of ligand affinity, can be performed.
Karwath, Andreas; Clare, Amanda; Dehaspe, Luc
2000-01-01
The analysis of genomics data needs to become as automated as its generation. Here we present a novel data-mining approach to predicting protein functional class from sequence. This method is based on a combination of inductive logic programming clustering and rule learning. We demonstrate the effectiveness of this approach on the M. tuberculosis and E. coli genomes, and identify biologically interpretable rules which predict protein functional class from information only available from the sequence. These rules predict 65% of the ORFs with no assigned function in M. tuberculosis and 24% of those in E. coli, with an estimated accuracy of 60–80% (depending on the level of functional assignment). The rules are founded on a combination of detection of remote homology, convergent evolution and horizontal gene transfer. We identify rules that predict protein functional class even in the absence of detectable sequence or structural homology. These rules give insight into the evolutionary history of M. tuberculosis and E. coli. PMID:11119305
A numerical method for predicting hypersonic flowfields
NASA Technical Reports Server (NTRS)
Maccormack, Robert W.; Candler, Graham V.
1989-01-01
The flow about a body traveling at hypersonic speed is energetic enough to cause the atmospheric gases to chemically react and reach states in thermal nonequilibrium. The prediction of hypersonic flowfields requires a numerical method capable of solving the conservation equations of fluid flow, the chemical rate equations for species formation and dissociation, and the relations for energy transfer between translational and vibrational temperature states. Because the number of equations to be solved is large, the numerical method should also be as efficient as possible. The proposed paper presents a fully implicit method that fully couples the solution of the fluid flow equations with the gas physics and chemistry relations. The method flux splits the inviscid flow terms, central differences the viscous terms, preserves element conservation in the strong chemistry source terms, and solves the resulting block matrix equation by Gauss-Seidel line relaxation.
Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.
2008-07-01
Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
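As an illustration of the classification setup, a support vector machine trained on fixed-length peptide descriptors; the features and labels below are synthetic stand-ins for the 35 amino-acid content, charge, hydrophilicity, and polarity properties, not the AMT data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 400
# Hypothetical 12-feature descriptors (cf. the reduced 12-variable set);
# class 1 plays the role of "proteotypic".
X = rng.normal(size=(n, 12))
y = (X[:, 0] + 0.5*X[:, 1] - 0.5*X[:, 2] > 0).astype(int)  # synthetic labels

clf = SVC(kernel="linear", C=1.0).fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
```

Training on one set and scoring on a held-out set mirrors the within-species validation; the cross-species validation in the paper swaps in a database from a different organism at the scoring step.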
Du, Qi-Shi; Wang, Cheng-Hua; Wang, Yu-Ting; Huang, Ri-Bo
2010-04-01
The electrostatic potential (ESP) is an important property of interactions within and between macromolecules, including those of importance in the life sciences. Semiempirical quantum chemical methods and classical Coulomb calculations fail to provide even qualitative ESP for many of these biomolecules. A new empirical ESP calculation method, namely, EM-ESP, is developed in this study, in which the traditional approach of point atomic charges and the classical Coulomb equation is discarded. In its place, the EM-ESP generates a three-dimensional electrostatic potential V_EM(r) in molecular space that is the sum of contributions from all component atoms. The contribution of an atom k is formulated as a Gaussian function g(r_k; alpha_k, beta_k) = alpha_k/r_k^beta_k with two parameters (alpha_k and beta_k). The benchmark for the parameter optimization is the ESP obtained by using higher-level quantum chemical approaches (e.g., CCSD/TZVP). A set of atom-based parameters is optimized in a training set of common organic molecules. Calculated examples demonstrate that the EM-ESP approach is a vast improvement over the Coulombic approach in producing molecular ESP contours that are comparable to the results obtained with higher-level quantum chemical methods. The atom-based parameters are shown to be transferable between closely related aromatic molecules. The atom-based ESP formulization and parametrization strategy can be extended to biological macromolecules, such as proteins, DNA, and RNA molecules. Since ESP is frequently used to rationalize and predict intermolecular interactions, we expect that the EM-ESP method will have important applications for studies of protein-ligand and protein-protein interactions in numerous areas of chemistry, molecular biology, and other life sciences.
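The EM-ESP form is simple to state in code: the potential at a point is a sum of per-atom terms alpha_k/r_k^beta_k. The coordinates and parameter values below are hypothetical placeholders, not the fitted values from the paper's training set:

```python
import numpy as np

def em_esp(point, atom_coords, alphas, betas):
    """EM-ESP potential at `point`: sum over atoms k of alpha_k / r_k**beta_k,
    where r_k is the distance from `point` to atom k."""
    r = np.linalg.norm(atom_coords - point, axis=1)
    return np.sum(alphas / r**betas)

atoms = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])   # hypothetical geometry
alphas = np.array([0.8, -0.4])                          # hypothetical parameters
betas = np.array([1.1, 1.3])
v = em_esp(np.array([0.5, 0.5, 0.0]), atoms, alphas, betas)
```

Setting beta_k = 1 and alpha_k equal to the atomic charges recovers the classical Coulomb potential (in the same units), which makes explicit how the two-parameter form generalizes the point-charge model.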
Zhang, Jin-Feng; Chen, Yao; Lin, Guo-Shi; Zhang, Jian-Dong; Tang, Wen-Long; Huang, Jian-Huang; Chen, Jin-Shou; Wang, Xing-Fu; Lin, Zhi-Xiong
2016-06-01
Interferon-induced protein with tetratricopeptide repeat 1 (IFIT1) plays a key role in growth suppression and apoptosis promotion in cancer cells. Interferon was reported to induce the expression of IFIT1 and inhibit the expression of O-6-methylguanine-DNA methyltransferase (MGMT). This study aimed to investigate the expression of IFIT1, the correlation between IFIT1 and MGMT, and their impact on the clinical outcome in newly diagnosed glioblastoma. The expression of IFIT1 and MGMT and their correlation were investigated in the tumor tissues from 70 patients with newly diagnosed glioblastoma. The effects on progression-free survival and overall survival were evaluated. Of 70 cases, 57 (81.4%) tissue samples showed high expression of IFIT1 by immunostaining. The χ² test indicated that the expression of IFIT1 and MGMT was negatively correlated (r = -0.288, P = .016). Univariate and multivariate analyses confirmed high IFIT1 expression as a favorable prognostic indicator for progression-free survival (P = .005 and .017) and overall survival (P = .001 and .001), respectively. Patients with 2 favorable factors (high IFIT1 and low MGMT) had an improved prognosis as compared with others. The results demonstrated significantly increased expression of IFIT1 in newly diagnosed glioblastoma tissue. The negative correlation between IFIT1 and MGMT expression may be triggered by interferon. High IFIT1 can be a predictive biomarker of favorable clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma.
A Versatile Nonlinear Method for Predictive Modeling
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Yao, Weigang
2015-01-01
As computational fluid dynamics techniques and tools become widely accepted for real-world practice today, it is intriguing to ask: in what areas can it be utilized to its full potential in the future? Some promising areas include design optimization and exploration of fluid dynamics phenomena (the concept of a numerical wind tunnel), both of which share the common feature that some parameters are varied repeatedly and the computation can be costly. We are especially interested in the need for an accurate and efficient approach for handling these applications: (1) capturing complex nonlinear dynamics inherent in a system under consideration and (2) versatility (robustness) to encompass a range of parametric variations. In our previous paper, we proposed to use first-order Taylor expansions collected at numerous sampling points along a trajectory and assembled together via nonlinear weighting functions. The validity and performance of this approach was demonstrated for a number of problems with vastly different input functions. In this study, we are especially interested in enhancing the method's accuracy; we extend it to include the second-order Taylor expansion, which however requires a complicated evaluation of Hessian matrices for a system of equations, as in fluid dynamics. We propose a method to avoid these Hessian matrices while maintaining the accuracy. Results based on the method are presented to confirm its validity.
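The first-order version of the approach, local Taylor expansions assembled by nonlinear weights, can be sketched for a scalar function; Gaussian weights are one simple choice of weighting, and may differ from the paper's:

```python
import numpy as np

f, df = np.sin, np.cos                   # toy system with a known derivative
xs = np.linspace(0.0, np.pi, 8)          # sampling points along the "trajectory"

def surrogate(x, width=0.25):
    # First-order Taylor expansions about each sampling point, assembled
    # with normalized Gaussian weights (a simple nonlinear weighting).
    w = np.exp(-((x - xs)/width)**2)
    w /= w.sum()
    return np.sum(w * (f(xs) + df(xs)*(x - xs)))

err = max(abs(surrogate(x) - f(x)) for x in np.linspace(0.0, np.pi, 50))
```

Between samples the blended first-order expansions track the true function to second-order accuracy in the sample spacing; the second-order extension in the paper targets exactly this residual, at the cost of Hessian information.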
A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion
NASA Astrophysics Data System (ADS)
Shavalikul, Akamol
accurate flow characteristics in the NGV domain and the rotor domain with less computational time and computer memory requirements. In contrast, the time accurate flow simulation can predict all unsteady flow characteristics occurring in the turbine stage, but with high computational resource requirements. (Abstract shortened by UMI.)
Airframe Noise Prediction Using the Sngr Method
NASA Astrophysics Data System (ADS)
Chen, Rongqian; Wu, Yizhao; Xia, Jian
In this paper, the Stochastic Noise Generation and Radiation method (SNGR) is used to predict airframe noise. The SNGR method combines a stochastic model with Computational Fluid Dynamics (CFD), and it can give acceptable noise results while the computation cost is relatively low. In the method, the time-averaged mean flow field is first obtained by solving the Reynolds Averaged Navier-Stokes equations (RANS), and a stochastic velocity field is generated based on the obtained information. Then the turbulent field is used to generate the source for the Acoustic Perturbation Equations (APEs) that simulate the noise propagation. For numerical methods, the time-averaged RANS equations are solved by the finite volume method with the k-ε turbulence model; the APEs are solved by the finite difference method with the Dispersion-Relation-Preserving (DRP) scheme in space and an explicit optimized 5-stage Runge-Kutta scheme for time stepping. In order to test the APE solver, the propagation of a Gaussian pulse in a uniform mean flow is first simulated and compared with the analytical solution. Then, using the method, the trailing edge noise of the NACA0012 airfoil is calculated. The results are compared with reference data, and good agreement is demonstrated.
Computational predictive methods for fracture and fatigue
NASA Technical Reports Server (NTRS)
Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.
1994-01-01
The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damages developed during service remain below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.
Samudrala, Ram; Heffron, Fred; McDermott, Jason E.
2009-04-24
The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
Hemmati, Roholla; Gharipour, Mojgan; Khosravi, Alireza; Jozan, Mahnaz
2013-01-01
Background. The purpose of this study was to answer the question whether a single test for microalbuminuria yields a reliable conclusion, leading to cost savings. Methods. This cross-sectional study included a total of 126 consecutive persons. Microalbuminuria was assessed by collection of two fasting random urine specimens, one on arrival at the clinic and one a week later in the morning. Results. Overall, 17 of the 126 participants had microalbuminuria; among them, 12 subjects were also diagnosed as having microalbuminuria on the single assessment, giving a sensitivity of 70.6%, a specificity of 100%, a PPV of 100%, an NPV of 95.6%, and an accuracy of 96.0%. The measured sensitivity, specificity, PPV, NPV, and accuracy in hypertensive patients were 73.3%, 100%, 100%, 94.8%, and 95.5%, respectively; in the nonhypertensive group these rates were 50.0%, 100%, 100%, 97.3%, and 97.4%, respectively. According to the ROC curve analysis, a single measurement of UACR had high value for discriminating defective from normal renal function (c = 0.989). Urinary albumin concentration in a single measurement also had high discriminative value for the diagnosis of a damaged kidney (c = 0.995). Conclusion. Single testing of both UACR and urine albumin level, rather than frequent testing, yields high diagnostic sensitivity, specificity, and accuracy, as well as high predictive values, in the total population and in the hypertensive subgroup.
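The reported figures can be reproduced from a 2x2 confusion table. The counts below are reconstructed from the abstract (126 subjects; 17 positive by repeat testing, of whom the single test detected 12; no false positives):

```python
def diagnostics(tp, fp, tn, fn):
    """Standard diagnostic-test summary statistics from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# 12 true positives, 5 missed cases, 109 true negatives, 0 false positives.
d = diagnostics(tp=12, fp=0, tn=109, fn=5)
```

These counts give sensitivity 12/17 = 70.6%, NPV 109/114 = 95.6%, and accuracy 121/126 = 96.0%, matching the values quoted in the abstract.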
Accurate treatment of total photoabsorption cross sections by an ab initio time-dependent method
NASA Astrophysics Data System (ADS)
Daud, Mohammad Noh
2014-09-01
A detailed discussion of parallel and perpendicular transitions required for the photoabsorption of a molecule is presented within a time-dependent view. Total photoabsorption cross sections for the first two ultraviolet absorption bands of the N2O molecule corresponding to transitions from the X1 A' state to the 21 A' and 11 A'' states are calculated to test the reliability of the method. By fully considering the property of the electric field polarization vector of the incident light, the method treats the coupling of angular momentum and the parity differently for two kinds of transitions depending on the direction of the vector whether it is: (a) situated parallel in a molecular plane for an electronic transition between states with the same symmetry; (b) situated perpendicular to a molecular plane for an electronic transition between states with different symmetry. Through this, for those transitions, we are able to offer an insightful picture of the dynamics involved and to characterize some new aspects in the photoabsorption process of N2O. Our calculations predicted that the parallel transition to the 21 A' state is the major dissociation pathway which is in qualitative agreement with the experimental observations. Most importantly, a significant improvement in the absolute value of the total cross section over previous theoretical results [R. Schinke, J. Chem. Phys. 134, 064313 (2011), M.N. Daud, G.G. Balint-Kurti, A. Brown, J. Chem. Phys. 122, 054305 (2005), S. Nanbu, M.S. Johnson, J. Phys. Chem. A 108, 8905 (2004)] was obtained.
Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series
NASA Astrophysics Data System (ADS)
Malkin, Z.; Tissen, V. M.
2012-12-01
A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made between real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method and simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results show that the proposed method provides better accuracy at different prediction lengths.
Accurate measurement method of Fabry-Perot cavity parameters via optical transfer function
Bondu, Francois; Debieu, Olivier
2007-05-10
It is shown how the transfer function from frequency noise to a Pound-Drever-Hall signal for a Fabry-Perot cavity can be used to accurately measure cavity length, cavity linewidth, mirror curvature, misalignments, laser beam shape mismatching with resonant beam shape, and cavity impedance mismatching with respect to vacuum.
A time-accurate implicit method for chemical non-equilibrium flows at all speeds
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun
1992-01-01
A new time-accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from M ≤ 10⁻¹⁰ to supersonic speeds.
McCoy, Rajiv C.; Garud, Nandita R.; Kelley, Joanna L.; Boggs, Carol L.; Petrov, Dmitri A.
2015-01-01
The analysis of molecular data from natural populations has allowed researchers to answer diverse ecological questions that were previously intractable. In particular, ecologists are often interested in the demographic history of populations, information that is rarely available from historical records. Methods have been developed to infer demographic parameters from genomic data, but it is not well understood how inferred parameters compare to true population history or depend on aspects of experimental design. Here we present and evaluate a method of SNP discovery using RNA-sequencing and demographic inference using the program δaδi, which uses a diffusion approximation to the allele frequency spectrum to fit demographic models. We test these methods in a population of the checkerspot butterfly Euphydryas gillettii. This population was intentionally introduced to Gothic, Colorado in 1977 and has since experienced extreme fluctuations including bottlenecks of fewer than 25 adults, as documented by nearly annual field surveys. Using RNA-sequencing of eight individuals from Colorado and eight individuals from a native population in Wyoming, we generate the first genomic resources for this system. While demographic inference is commonly used to examine ancient demography, our study demonstrates that our inexpensive, all-in-one approach to marker discovery and genotyping provides sufficient data to accurately infer the timing of a recent bottleneck. This demographic scenario is relevant for many species of conservation concern, few of which have sequenced genomes. Our results are remarkably insensitive to sample size or number of genomic markers, which has important implications for applying this method to other non-model systems. PMID:24237665
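The program δaδi fits demographic models to the allele frequency spectrum (AFS) computed from SNP data. The tabulation itself is simple, as this toy sketch shows; the function name and data are hypothetical, not the authors' pipeline, and δaδi's own machinery handles folding, projection, and model fitting.

```python
# Toy construction of an unfolded allele frequency spectrum:
# spectrum[i] = number of SNPs at which the derived allele is observed
# on exactly i of the n sampled chromosomes.

def afs(derived_counts, n_chromosomes):
    spectrum = [0] * (n_chromosomes + 1)
    for c in derived_counts:
        spectrum[c] += 1
    return spectrum

# Hypothetical derived-allele counts at 6 SNPs over 4 sampled chromosomes
print(afs([1, 1, 2, 1, 3, 2], n_chromosomes=4))
```

A bottleneck skews this spectrum away from its equilibrium shape, which is the signal the diffusion-approximation fit exploits.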
Drug permeability prediction using PMF method.
Meng, Fancui; Xu, Weiren
2013-03-01
Drug permeability determines the oral availability of drugs via cellular membranes. Poor permeability makes a drug unsuitable for further development. Permeability may be estimated from the free energy change that the drug must overcome in crossing the membrane. In this paper, drug permeability was simulated using the molecular dynamics method, and the potential energy profile was calculated with the potential of mean force (PMF) method. The membrane was modeled as a DPPC bilayer, and three drugs with different permeabilities were tested. The PMF studies on these three drugs show that doxorubicin (low permeability) must overcome a higher free energy barrier in passing from water to the DPPC bilayer center, while ibuprofen (high permeability) faces a lower energy barrier. Our calculations indicate that the simulation model we built is suitable for predicting drug permeability.
Barone, Veronica; Hod, Oded; Peralta, Juan E; Scuseria, Gustavo E
2011-04-19
by HSE and TPSSh provide excellent agreement with existing photoluminescence and Rayleigh scattering spectroscopy experiments and Green's function-based methods for carbon nanotubes. This same methodology was utilized to predict the properties of other carbon nanomaterials, such as graphene nanoribbons. Graphene nanoribbons may be viewed as unrolled (and passivated) carbon nanotubes. However, the emergence of edges has a crucial impact on the electronic properties of graphene nanoribbons. Our calculations have shown that armchair nanoribbons are predicted to be nonmagnetic semiconductors with a band gap that oscillates with their width. In contrast, zigzag graphene nanoribbons are semiconducting with an electronic ground state that exhibits spin polarization localized at the edges of the carbon nanoribbon. The spatial symmetry of these magnetic states in graphene nanoribbons can give rise to a half-metallic behavior when a transverse external electric field is applied. Our work shows that these properties are enhanced upon different types of oxidation of the edges. We also discuss the properties of rectangular graphene flakes, which present spin polarization localized at the zigzag edges.
ERIC Educational Resources Information Center
Salley, Charles D.
Accurate enrollment forecasts are a prerequisite for reliable budget projections. This is because tuition payments make up a significant portion of a university's revenue, and anticipated revenue is the immediate constraint on current operating expenditures. Accurate forecasts are even more critical to revenue projections when a university's…
NASA Astrophysics Data System (ADS)
Cai, Can-Ying; Zeng, Song-Jun; Liu, Hong-Rong; Yang, Qi-Bin
2008-05-01
A completely different formulation for the simulation of high order Laue zone (HOLZ) diffractions is derived; we refer to it as the Taylor series (TS) method. To check the validity and accuracy of the TS method, we take a polyvinylidene fluoride (PVDF) crystal as an example and calculate the exit wavefunction by both the conventional multi-slice (CMS) method and the TS method. The calculated results show that the TS method is much more accurate than the CMS method and is independent of the slice thickness. Moreover, the pure first order Laue zone wavefunction obtained by the TS method reflects the major potential distribution of the first reciprocal plane.
An empirical method for prediction of cheese yield.
Melilli, C; Lynch, J M; Carpino, S; Barbano, D M; Licitra, G; Cappa, A
2002-10-01
Theoretical cheese yield can be estimated from the milk fat and casein or protein content of milk using classical formulae, such as the Van Slyke formula. These equations are reliable predictors of theoretical or actual yield based on accurately measured milk fat and casein content. Many cheese makers desire to base payment for milk to dairy farmers on the yield of cheese. In small factories, however, accurate measurement of fat and casein content of milk by either chemical methods or infrared milk analysis is too time consuming and expensive. Therefore, an empirical test to predict cheese yield was developed which uses simple equipment (i.e., clinical centrifuge, analytical balance, and forced air oven) to carry out a miniature cheese-making, followed by a gravimetric measurement of dry weight yield. A linear regression of calculated theoretical versus dry weight yields for milks of known fat and casein content was fitted. A regression equation of y = 1.275x + 1.528, where y is theoretical yield and x is measured dry solids yield (r2 = 0.981), for Cheddar cheese was developed using milks with a range of theoretical yield from 7 to 11.8%. The standard deviation of the difference (SDD) between theoretical cheese yield and dry solids yield was 0.194 and the coefficient of variation (SDD/mean x 100) was 1.95% upon cross validation. For cheeses without a well-established theoretical cheese yield equation, the measured dry weight yields could be directly correlated to the observed yields in the factory; this would more accurately reflect the expected yield performance. Payments for milk based on these measurements would more accurately reflect quality and composition of the milk and the actual average recovery of fat and casein achieved under practical cheese making conditions.
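Applying the reported calibration is direct arithmetic: the mini-vat dry solids yield x (%) maps to theoretical Cheddar yield y (%) through the fitted line y = 1.275x + 1.528. The example input value below is illustrative, not from the paper.

```python
# The paper's fitted regression for Cheddar: theoretical yield (%) as a
# function of gravimetric dry-solids yield (%).

def theoretical_yield(dry_solids_yield_pct):
    return 1.275 * dry_solids_yield_pct + 1.528

# A hypothetical milk whose mini-vat dry-solids yield measures 6.5%:
print(round(theoretical_yield(6.5), 2))
```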
Tenon, Mathieu; Feuillère, Nicolas; Roller, Marc; Birtić, Simona
2017-04-15
Yucca GRAS-labelled saponins have been and are increasingly used in the food/feed, pharmaceutical and cosmetic industries. Techniques presently used for Yucca steroidal saponin quantification are either inaccurate and misleading, or accurate but time consuming and cost prohibitive. The method reported here addresses all of the above challenges. The HPLC/ELSD technique is an accurate and reliable method that yields results of appropriate repeatability and reproducibility, and it neither over- nor under-estimates levels of steroidal saponins. The HPLC/ELSD method does not require a pure standard of each and every saponin in order to quantify the group of steroidal saponins, and it is a time- and cost-effective technique suitable for routine industrial analyses. The HPLC/ELSD method yields a saponin fingerprint specific to the plant species; as the method is capable of distinguishing saponin profiles from taxonomically distant species, it can unravel plant adulteration issues.
Device and method for accurately measuring concentrations of airborne transuranic isotopes
McIsaac, C.V.; Killian, E.W.; Grafwallner, E.G.; Kynaston, R.L.; Johnson, L.O.; Randolph, P.D.
1996-09-03
An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector. 7 figs.
Phase analysis method for burst onset prediction
NASA Astrophysics Data System (ADS)
Stellino, Flavio; Mazzoni, Alberto; Storace, Marco
2017-02-01
The response of bursting neurons to fluctuating inputs is usually hard to predict, due to their strong nonlinearity. For the same reason, decoding the injected stimulus from the activity of a bursting neuron is generally difficult. In this paper we propose a method describing (for neuron models) a mechanism of phase coding relating the burst onsets with the phase profile of the input current. This relation suggests that burst onset may provide a way for postsynaptic neurons to track the input phase. Moreover, we define a method of phase decoding to solve the inverse problem and estimate the likelihood of burst onset given the input state. Both methods are presented here in a unified framework, describing a complete coding-decoding procedure. This procedure is tested by using different neuron models, stimulated with different inputs (stochastic, sinusoidal, up, and down states). The results obtained show the efficacy and broad range of application of the proposed methods. Possible applications range from the study of sensory information processing, in which phase-of-firing codes are known to play a crucial role, to clinical applications such as deep brain stimulation, helping to design stimuli in order to trigger or prevent neural bursting.
Ferrari, Clarissa; Uher, Rudolf; Bocchio-Chiavetto, Luisella; Riva, Marco Andrea; Pariante, Carmine M.
2016-01-01
Background: Increased levels of inflammation have been associated with a poorer response to antidepressants in several clinical samples, but these findings have had been limited by low reproducibility of biomarker assays across laboratories, difficulty in predicting response probability on an individual basis, and unclear molecular mechanisms. Methods: Here we measured absolute mRNA values (a reliable quantitation of number of molecules) of Macrophage Migration Inhibitory Factor and interleukin-1β in a previously published sample from a randomized controlled trial comparing escitalopram vs nortriptyline (GENDEP) as well as in an independent, naturalistic replication sample. We then used linear discriminant analysis to calculate mRNA values cutoffs that best discriminated between responders and nonresponders after 12 weeks of antidepressants. As Macrophage Migration Inhibitory Factor and interleukin-1β might be involved in different pathways, we constructed a protein-protein interaction network by the Search Tool for the Retrieval of Interacting Genes/Proteins. Results: We identified cutoff values for the absolute mRNA measures that accurately predicted response probability on an individual basis, with positive predictive values and specificity for nonresponders of 100% in both samples (negative predictive value=82% to 85%, sensitivity=52% to 61%). Using network analysis, we identified different clusters of targets for these 2 cytokines, with Macrophage Migration Inhibitory Factor interacting predominantly with pathways involved in neurogenesis, neuroplasticity, and cell proliferation, and interleukin-1β interacting predominantly with pathways involved in the inflammasome complex, oxidative stress, and neurodegeneration. Conclusion: We believe that these data provide a clinically suitable approach to the personalization of antidepressant therapy: patients who have absolute mRNA values above the suggested cutoffs could be directed toward earlier access to more
Schwoertzig, Eugénie; Millon, Alexandre
2016-01-01
Species occurrence data provide crucial information for biodiversity studies in the current context of global environmental changes. Such studies often rely on a limited number of occurrence data collected in the field and on pseudo-absences arbitrarily chosen within the study area, which reduces the value of these studies. To overcome this issue, we propose an alternative method of prospection using geo-located street view imagery (SVI). Following a standardised protocol of virtual prospection using both vertical (aerial photographs) and horizontal (SVI) perceptions, we have surveyed 1097 randomly selected cells across Spain (0.1 x 0.1 degree, i.e. 20% of Spain) for the presence of Arundo donax L. (Poaceae). In total we have detected A. donax in 345 cells, thus substantially expanding beyond the now two-centuries-old field-derived record, which described A. donax in only 216 cells. Among the field occurrence cells, 81.1% were confirmed by SVI prospection to be consistent with species presence. In addition, we recorded, by SVI prospection, 752 absences, i.e. cells where A. donax was considered absent. We have also compared the outcomes of climatic niche modeling based on SVI data against those based on field data. Using generalized linear models fitted with bioclimatic predictors, we have found SVI data to provide far more compelling results in terms of niche modeling than does field data as classically used in species distribution modeling (SDM). This original, cost- and time-effective method provides the means to accurately locate highly visible taxa, reinforce absence data, and predict species distribution without long and expensive in situ prospection. At this time, the majority of available SVI data is restricted to human-disturbed environments that have road networks. However, SVI is becoming increasingly available in natural areas, which means the technique has considerable potential to become an important factor in future biodiversity studies. PMID:26751565
StructBoost: Boosting Methods for Predicting Structured Output Variables.
Chunhua Shen; Guosheng Lin; van den Hengel, Anton
2014-10-01
Boosting is a method for learning a single accurate predictor by linearly combining a set of less accurate weak learners. Recently, structured learning has found many applications in computer vision. Inspired by structured support vector machines (SSVM), here we propose a new boosting algorithm for structured output prediction, which we refer to as StructBoost. StructBoost supports nonlinear structured learning by combining a set of weak structured learners. As SSVM generalizes SVM, our StructBoost generalizes standard boosting approaches such as AdaBoost or LPBoost to structured learning. The resulting optimization problem of StructBoost is more challenging than SSVM in the sense that it may involve exponentially many variables and constraints. In contrast, for SSVM one usually has an exponential number of constraints and a cutting-plane method is used. In order to solve StructBoost efficiently, we derive an equivalent 1-slack formulation and solve it using a combination of cutting planes and column generation. We show the versatility and usefulness of StructBoost on a range of problems such as optimizing the tree loss for hierarchical multi-class classification, optimizing the Pascal overlap criterion for robust visual tracking and learning conditional random field parameters for image segmentation.
Systems and methods for predicting materials properties
Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano
2007-11-06
Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.
A method to predict circulation control noise
NASA Astrophysics Data System (ADS)
Reger, Robert W.
Underwater vehicles suffer from reduced maneuverability with conventional lifting appendages due to the low velocity of operation. Circulation control offers a method to increase maneuverability independent of vehicle speed. However, with circulation control comes additional noise sources, which are not well understood. To better understand these noise sources, a modal-based prediction method is developed, potentially offering a quantitative connection between flow structures and far-field noise. This method involves estimation of the velocity field, surface pressure field, and far-field noise, using only non-time-resolved velocity fields and time-resolved probe measurements. Proper orthogonal decomposition, linear stochastic estimation and Kalman smoothing are employed to estimate time-resolved velocity fields. Poisson's equation is used to calculate time-resolved pressure fields from velocity. Curle's analogy is then used to propagate the surface pressure forces to the far field. This method is developed on a direct numerical simulation of a two-dimensional cylinder at a low Reynolds number (150). Since each of the fields to be estimated are also known from the simulation, a means of obtaining the error from using the methodology is provided. The velocity estimation and the simulated velocity match well when the simulated additive measurement noise is low. The pressure field suffers due to a small domain size; however, the surface pressure estimates fare much better. The far-field estimation contains similar frequency content with reduced magnitudes, attributed to the exclusion of the viscous forces in Curle's analogy. In the absence of added noise, the estimation procedure performs quite nicely for this model problem. The method is tested experimentally on a 650,000 chord-Reynolds-number flow over a 2-D, 20% thick, elliptic circulation control airfoil. Slot jet momentum coefficients of 0 and 0.10 are investigated. Particle image velocimetry, unsteady
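The first step of the chain, proper orthogonal decomposition, is commonly computed from the singular value decomposition of a snapshot matrix: each column is one velocity snapshot, and the left singular vectors are the energy-ranked spatial modes. This is a generic sketch of that step only (the full estimation chain adds linear stochastic estimation and Kalman smoothing); the data here are synthetic.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """snapshots: (n_points, n_snapshots) array. Returns the leading POD
    modes and the fraction of fluctuation energy they capture."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    u, s, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = (s[:n_modes] ** 2).sum() / (s ** 2).sum()
    return u[:, :n_modes], energy

# Synthetic rank-1 "flow" (one spatial shape with random amplitudes) plus
# weak noise: the first mode should capture nearly all the energy.
rng = np.random.default_rng(0)
base = np.outer(np.sin(np.linspace(0, np.pi, 50)), rng.standard_normal(20))
modes, energy = pod_modes(base + 0.01 * rng.standard_normal((50, 20)), n_modes=1)
print(energy > 0.95)
```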
Examination of a Rotorcraft Noise Prediction Method and Comparison to Flight Test Data
NASA Technical Reports Server (NTRS)
Boyd, D. Douglas, Jr.; Greenwood, Eric; Watts, Michael E.; Lopes, Leonard V.
2017-01-01
With a view that rotorcraft noise should be included in the preliminary design process, a relatively fast noise prediction method is examined in this paper. A comprehensive rotorcraft analysis is combined with a noise prediction method to compute several noise metrics of interest. These predictions are compared to flight test data. Results show that inclusion of only the main rotor noise will produce results that severely underpredict integrated metrics of interest. Inclusion of the tail rotor frequency content is essential for accurately predicting these integrated noise metrics.
Unstructured CFD and Noise Prediction Methods for Propulsion Airframe Aeroacoustics
NASA Technical Reports Server (NTRS)
Pao, S. Paul; Abdol-Hamid, Khaled S.; Campbell, Richard L.; Hunter, Craig A.; Massey, Steven J.; Elmiligui, Alaa A.
2006-01-01
Using unstructured mesh CFD methods for Propulsion Airframe Aeroacoustics (PAA) analysis has the distinct advantage of precise and fast computational mesh generation for complex propulsion and airframe integration arrangements that include engine inlet, exhaust nozzles, pylon, wing, flaps, and flap deployment mechanical parts. However, accurate solution values of shear layer velocity, temperature and turbulence are extremely important for evaluating the usually small noise differentials of potential applications to commercial transport aircraft propulsion integration. This paper describes a set of calibration computations for an isolated separate flow bypass ratio five engine nozzle model and the same nozzle system with a pylon. These configurations have measured data along with prior CFD solutions and noise predictions using a proven structured mesh method, which can be used for comparison to the unstructured mesh solutions obtained in this investigation. This numerical investigation utilized the TetrUSS system that includes a Navier-Stokes solver, the associated unstructured mesh generation tools, post-processing utilities, plus some recently added enhancements to the system. New features necessary for this study include the addition of two equation turbulence models to the USM3D code, an h-refinement utility to enhance mesh density in the shear mixing region, and a flow adaptive mesh redistribution method. In addition, a computational procedure was developed to optimize both solution accuracy and mesh economy. Noise predictions were completed using an unstructured mesh version of the JeT3D code.
An experiment in hurricane track prediction using parallel computing methods
NASA Technical Reports Server (NTRS)
Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.
1994-01-01
The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.
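The forecast step described above recovers the streamfunction ψ from the forecasted vorticity ζ by solving the Poisson equation ∇²ψ = ζ. As a minimal sketch, the snippet below solves the discrete problem by Jacobi iteration on a small grid with ψ = 0 boundaries; the paper instead uses a direct solver based on accelerated block cyclic reduction, which is what parallelizes well.

```python
import numpy as np

def solve_poisson(zeta, h=1.0, iters=5000):
    """Jacobi iteration for the 5-point discrete Poisson equation
    laplacian(psi) = zeta with zero Dirichlet boundary conditions."""
    psi = np.zeros_like(zeta)
    for _ in range(iters):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                  psi[1:-1, 2:] + psi[1:-1, :-2] -
                                  h * h * zeta[1:-1, 1:-1])
    return psi

zeta = np.zeros((17, 17))
zeta[8, 8] = 1.0               # a single point vortex in the middle
psi = solve_poisson(zeta)

# Check: the 5-point Laplacian of psi should reproduce zeta at interior points
lap = (psi[2:, 1:-1] + psi[:-2, 1:-1] + psi[1:-1, 2:] + psi[1:-1, :-2]
       - 4 * psi[1:-1, 1:-1])
print(abs(lap - zeta[1:-1, 1:-1]).max() < 1e-6)
```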
A Primer In Advanced Fatigue Life Prediction Methods
NASA Technical Reports Server (NTRS)
Halford, Gary R.
2000-01-01
Metal fatigue has plagued structural components for centuries, and it remains a critical durability issue in today's aerospace hardware. This is true despite vastly improved and advanced materials, increased mechanistic understanding, and development of accurate structural analysis and advanced fatigue life prediction tools. Each advance is quickly taken advantage of to produce safer, more reliable, more cost-effective, and better performing products. In other words, as the envelope is expanded, components are then designed to operate just as close to the newly expanded envelope as they were to the initial one. The problem is perennial. The economic importance of addressing structural durability issues early in the design process is emphasized. Tradeoffs with performance, cost, and legislated restrictions are pointed out. Several aspects of structural durability of advanced systems, advanced materials and advanced fatigue life prediction methods are presented. Specific items include the basic elements of durability analysis, conventional designs, barriers to be overcome for advanced systems, high-temperature life prediction for both creep-fatigue and thermomechanical fatigue, mean stress effects, multiaxial stress-strain states, and cumulative fatigue damage accumulation assessment.
Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling.
Boers, Stefan A; Hays, John P; Jansen, Ruud
2017-04-05
In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiota studies is currently very difficult due to the lack of a standardized 16S rRNA gene sequencing protocol. Here we report on a novel approach employing micelle PCR (micPCR) in combination with an internal calibrator that allows for standardization of microbiota profiles via their absolute abundances. The addition of an internal calibrator allows the researcher to express the resulting operational taxonomic units (OTUs) as a measure of 16S rRNA gene copies by correcting the number of sequences of each individual OTU in a sample for efficiency differences in the NGS process. Additionally, accurate quantification of OTUs obtained from negative extraction control samples allows for the subtraction of contaminating bacterial DNA derived from the laboratory environment or chemicals/reagents used. Using equimolar synthetic microbial community samples and low biomass clinical samples, we demonstrate that the calibrated micPCR/NGS methodology possesses much higher precision and a lower limit of detection than traditional PCR/NGS, resulting in more accurate microbiota profiles suitable for multi-study comparison.
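The calibrator logic reduces to simple arithmetic: a known number of spiked-in calibrator copies and the read count they yield give a copies-per-read factor, each OTU's reads are scaled by it, and the contaminating copies measured in the extraction blank are subtracted. All numbers below are hypothetical; the published protocol defines the actual procedure.

```python
# Sketch of the internal-calibrator correction from reads to absolute
# 16S gene copies, with blank-control subtraction (hypothetical numbers).

def absolute_abundance(otu_reads, calib_reads, calib_copies_spiked, blank_copies=0.0):
    copies_per_read = calib_copies_spiked / calib_reads
    return max(otu_reads * copies_per_read - blank_copies, 0.0)

# 4000 reads for an OTU; 1e5 spiked calibrator copies recovered as 2000 reads;
# 5e4 contaminating copies measured in the extraction blank:
print(absolute_abundance(4000, 2000, 1e5, blank_copies=5e4))
```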
gitter: a robust and accurate method for quantification of colony sizes from plate images.
Wagih, Omar; Parts, Leopold
2014-03-20
Colony-based screens that quantify the fitness of clonal populations on solid agar plates are perhaps the most important source of genome-scale functional information in microorganisms. The images of ordered arrays of mutants produced by such experiments can be difficult to process because of laboratory-specific plate features, morphed colonies, plate edges, noise, and other artifacts. Most of the tools developed to address this problem are optimized to handle a single setup and do not work out of the box in other settings. We present gitter, an image analysis tool for robust and accurate processing of images from colony-based screens. gitter works by first finding the grid of colonies from a preprocessed image and then locating the bounds of each colony separately. We show that gitter produces comparable colony sizes to other tools in simple cases but outperforms them by being able to handle a wider variety of screens and more accurately quantify colony sizes from difficult images. gitter is freely available as an R package from http://cran.r-project.org/web/packages/gitter under the LGPL. Tutorials and demos can be found at http://omarwagih.github.io/gitter.
ERIC Educational Resources Information Center
Lin, Jing-Wen
2016-01-01
Holding scientific conceptions and having the ability to accurately predict students' preconceptions are a prerequisite for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9 years old) students were…
ERIC Educational Resources Information Center
Beare, R. A.
2008-01-01
Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
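The spreadsheet period search described above amounts to phase-dispersion minimisation: fold the light curve at each trial period and pick the period that makes the folded curve smoothest. A minimal Python sketch of that idea is given below; the light-curve data and trial-period grid are synthetic stand-ins, not the paper's Excel workbook or asteroid data.

```python
import numpy as np

def phase_dispersion(times, mags, period, n_bins=10):
    """Sum of within-bin variances of the phase-folded light curve.

    Smaller values mean the folded curve is smoother, i.e. the trial
    period is a better match to the data.
    """
    phases = (times / period) % 1.0
    bins = np.minimum((phases * n_bins).astype(int), n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        in_bin = mags[bins == b]
        if in_bin.size > 1:
            total += in_bin.var() * in_bin.size
    return total

def find_period(times, mags, trial_periods):
    """Return the trial period that minimises the phase dispersion."""
    scores = [phase_dispersion(times, mags, p) for p in trial_periods]
    return trial_periods[int(np.argmin(scores))]

# Synthetic light curve with a 5.2 h period, sampled irregularly over 48 h
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 48.0, 300))                  # hours
m = np.sin(2 * np.pi * t / 5.2) + 0.05 * rng.standard_normal(300)
trials = np.linspace(3.0, 8.0, 2001)                      # 0.0025 h grid
best = find_period(t, m, trials)
```

On this synthetic sine-plus-noise curve the search recovers the 5.2 h input period; real asteroid light curves usually need finer grids and checks against period aliases.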
Brown, Sheldon T.; Tate, Janet P.; Kyriakides, Tassos C.; Kirkwood, Katherine A.; Holodniy, Mark; Goulet, Joseph L.; Angus, Brian J.; Cameron, D. William; Justice, Amy C.
2014-01-01
Objectives The VACS Index is highly predictive of all-cause mortality among HIV-infected individuals within the first few years of combination antiretroviral therapy (cART). However, its accuracy among highly treatment-experienced individuals and its responsiveness to treatment interventions have yet to be evaluated. We compared the accuracy and responsiveness of the VACS Index with a Restricted Index of age and traditional HIV biomarkers among patients enrolled in the OPTIMA study. Methods Using data from 324/339 (96%) patients in OPTIMA, we evaluated associations between indices and mortality using Kaplan-Meier estimates, proportional hazards models, Harrell's C-statistic and net reclassification improvement (NRI). We also determined the association between study interventions and risk scores over time, and between change in score and mortality. Results Both the Restricted Index (c = 0.70) and VACS Index (c = 0.74) predicted mortality from baseline, but discrimination was improved with the VACS Index (NRI = 23%). Change in score from baseline to 48 weeks was more strongly associated with survival for the VACS Index than the Restricted Index, with respective hazard ratios of 0.26 (95% CI 0.14–0.49) and 0.39 (95% CI 0.22–0.70) among the 25% most improved scores, and 2.08 (95% CI 1.27–3.38) and 1.51 (95% CI 0.90–2.53) for the 25% least improved scores. Conclusions The VACS Index predicts all-cause mortality more accurately among multi-drug-resistant, treatment-experienced individuals and is more responsive to changes in risk associated with treatment intervention than an index restricted to age and HIV biomarkers. The VACS Index holds promise as an intermediate outcome for intervention research. PMID:24667813
Accurate surface tension measurement of glass melts by the pendant drop method.
Chang, Yao-Yuan; Wu, Ming-Ya; Hung, Yi-Lin; Lin, Shi-Yow
2011-05-01
A pendant drop tensiometer, coupled with image digitization technology and a best-fitting algorithm, was built to accurately measure the surface tension of glass melts at high temperatures. More than one thousand edge-coordinate points were obtained for a pendant glass drop. These edge points were fitted with theoretical drop profiles derived from the Young-Laplace equation to determine the surface tension of the glass melt. The uncertainty of the surface tension measurements was investigated. The measurement uncertainty (σ) could be related to a newly defined factor of drop profile completeness (Fc): the larger Fc is, the smaller σ is. Experimental data showed that the uncertainty of the surface tension measurement when using this pendant drop tensiometer could be within ±3 mN/m for glass melts.
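The fitting step above compares measured edge points with theoretical profiles from the Young-Laplace equation. The sketch below integrates the standard dimensionless axisymmetric shape equations along arc length with a classical RK4 stepper; this is a generic textbook form, not the authors' code, and the apex-limit handling is the usual removable-singularity trick.

```python
import numpy as np

def drop_profile(beta, s_max, h=1e-3):
    """Integrate the dimensionless Young-Laplace shape equations.

    Lengths are in units of the apex radius of curvature; beta is the
    Bond number (density difference * g * apex_radius**2 / surface tension).

      dx/ds = cos(phi),  dz/ds = sin(phi),
      dphi/ds = 2 + beta*z - sin(phi)/x   (apex limit: dphi/ds = (2 + beta*z)/2)

    Returns arrays of (x, z) profile coordinates.
    """
    def rhs(y):
        x, z, phi = y
        if x < 1e-9:                           # apex: sin(phi)/x -> dphi/ds
            dphi = (2.0 + beta * z) / 2.0
        else:
            dphi = 2.0 + beta * z - np.sin(phi) / x
        return np.array([np.cos(phi), np.sin(phi), dphi])

    y = np.array([0.0, 0.0, 0.0])              # start at the drop apex
    xs, zs = [0.0], [0.0]
    for _ in range(int(round(s_max / h))):     # classical RK4 steps
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(y[0])
        zs.append(y[1])
    return np.array(xs), np.array(zs)

# Sanity check: with beta = 0 (no gravity) the profile is a unit sphere,
# so a quarter arc (s = pi/2) should end near x = 1, z = 1.
x, z = drop_profile(beta=0.0, s_max=np.pi / 2)
```

A full tensiometer would wrap this integrator in a least-squares loop over the apex radius and beta to match the digitized edge points; the zero-gravity sphere serves as a convenient correctness check.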
Osteoporosis risk prediction using machine learning and conventional methods.
Kim, Sung Kean; Yoo, Tae Keun; Oh, Ein; Kim, Deok Won
2013-01-01
A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women, and compared them with a conventional clinical decision tool, the osteoporosis self-assessment tool (OST). We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Surveys (KNHANES V-1). The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests (RF), artificial neural networks (ANN), and logistic regression (LR), using various predictors associated with low bone density. The learning models were compared with the OST. SVM had a significantly better area under the curve (AUC) of the receiver operating characteristic (ROC) than ANN, LR, and OST. Validation on the test set showed that SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0%. To our knowledge, this is the first comparison of the performance of machine learning and conventional methods for osteoporosis prediction using population-based epidemiological data. The machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
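For readers unfamiliar with the baseline, the OST score is the published formula 0.2 × (weight in kg − age in years), truncated to an integer, and models are compared by ROC AUC. The sketch below evaluates OST on a synthetic cohort; the cohort, risk coefficients, and sample size are illustrative assumptions, not the KNHANES records.

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive case scores higher than a random negative case (ties = 0.5)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

def ost_score(age, weight_kg):
    """OST index: 0.2 * (weight in kg - age in years), truncated to integer."""
    return np.trunc(0.2 * (weight_kg - age))

# Synthetic cohort: risk rises with age and falls with weight (illustrative)
rng = np.random.default_rng(1)
age = rng.uniform(50.0, 85.0, 500)
weight = rng.normal(62.0, 10.0, 500)
logit = 0.08 * (age - 65.0) - 0.05 * (weight - 62.0)
y = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Lower OST means higher risk, so negate the score before computing AUC
auc = roc_auc(y, -ost_score(age, weight))
```

On a real data set the same `roc_auc` helper would be applied to the SVM, RF, ANN, and LR decision scores to reproduce the kind of comparison the abstract reports.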
Essaghir, Ahmed; Toffalini, Federica; Knoops, Laurent; Kallin, Anders; van Helden, Jacques; Demoulin, Jean-Baptiste
2010-01-01
Deciphering transcription factor networks from microarray data remains difficult. This study presents a simple method to infer the regulation of transcription factors from microarray data based on well-characterized target genes. We generated a catalog containing transcription factors associated with 2720 target genes and 6401 experimentally validated regulations. When it was available, a distinction between transcriptional activation and inhibition was included for each regulation. Next, we built a tool (www.tfacts.org) that compares submitted gene lists with target genes in the catalog to detect regulated transcription factors. TFactS was validated with published lists of regulated genes in various models and compared to tools based on in silico promoter analysis. We next analyzed the NCI60 cancer microarray data set and showed the regulation of SOX10, MITF and JUN in melanomas. We then performed microarray experiments comparing the gene expression response of human fibroblasts stimulated by different growth factors. TFactS predicted the specific activation of signal transducer and activator of transcription (STAT) factors by PDGF-BB, which was confirmed experimentally. Our results show that the expression levels of transcription factor target genes constitute a robust signature for transcription factor regulation, and can be efficiently used for microarray data mining. PMID:20215436
Prediction Methods in Solar Sunspots Cycles
Ng, Kim Kwee
2016-01-01
An understanding of Ohl's precursor method, which is used to predict upcoming sunspot activity, is presented by employing a simplified movable divided-blocks diagram. Using a new approach, the total number of sunspots in a solar cycle and the maximum averaged monthly sunspot number Rz(max) are both shown to be statistically related to the geomagnetic activity index in the prior solar cycle. The correlation factors are significant; they are found to be 0.91 ± 0.13 and 0.85 ± 0.17, respectively. The projected result is consistent with the current observation of solar cycle 24, which appears to have reached an Rz(max) of at least 78.7 ± 11.7 in March 2014. Moreover, in a statistical study of time-delayed solar events, the average time between the peak in the monthly geomagnetic index and the peak in the monthly sunspot numbers in the succeeding ascending phase of sunspot activity is found to be 57.6 ± 3.1 months. The statistically determined time-delayed interval confirms earlier observational results by others that the Sun's electromagnetic dipole is moving toward the Sun's equator during a solar cycle. PMID:26868269
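The "correlation factors" quoted above are Pearson coefficients between a precursor-cycle geomagnetic index and the following cycle's sunspot numbers. A minimal sketch of that computation follows; the precursor pairs below are hypothetical illustrative numbers, not the paper's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm = x - x.mean()
    ym = y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))

# Hypothetical precursor pairs: geomagnetic-index minimum of cycle n
# against Rz(max) of cycle n+1 (illustrative values only).
aa_min = [12.0, 15.3, 18.1, 14.2, 20.5, 16.8, 10.9, 19.4]
rz_max = [105.0, 140.0, 165.0, 120.0, 185.0, 150.0, 90.0, 175.0]

r = pearson_r(aa_min, rz_max)
```

A precursor forecast then amounts to fitting the regression line through these pairs and reading off Rz(max) for the observed geomagnetic minimum of the current cycle.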
Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan
2006-01-01
Application of the newly emerged space-time conservation element solution element (CESE) method to the compressible Navier-Stokes equations is studied. In contrast to Euler equations solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigations and validations. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the standpoint of acoustic wave propagation. Validations of the numerical solutions are performed by comparing with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.
Kolin, David L.; Ronis, David; Wiseman, Paul W.
2006-01-01
We present the theory and application of reciprocal space image correlation spectroscopy (kICS). This technique measures the number density, diffusion coefficient, and velocity of fluorescently labeled macromolecules in a cell membrane imaged on a confocal, two-photon, or total internal reflection fluorescence microscope. In contrast to r-space correlation techniques, we show kICS can recover accurate dynamics even in the presence of complex fluorophore photobleaching and/or “blinking”. Furthermore, these quantities can be calculated without nonlinear curve fitting, or any knowledge of the beam radius of the exciting laser. The number densities calculated by kICS are less sensitive to spatial inhomogeneity of the fluorophore distribution than densities measured using image correlation spectroscopy. We use simulations as a proof-of-principle to show that number densities and transport coefficients can be extracted using this technique. We present calibration measurements with fluorescent microspheres imaged on a confocal microscope, which recover Stokes-Einstein diffusion coefficients, and flow velocities that agree with single particle tracking measurements. We also show the application of kICS to measurements of the transport dynamics of α5-integrin/enhanced green fluorescent protein constructs in a transfected CHO cell imaged on a total internal reflection fluorescence microscope using charge-coupled device area detection. PMID:16861272
An Improved Method for Accurate and Rapid Measurement of Flight Performance in Drosophila
Babcock, Daniel T.; Ganetzky, Barry
2014-01-01
Drosophila has proven to be a useful model system for analysis of behavior, including flight. The initial flight tester involved dropping flies into an oil-coated graduated cylinder; landing height provided a measure of flight performance by assessing how far flies will fall before producing enough thrust to make contact with the wall of the cylinder. Here we describe an updated version of the flight tester with four major improvements. First, we added a "drop tube" to ensure that all flies enter the flight cylinder at a similar velocity between trials, eliminating variability between users. Second, we replaced the oil coating with removable plastic sheets coated in Tangle-Trap, an adhesive designed to capture live insects. Third, we use a longer cylinder to enable more accurate discrimination of flight ability. Fourth, we use a digital camera and imaging software to automate the scoring of flight performance. These improvements allow for the rapid, quantitative assessment of flight behavior, useful for large datasets and large-scale genetic screens. PMID:24561810
NASA Astrophysics Data System (ADS)
Serag, Ahmed; Blesa, Manuel; Moore, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Wilkinson, A. G.; MacNaught, Gillian; Semple, Scott I.; Boardman, James P.
2016-03-01
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases ‘uniformly’ distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods, providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
ERIC Educational Resources Information Center
Hughes, Stephen W.
2005-01-01
A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
Calculation of accurate channel spacing of an AWG optical demultiplexer applying proportional method
NASA Astrophysics Data System (ADS)
Seyringer, D.; Hodzic, E.
2015-06-01
We present the proportional method to correct the channel spacing between the transmitted output channels of an AWG. The developed proportional method was applied to a 64-channel, 50 GHz AWG, and the achieved results confirm a very good correlation between the designed channel spacing (50 GHz) and the channel spacing calculated from simulated AWG transmission characteristics.
A second-order accurate kinetic-theory-based method for inviscid compressible flows
NASA Technical Reports Server (NTRS)
Deshpande, Suresh M.
1986-01-01
An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent; this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of velocity estimation for models with large perturbations, while guaranteeing estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed time all demonstrate the validity of the proposed DDM.
Wu, Anan; Xu, Xin
2012-06-15
We present a method, named DCMB, for the calculations of large molecules. It is a combination of a parallel divide-and-conquer (DC) method and a mixed-basis (MB) set scheme. In this approach, atomic forces, total energy and vibrational frequencies are obtained from a series of MB calculations, which are derived from the target system utilizing the DC concept. Unlike the fragmentation based methods, all DCMB calculations are performed over the whole target system and no artificial caps are introduced so that it is particularly useful for charged and/or delocalized systems. By comparing the DCMB results with those from the conventional method, we demonstrate that DCMB is capable of providing accurate prediction of molecular geometries, total energies, and vibrational frequencies of molecules of general interest. We also demonstrate that the high efficiency of the parallel DCMB code holds the promise for a routine geometry optimization of large complex systems.
Methods for Applying Accurate Digital PCR Analysis on Low Copy DNA Samples
Whale, Alexandra S.; Cowen, Simon; Foy, Carole A.; Huggett, Jim F.
2013-01-01
Digital PCR (dPCR) is a highly accurate molecular approach, capable of precise measurements, offering a number of unique opportunities. However, in its current format dPCR can be limited by the amount of sample that can be analysed and consequently additional considerations such as performing multiplex reactions or pre-amplification can be considered. This study investigated the impact of duplexing and pre-amplification on dPCR analysis by using three different assays targeting a model template (a portion of the Arabidopsis thaliana alcohol dehydrogenase gene). We also investigated the impact of different template types (linearised plasmid clone and more complex genomic DNA) on measurement precision using dPCR. We were able to demonstrate that duplex dPCR can provide a more precise measurement than uniplex dPCR, while applying pre-amplification or varying template type can significantly decrease the precision of dPCR. Furthermore, we also demonstrate that the pre-amplification step can introduce measurement bias that is not consistent between experiments for a sample or assay and so could not be compensated for during the analysis of this data set. We also describe a model for estimating the prevalence of molecular dropout and identify this as a source of dPCR imprecision. Our data have demonstrated that the precision afforded by dPCR at low sample concentration can exceed that of the same template post pre-amplification thereby negating the need for this additional step. Our findings also highlight the technical differences between different templates types containing the same sequence that must be considered if plasmid DNA is to be used to assess or control for more complex templates like genomic DNA. PMID:23472156
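The precision arguments in this study rest on the Poisson statistics underlying all dPCR quantification: with random partitioning, the fraction of negative partitions fixes the mean copies per partition. A short sketch of that calculation with hypothetical partition counts:

```python
import math

def copies_per_partition(n_total, n_positive):
    """Poisson-corrected mean copies per partition.

    With random partitioning the fraction of negative partitions is
    exp(-lambda), so lambda = -ln(negatives / total).
    """
    negatives = n_total - n_positive
    if negatives <= 0:
        raise ValueError("all partitions positive; sample too concentrated")
    return -math.log(negatives / n_total)

def copies_per_microlitre(n_total, n_positive, partition_volume_nl):
    """Concentration in copies/ul given the partition volume in nl."""
    lam = copies_per_partition(n_total, n_positive)
    return lam / (partition_volume_nl * 1e-3)   # nl -> ul

# Hypothetical run: 20,000 partitions of 0.85 nl each, 5,000 positive
lam = copies_per_partition(20000, 5000)
conc = copies_per_microlitre(20000, 5000, 0.85)
```

The low-copy regime the paper studies is the opposite failure mode: when very few partitions are positive, the relative uncertainty of `lam` grows, which is why duplexing and pre-amplification are considered as workarounds.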
Is photometry an accurate and reliable method to assess boar semen concentration?
Camus, A; Camugli, S; Lévêque, C; Schmitt, E; Staub, C
2011-02-01
Sperm concentration assessment is a key point to ensure an appropriate sperm number per dose in species subjected to artificial insemination (AI). The aim of the present study was to evaluate the accuracy and reliability of two commercially available photometers, AccuCell™ and AccuRead™, pre-calibrated for boar semen, in comparison to UltiMate™ boar version 12.3D, NucleoCounter SP100 and a Thoma hemacytometer. For each type of instrument, concentration was measured on 34 boar semen samples in quadruplicate, and agreement between measurements and instruments was evaluated. Accuracy for both photometers was illustrated by the mean percentage difference from the general mean: -0.6% for AccuCell™ and 0.5% for AccuRead™; no significant differences were found between instruments or among means of measurement across all equipment. Repeatability was 1.8% for AccuCell™ and 3.2% for AccuRead™. Low differences were observed between instruments (confidence interval 3%) except when the hemacytometer was used as the reference. Even though the hemacytometer is considered worldwide as the gold standard, it was the most variable instrument (confidence interval 7.1%). The conclusion is that routine photometric measurement of raw semen concentration is reliable, accurate and precise using AccuRead™ or AccuCell™. There are multiple steps in semen processing that can induce sperm loss and therefore increase differences between theoretical and real sperm numbers in doses. Potential biases that depend on the workflow but not on the initial photometric measure of semen concentration are discussed.
Accurate Hf isotope determinations of complex zircons using the "laser ablation split stream" method
NASA Astrophysics Data System (ADS)
Fisher, Christopher M.; Vervoort, Jeffery D.; DuFrane, S. Andrew
2014-01-01
The "laser ablation split stream" (LASS) technique is a powerful tool for mineral-scale isotope analyses and in particular, for concurrent determination of age and Hf isotope composition of zircon. Because LASS utilizes two independent mass spectrometers, a large range of masses can be measured during a single ablation, and thus, the same sample volume can be analyzed for multiple geochemical systems. This paper describes a simple analytical setup using a laser ablation system coupled to a single-collector (for U-Pb age determination) and a multicollector (for Hf isotope analyses) inductively coupled plasma mass spectrometer (MC-ICPMS). The ability of the concurrent LASS Hf + age technique to extract meaningful Hf isotope compositions from isotopically zoned zircon is demonstrated using zircons from two Proterozoic gneisses from northern Idaho, USA. These samples illustrate the potential problems associated with inadvertently sampling multiple age and Hf components in zircons, as well as the potential of LASS to recover meaningful Hf isotope compositions. We suggest that such inadvertent sampling of differing age and Hf components can be a significant cause of excess scatter in Hf isotope analyses and demonstrate that the LASS approach offers a robust solution to these issues. The veracity of the approach is demonstrated by accurate analyses of 10 reference zircons with well-characterized age and Hf isotopic composition, using laser spot diameters of 30 and 40 µm. In order to expand the database of high-precision Lu-Hf isotope analyses of reference zircons, we present 27 new isotope dilution-MC-ICPMS Lu-Hf isotope measurements of five U-Pb zircon standards: FC1, Temora, R33, QGNG, and 91500.
A flux monitoring method for easy and accurate flow rate measurement in pressure-driven flows.
Siria, Alessandro; Biance, Anne-Laure; Ybert, Christophe; Bocquet, Lydéric
2012-03-07
We propose a low-cost and versatile method to measure flow rate in microfluidic channels under pressure-driven flows, thereby providing a simple characterization of the hydrodynamic permeability of the system. The technique is inspired by the current monitoring method usually employed to characterize electro-osmotic flows, and makes use of the measurement of the time-dependent electric resistance inside the channel associated with a moving salt front. We have successfully tested the method in a micrometer-size channel, as well as in a complex microfluidic channel with a varying cross-section, demonstrating its ability to detect internal shape variations.
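The measurement reduces to timing a resistance ramp: while the salt front traverses the channel, the resistance moves between two plateaus, and the flow rate is the channel volume divided by the transit time. The sketch below assumes a monotonic ramp and reads the transit time from the 5%-95% crossings; the geometry and numbers are hypothetical, not from the paper.

```python
import numpy as np

def flow_rate_from_resistance(t, R, channel_volume_nl):
    """Flow rate from the resistance ramp produced by a moving salt front.

    Assumes R varies monotonically while the front traverses the channel.
    The 5%-95% crossing times are extrapolated to the full transition,
    and Q = channel volume / transit time (nl per unit of t).
    """
    frac = (R - R[0]) / (R[-1] - R[0])          # normalised transition, 0 -> 1
    t_lo = np.interp(0.05, frac, t)
    t_hi = np.interp(0.95, frac, t)
    transit = (t_hi - t_lo) / 0.90              # extrapolate to 0%-100%
    return channel_volume_nl / transit

# Synthetic trace: a linear 100 s ramp for a 50 nl channel, so Q = 0.5 nl/s
t = np.linspace(0.0, 100.0, 101)                # seconds
R = 1.0e6 + 1.0e4 * t                           # ohms, rising with the front
q = flow_rate_from_resistance(t, R, 50.0)
```

In a real trace the plateaus before and after the transition would be trimmed first, since `np.interp` requires the normalised signal to be increasing over the fitted window.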
González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda
2012-11-01
It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).
Three-Signal Method for Accurate Measurements of Depolarization Ratio with Lidar
NASA Technical Reports Server (NTRS)
Reichardt, Jens; Baumgart, Rudolf; McGee, Thomas J.
2003-01-01
A method is presented that permits the determination of atmospheric depolarization-ratio profiles from three elastic-backscatter lidar signals with different sensitivity to the state of polarization of the backscattered light. The three-signal method is insensitive to experimental errors and does not require calibration of the measurement, which could cause large systematic uncertainties of the results, as is the case in the lidar technique conventionally used for the observation of depolarization ratios.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
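The scheme can be illustrated by fitting a sum of exponentials with geometrically spaced exponents to a smooth algebraic function by linear least squares. Here 1/sqrt(1+u^2) is only a stand-in for the kernel's algebraic part, and unlike the fully automated method described above, the base exponent and geometric ratio are fixed rather than optimised.

```python
import numpy as np

# Stand-in for an algebraic kernel term (the actual kernel functions differ)
def f(u):
    return 1.0 / np.sqrt(1.0 + u**2)

# Geometric sequence of exponents: b * r**i for i = 0 .. n-1
n, b, r = 8, 0.05, 2.0
exponents = b * r ** np.arange(n)

# Sample the target and solve for the linear coefficients by least squares
u = np.linspace(0.0, 20.0, 400)
A = np.exp(-np.outer(u, exponents))        # design matrix, one column per term
coeffs, *_ = np.linalg.lstsq(A, f(u), rcond=None)

approx = A @ coeffs                        # sum_i c_i * exp(-b * r**i * u)
max_err = np.abs(approx - f(u)).max()
```

Once the algebraic part is in this form, each term integrates against the remaining elementary factors in closed form, which is the point of the approximation.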
Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori
2015-05-07
The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of 〈UUV〉/2 (where 〈UUV〉 is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term, mainly reflecting the excluded volume effect. Since 〈UUV〉 can readily be computed through an MD simulation of the system composed of the solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute with the corresponding coefficients determined by the energy representation (ER) method. Since the MA finishes the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, its use provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with a substantial reduction of the computational load.
Highly effective and accurate weak point monitoring method for advanced design rule (1x nm) devices
NASA Astrophysics Data System (ADS)
Ahn, Jeongho; Seong, ShiJin; Yoon, Minjung; Park, Il-Suk; Kim, HyungSeop; Ihm, Dongchul; Chin, Soobok; Sivaraman, Gangadharan; Li, Mingwei; Babulnath, Raghav; Lee, Chang Ho; Kurada, Satya; Brown, Christine; Galani, Rajiv; Kim, JaeHyun
2014-04-01
Historically, when we used to manufacture semiconductor devices at 45 nm or larger design rules, IC manufacturing yield was mainly determined by global random variations, and therefore the chip manufacturers / manufacturing teams were mainly responsible for yield improvement. With the introduction of sub-45 nm semiconductor technologies, yield started to be dominated by systematic variations, primarily centered on resolution problems, copper/low-k interconnects and CMP. These local systematic variations, which have become decisively greater than global random variations, are design-dependent [1, 2], and therefore designers now share the responsibility of increasing yield with manufacturers / manufacturing teams. A widening manufacturing gap has led to a dramatic increase in design rules that are either too restrictive or do not guarantee a litho/etch hotspot-free design. The semiconductor industry is currently limited to 193 nm scanners, and no relief is expected from the equipment side to prevent or eliminate these systematic hotspots. Hence, many design houses have come up with innovative design products that check hotspots with model-based lithography checks to validate design manufacturability, which also account for the complex two-dimensional effects that stem from aggressive scaling of 193 nm lithography. Most of these hotspots (a.k.a. weak points) are especially seen on Back End of the Line (BEOL) process levels such as Mx ADI, Mx Etch and Mx CMP. Inspecting some of these BEOL levels can be extremely challenging, as there are many wafer noises or nuisances that can hinder an inspector's ability to detect and monitor the defects or weak points of interest. In this work we have attempted to accurately inspect the weak points using a novel broadband plasma optical inspection approach that enhances the defect signal from patterns of interest (POI) and precisely suppresses surrounding wafer noises. This new approach is a paradigm shift in wafer inspection
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
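As a concrete instance of the class of schemes the report analyzes, here is Kutta's classic third-order Runge-Kutta method, checked for its design order on an illustrative linear test problem (not the magneto-fluid equations).

```python
import math

def rk3_step(f, t, y, h):
    """One step of Kutta's third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h * (k1 + 4 * k2 + k3) / 6

def integrate(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n RK3 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

# Illustrative test problem: y' = -y, exact solution e^{-t}.
f = lambda t, y: -y
err_h = abs(integrate(f, 0.0, 1.0, 1.0, 100) - math.exp(-1.0))
err_h2 = abs(integrate(f, 0.0, 1.0, 1.0, 200) - math.exp(-1.0))
order = math.log(err_h / err_h2, 2)  # observed convergence order
```

Halving the step size should reduce the global error by a factor near 8, confirming third-order accuracy.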
An Accurate Method for Free Vibration Analysis of Structures with Application to Plates
NASA Astrophysics Data System (ADS)
KEVORKIAN, S.; PASCAL, M.
2001-10-01
In this work, the continuous element method which has been used as an alternative to the finite element method of vibration analysis of frames is applied to more general structures like 3-D continuum and rectangular plates. The method is based on the concept of the so-called impedance matrix giving in the frequency domain, the linear relation between the generalized displacements of the boundaries and the generalized forces exerted on these boundaries. For a 3-D continuum, the concept of impedance matrix is introduced assuming a particular kind of boundary conditions. For rectangular plates, this new development leads to the solution of vibration problems for boundary conditions other than the simply supported ones.
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
NASA Astrophysics Data System (ADS)
Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.
2015-08-01
We describe a new iterative method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates with continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which is flexible in adapting to any form of object image, has a high measurement accuracy along with a low computational complexity, owing to the use of a maximum-likelihood procedure to obtain the best fit rather than a least-squares method with the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network as a tool for analysing observations and detecting faint moving objects in frames.
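The subpixel idea can be illustrated with a 1-D toy model (not the COLITEC implementation): pixel values are integrals of a Gaussian image profile over pixel boundaries, and the centre is recovered to subpixel precision by fitting. The PSF width is assumed known, and the fit below is a simple grid search rather than the paper's maximum-likelihood iteration.

```python
import math

def gauss_pixel(i, mu, sigma):
    """Fraction of a unit-flux Gaussian profile falling into pixel [i, i+1)."""
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(i + 1.0) - cdf(float(i))

def fit_center(pixels, sigma, lo, hi, step=1e-3):
    """Grid-search the subpixel centre that best reproduces the pixel sums."""
    best_mu, best_cost = lo, float("inf")
    mu = lo
    while mu <= hi:
        cost = sum((p - gauss_pixel(i, mu, sigma)) ** 2
                   for i, p in enumerate(pixels))
        if cost < best_cost:
            best_mu, best_cost = mu, cost
        mu += step
    return best_mu

# Illustrative object: centre at 7.3 pixels, PSF sigma of 1.2 pixels.
true_mu, sigma = 7.3, 1.2
pixels = [gauss_pixel(i, true_mu, sigma) for i in range(15)]
mu_hat = fit_center(pixels, sigma, 6.0, 9.0)
```

Even though each pixel only reports an integrated count, the continuous model parameter is recovered far below the pixel scale, which is the essence of the subpixel Gaussian approach.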
Nonlinear Methods Applied to Atmospheric Prediction
2007-11-02
states being a minimum and the spatial correlation being a maximum to determine the best analogs. They also started exploring the value of finding...the best analog of each of the trajectories in the ensemble of numerical predictions from the start of the prediction to the verification time. These...Benard convection, with the fluid layer heated below and cooled above, cellular convection occurs with cells of width very nearly equal to their
Direct Coupling Method for Time-Accurate Solution of Incompressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Soh, Woo Y.
1992-01-01
A noniterative finite difference numerical method is presented for the solution of the incompressible Navier-Stokes equations with second order accuracy in time and space. Explicit treatment of convection and diffusion terms and implicit treatment of the pressure gradient give a single pressure Poisson equation when the discretized momentum and continuity equations are combined. A pressure boundary condition is not needed on solid boundaries in the staggered mesh system. The solution of the pressure Poisson equation is obtained directly by Gaussian elimination. This method is tested on flow problems in a driven cavity and a curved duct.
NASA Astrophysics Data System (ADS)
Liu, Qianlong
2011-09-01
Prosperetti's seminal Physalis method, an Immersed Boundary/spectral method, had been used extensively to investigate fluid flows with suspended solid particles. Its underlying idea of creating a cage and using a spectral general analytical solution around a discontinuity in a surrounding field as a computational mechanism to enable the accommodation of physical and geometric discontinuities is a general concept, and can be applied to other problems of importance to physics, mechanics, and chemistry. In this paper we provide a foundation for the application of this approach to the determination of the distribution of electric charge in heterogeneous mixtures of dielectrics and conductors. The proposed Physalis method is remarkably accurate and efficient. In the method, a spectral analytical solution is used to tackle the discontinuity and thus the discontinuous boundary conditions at the interface of two media are satisfied exactly. Owing to the hybrid finite difference and spectral schemes, the method is spectrally accurate if the modes are not sufficiently resolved, while higher than second-order accurate if the modes are sufficiently resolved, for the solved potential field. Because of the features of the analytical solutions, the derivative quantities of importance, such as electric field, charge distribution, and force, have the same order of accuracy as the solved potential field during postprocessing. This is an important advantage of the Physalis method over other numerical methods involving interpolation, differentiation, and integration during postprocessing, which may significantly degrade the accuracy of the derivative quantities of importance. The analytical solutions enable the user to use relatively few mesh points to accurately represent the regions of discontinuity. In addition, the spectral convergence and a linear relationship between the cost of computer memory/computation and particle numbers results in a very efficient method. In the present
Rorick, Amber; Michael, Matthew A; Yang, Liu; Zhang, Yong
2015-09-03
Oxygen is an important element in most biologically significant molecules, and experimental solid-state (17)O NMR studies have provided numerous useful structural probes to study these systems. However, computational predictions of solid-state (17)O NMR chemical shift tensor properties are still challenging in many cases and, in particular, each of the prior computational works is basically limited to one type of oxygen-containing system. This work provides the first systematic study of the effects of geometry refinement, method, and basis sets for metal and nonmetal elements in both geometry optimization and NMR property calculations of some biologically relevant oxygen-containing compounds with a good variety of X-O bonding groups (X = H, C, N, P, and metal). The experimental range studied spans 1455 ppm, a major part of the reported (17)O NMR chemical shifts in organic and organometallic compounds. A number of computational factors bearing on relatively general and accurate predictions of (17)O NMR chemical shifts were studied to provide helpful and detailed suggestions for future work. For the studied kinds of oxygen-containing compounds, the best computational approach results in a theory-versus-experiment correlation coefficient (R(2)) value of 0.9880 and a mean absolute deviation of 13 ppm (1.9% of the experimental range) for isotropic NMR shifts and an R(2) value of 0.9926 for all shift-tensor properties. These results shall facilitate future computational studies of (17)O NMR chemical shifts in many biologically relevant systems, and the high accuracy may also help the refinement and determination of active-site structures of some oxygen-containing substrate-bound proteins.
A novel method to accurately locate and count large numbers of steps by photobleaching
Tsekouras, Konstantinos; Custer, Thomas C.; Jashnsaz, Hossein; Walter, Nils G.; Pressé, Steve
2016-01-01
Photobleaching event counting is a single-molecule fluorescence technique that is increasingly being used to determine the stoichiometry of protein and RNA complexes composed of many subunits in vivo as well as in vitro. By tagging protein or RNA subunits with fluorophores, activating them, and subsequently observing as the fluorophores photobleach, one obtains information on the number of subunits in a complex. The noise properties in a photobleaching time trace depend on the number of active fluorescent subunits. Thus, as fluorophores stochastically photobleach, noise properties of the time trace change stochastically, and these varying noise properties have created a challenge in identifying photobleaching steps in a time trace. Although photobleaching steps are often detected by eye, this method only works for high individual fluorophore emission signal-to-noise ratios and small numbers of fluorophores. With filtering methods or currently available algorithms, it is possible to reliably identify photobleaching steps for up to 20–30 fluorophores and signal-to-noise ratios down to ∼1. Here we present a new Bayesian method of counting steps in photobleaching time traces that takes into account stochastic noise variation in addition to complications such as overlapping photobleaching events that may arise from fluorophore interactions, as well as on-off blinking. Our method is capable of detecting ≥50 photobleaching steps even for signal-to-noise ratios as low as 0.1, can find up to ≥500 steps for more favorable noise profiles, and is computationally inexpensive. PMID:27654946
Highly Accurate Beam Torsion Solutions Using the p-Version Finite Element Method
NASA Technical Reports Server (NTRS)
Smith, James P.
1996-01-01
A new treatment of the classical beam torsion boundary value problem is applied. Using the p-version finite element method with shape functions based on Legendre polynomials, torsion solutions for generic cross-sections comprised of isotropic materials are developed. Element shape functions for quadrilateral and triangular elements are discussed, and numerical examples are provided.
Technology Transfer Automated Retrieval System (TEKTRAN)
The rapid advances in analytical chromatography equipment have made the reliable and reproducible measurement of a wide range of plant chemical components possible. Full chemical characterization of a given plant material is possible with the new mass spectrometers currently available. New methods a...
A Robust Method of Vehicle Stability Accurate Measurement Using GPS and INS
NASA Astrophysics Data System (ADS)
Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu
2015-12-01
With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. Integration of the Global Positioning System (GPS) and an Inertial Navigation System (INS) is a very practical way to obtain high-precision measurement data; usually, a Kalman filter is used to fuse the data from GPS and INS. In this paper, a robust method is used to measure vehicle sideslip angle and yaw rate, which are two important parameters for vehicle stability. First, a four-wheel vehicle dynamic model is introduced, based on sideslip angle and yaw rate. Second, a double-level Kalman filter is established to fuse the data from the Global Positioning System and the Inertial Navigation System. Then, the method is simulated on a sample vehicle, using CarSim software to test the sideslip angle and yaw rate. Finally, a real experiment is made to verify the advantage of this approach. The experimental results showed the merits of this method of measurement and estimation, and the approach can meet the design requirements of the vehicle stability controller.
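The GPS/INS fusion idea can be illustrated with a minimal one-dimensional Kalman filter: an accelerometer with bias (INS-style) drives the prediction, and noisy but unbiased position fixes (GPS-style) drive the update. This is a sketch only; the filter structure, noise levels, bias, and sample rate are all assumed here, not taken from the paper's double-level filter.

```python
import random

def kalman_gps_ins(accels, gps, dt, q=0.01, r_gps=1.0):
    """Minimal 1-D Kalman filter: accelerometer-driven prediction,
    GPS-position update. State is [position, velocity]."""
    x = [0.0, 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for a, z in zip(accels, gps):
        # Predict with F = [[1, dt], [0, 1]] and acceleration input a.
        x = [x[0] + dt * x[1] + 0.5 * dt * dt * a, x[1] + dt * a]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with the GPS position fix z (H = [1, 0]).
        s = P[0][0] + r_gps
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1.0 - k0) * P[0][0], (1.0 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x

random.seed(2)
dt, n, bias = 0.1, 100, 0.2  # assumed accelerometer bias (m/s^2); vehicle at rest
accels = [bias + random.gauss(0.0, 0.05) for _ in range(n)]
gps = [random.gauss(0.0, 1.0) for _ in range(n)]  # unbiased position fixes

# INS-only dead reckoning drifts quadratically under the bias.
p = v = 0.0
for a in accels:
    p, v = p + dt * v + 0.5 * dt * dt * a, v + dt * a
dr_err = abs(p)
kf_err = abs(kalman_gps_ins(accels, gps, dt)[0])
```

The fused estimate stays bounded while pure inertial dead reckoning drifts, which is the motivation for combining the two sensors.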
A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Fan, Liang-Shih
2014-07-01
A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, including the retraction technique, the multi-direct forcing method and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is, however, formulated with a further improvement: the implementation of high-order Runge-Kutta schemes in the coupled fluid-particle interaction. The major challenge in implementing high-order Runge-Kutta schemes in the LBM is that flow information such as density and velocity cannot be directly obtained at a fractional time step, since the LBM only provides the flow information at integer time steps. This challenge is, however, overcome in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid-particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge-Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve the second-order accuracy are found to be around 0.30 and -0.47 times the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotational spheres indicate that the lift force produced by the Magnus effect can be very significant relative to the magnitude of the drag force at practical rotating speeds of the spheres. This finding
A new noninvasive method for the accurate and precise assessment of varicose vein diameters.
Baldassarre, Damiano; Pustina, Linda; Castelnuovo, Samuela; Bondioli, Alighiero; Carlà, Matteo; Sirtori, Cesare R
2003-01-01
The feasibility and reproducibility of a new ultrasonic method for the direct assessment of maximal varicose vein diameter (VVD) were evaluated. A study was also performed to demonstrate the capacity of the method to detect changes in venous diameter induced by a pharmacologic treatment. Patients with varicose vein disease were recruited. A method that allows the precise positioning of patient and transducer and performance of scans in a gel-bath was developed. Maximal VVD was recorded both in the standing and supine positions. The intraassay reproducibility was determined by replicate scans made within 15 minutes in both positions. The interobserver variability was assessed by comparing VVDs measured during the first phase baseline examination with those obtained during baseline examinations in the second phase of the study. The error in reproducibility of VVD determinations was 5.3% when diameters were evaluated in the standing position and 6.4% when assessed in the supine position. The intramethod agreement was high, with a bias between readings of 0.06 +/- 0.18 mm and of -0.02 +/- 0.19 mm, respectively, in standing and supine positions. Correlation coefficients were better than 0.99 in both positions. The method appears to be sensitive enough to detect small changes in VVDs induced by treatments. The proposed technique provides a tool of potential valid use in the detection and in vivo monitoring of VVD changes in patients with varicose vein disease. The method offers an innovative approach to obtain a quantitative assessment of varicose vein progression and of treatment effects, thus providing a basis for epidemiologic surveys.
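The reproducibility figures quoted above (bias ± SD between replicate readings, percent error) follow standard replicate-agreement formulas, which can be sketched as below. The diameter values are made up for illustration, not the study's data.

```python
import math
from statistics import mean, stdev

def replicate_agreement(pairs):
    """Bias +/- SD of paired differences and within-pair CV% for replicate scans."""
    diffs = [a - b for a, b in pairs]
    bias = mean(diffs)
    sd_diff = stdev(diffs)
    # Within-subject SD from duplicate measurements: sqrt(mean(d^2) / 2).
    sd_within = math.sqrt(mean([d * d for d in diffs]) / 2.0)
    cv_pct = 100.0 * sd_within / mean([x for p in pairs for x in p])
    return bias, sd_diff, cv_pct

# Hypothetical replicate vein-diameter readings in mm (not the study's data).
pairs = [(5.0, 5.2), (6.1, 5.9), (4.8, 5.0), (7.0, 6.8)]
bias, sd_diff, cv = replicate_agreement(pairs)
```

The bias quantifies systematic disagreement between readings, while the within-pair CV corresponds to the percent reproducibility error the abstract reports.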
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2007-01-01
A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. Even with these up-to-date improvements, however, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability provides flexibility for use in complex geometries, and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng
2016-01-01
An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
Earthquake prediction: Simple methods for complex phenomena
NASA Astrophysics Data System (ADS)
Luen, Bradley
2010-09-01
Earthquake predictions are often either based on stochastic models, or tested using stochastic models. Tests of predictions often tacitly assume predictions do not depend on past seismicity, which is false. We construct a naive predictor that, following each large earthquake, predicts another large earthquake will occur nearby soon. Because this "automatic alarm" strategy exploits clustering, it succeeds beyond "chance" according to a test that holds the predictions fixed. Some researchers try to remove clustering from earthquake catalogs and model the remaining events. There have been claims that the declustered catalogs are Poisson on the basis of statistical tests we show to be weak. Better tests show that declustered catalogs are not Poisson. In fact, there is evidence that events in declustered catalogs do not have exchangeable times given the locations, a necessary condition for the Poisson. If seismicity followed a stochastic process, an optimal predictor would turn on an alarm when the conditional intensity is high. The Epidemic-Type Aftershock (ETAS) model is a popular point process model that includes clustering. It has many parameters, but is still a simplification of seismicity. Estimating the model is difficult, and estimated parameters often give a non-stationary model. Even if the model is ETAS, temporal predictions based on the ETAS conditional intensity are not much better than those of magnitude-dependent automatic (MDA) alarms, a much simpler strategy with only one parameter instead of five. For a catalog of Southern Californian seismicity, ETAS predictions again offer only slight improvement over MDA alarms
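The automatic alarm strategy is simple enough to sketch directly. The hand-made catalog below is illustrative (not Southern California data): after every event of magnitude ≥ 5, an alarm sounds for τ time units, and the hit rate among later qualifying events is compared with the fraction of time the alarm is on.

```python
def automatic_alarm(times, mags, mag_min, tau):
    """Count qualifying events that fall inside an alarm triggered by an
    earlier qualifying event (hits) versus those that do not (misses)."""
    hits = misses = 0
    for i, (t, m) in enumerate(zip(times, mags)):
        if m < mag_min:
            continue
        covered = any(mags[j] >= mag_min and 0 < t - times[j] <= tau
                      for j in range(i))
        if covered:
            hits += 1
        else:
            misses += 1
    return hits, misses

def alarm_time_fraction(times, mags, mag_min, tau, t_end):
    """Fraction of [0, t_end] covered by the union of alarm intervals."""
    spans = sorted((t, min(t + tau, t_end)) for t, m in zip(times, mags)
                   if m >= mag_min)
    covered, cur_lo, cur_hi = 0.0, None, None
    for lo, hi in spans:
        if cur_hi is None or lo > cur_hi:
            if cur_hi is not None:
                covered += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        covered += cur_hi - cur_lo
    return covered / t_end

# Hand-made clustered catalog (times, magnitudes) -- illustrative only.
times = [0.0, 1.0, 2.0, 50.0, 51.0, 100.0]
mags = [6.0, 5.1, 5.3, 6.2, 5.5, 6.1]
hits, misses = automatic_alarm(times, mags, mag_min=5.0, tau=5.0)
frac = alarm_time_fraction(times, mags, 5.0, 5.0, t_end=105.0)
```

Here half the qualifying events are "predicted" while the alarm is on only about 17% of the time, illustrating how a one-parameter strategy exploits clustering to beat a time-random baseline.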
Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method
Sinha, Debalina; Pavanello, Michele
2015-08-28
The correlation energy of interaction is an elusive and sought-after interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, either employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.
A Variable Coefficient Method for Accurate Monte Carlo Simulation of Dynamic Asset Price
NASA Astrophysics Data System (ADS)
Li, Yiming; Hung, Chih-Young; Yu, Shao-Ming; Chiang, Su-Yun; Chiang, Yi-Hui; Cheng, Hui-Wen
2007-07-01
In this work, we propose an adaptive Monte Carlo (MC) simulation technique to compute the sample paths for the dynamical asset price. In contrast to conventional MC simulation with constant drift and volatility (μ,σ), our MC simulation is performed with variable coefficient methods for (μ,σ) in the solution scheme, where the explored dynamic asset pricing model starts from the formulation of geometric Brownian motion. With the method of simultaneously updating (μ,σ), more than 5,000 runs of MC simulation are performed to achieve the basic accuracy of the large-scale computation and to suppress statistical variance. Daily changes of the stock market indices in Taiwan and Japan are investigated and analyzed.
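A minimal version of such a simulation: log-Euler Monte Carlo paths of geometric Brownian motion with time-varying (μ, σ). The coefficient schedules below are invented for illustration, since the abstract does not specify the update rule; the sample mean of the terminal log-price is checked against the discrete drift integral.

```python
import math
import random

def simulate_paths(s0, mu, sigma, t_end, steps, n_paths, seed=0):
    """Monte Carlo terminal log-prices of geometric Brownian motion with
    time-varying drift mu(t) and volatility sigma(t) (log-Euler scheme)."""
    rng = random.Random(seed)
    dt = t_end / steps
    sq = math.sqrt(dt)
    finals = []
    for _ in range(n_paths):
        log_s = math.log(s0)
        for k in range(steps):
            t = k * dt
            log_s += (mu(t) - 0.5 * sigma(t) ** 2) * dt \
                     + sigma(t) * sq * rng.gauss(0.0, 1.0)
        finals.append(log_s)
    return finals

# Illustrative coefficient schedules (assumptions, not the paper's update rule).
mu = lambda t: 0.05 + 0.02 * t
sigma = lambda t: 0.20 + 0.05 * t
s0, t_end, steps, n_paths = 100.0, 1.0, 250, 5000
finals = simulate_paths(s0, mu, sigma, t_end, steps, n_paths)

dt = t_end / steps
expected = math.log(s0) + sum((mu(k * dt) - 0.5 * sigma(k * dt) ** 2) * dt
                              for k in range(steps))
mc_mean = sum(finals) / n_paths
```

With 5,000 paths the Monte Carlo standard error of the mean log-price is a few thousandths, so the sample mean lands close to the drift integral.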
NASA Technical Reports Server (NTRS)
Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)
2008-01-01
A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.
Methods to achieve accurate projection of regional and global raster databases
Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.
2002-01-01
This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
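Goal (1), adjusting the projection with latitude so raster cells keep a constant ground size, can be sketched for the simplest case of longitude spacing: the east-west width of a cell shrinks as cos(latitude), so the longitude step must grow by the same factor. The spherical Earth radius and 1-km target cell are illustrative assumptions.

```python
import math

EARTH_RADIUS_KM = 6371.0  # spherical approximation, illustrative

def lon_step_for_equal_width(target_km, lat_deg):
    """Longitude step (degrees) giving a constant on-the-ground cell width
    at the given latitude: dlon = target / (R * cos(lat))."""
    return math.degrees(target_km /
                        (EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))))

def ground_width_km(lon_step_deg, lat_deg):
    """East-west ground distance spanned by a longitude step at a latitude."""
    return (EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))
            * math.radians(lon_step_deg))

target = 1.0  # 1-km cells
w0 = ground_width_km(lon_step_for_equal_width(target, 0.0), 0.0)
w60 = ground_width_km(lon_step_for_equal_width(target, 60.0), 60.0)
step_ratio = (lon_step_for_equal_width(target, 60.0)
              / lon_step_for_equal_width(target, 0.0))
```

At 60° latitude the longitude step must double (1/cos 60° = 2) to preserve the 1-km ground width, which is exactly the kind of latitude-dependent adjustment a dynamic projection formula has to make.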
NASA Technical Reports Server (NTRS)
Ko, William L.
1987-01-01
The accuracies of the Southwell method and the force/stiffness (F/S) method are examined when the methods are used to predict buckling loads of hypersonic aircraft wing tubular panels from nondestructive buckling test data. Various factors affecting the accuracy of the two methods are discussed. The effect of the load cutoff point in the nondestructive buckling tests on the accuracy of the two methods is discussed in detail. For the tubular panels under pure compression, the F/S method was found to give more accurate buckling load predictions than the Southwell method, which excessively overpredicts the buckling load. The Southwell method was also found to require a higher load cutoff point than the F/S method. When the F/S method is used to predict the buckling load of tubular panels under pure compression, a load cutoff point of approximately 50 percent of the critical load gives reasonably accurate predictions.
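The Southwell method referred to above can be illustrated with a short sketch: for an imperfect column, the measured deflection d under load P satisfies d/P = d/P_cr + d0/P_cr, so a linear fit of d/P versus d recovers the critical load from sub-critical test data. All numbers below are hypothetical, not taken from the panel tests:

```python
import numpy as np

# Hypothetical nondestructive test data: applied loads P (kN) and
# measured lateral deflections d (mm), all well below the buckling load.
P_cr_true, d0 = 100.0, 0.5          # assumed critical load and imperfection
P = np.linspace(20.0, 50.0, 7)      # load cutoff at 50% of the critical load
d = d0 * P / (P_cr_true - P)        # classical imperfect-column response

# Southwell plot: d/P versus d is linear with slope 1/P_cr.
slope, intercept = np.polyfit(d, d / P, 1)
P_cr_est = 1.0 / slope
print(round(P_cr_est, 1))           # recovers the critical load: 100.0
```

With noisy real data the fitted slope (and hence P_cr) depends on the load cutoff point, which is the sensitivity the abstract discusses.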
EEMD based pitch evaluation method for accurate grating measurement by AFM
NASA Astrophysics Data System (ADS)
Li, Changsheng; Yang, Shuming; Wang, Chenying; Jiang, Zhuangde
2016-09-01
The precision of pitch measurement and AFM calibration is significantly influenced by the grating pitch evaluation method. This paper presents an ensemble empirical mode decomposition (EEMD) based pitch evaluation method that relieves the accuracy deterioration caused by the high- and low-frequency components of the scanning profile during pitch evaluation. Simulation analysis shows that applying EEMD can improve the pitch accuracy of the FFT-FT algorithm; the pitch error was small when the iteration number of the FFT-FT algorithm was 8. AFM measurements of a 500 nm-pitch one-dimensional grating show that the EEMD-based pitch evaluation method can improve the pitch precision, especially the grating line position precision, and greatly expand the applicability of the gravity center algorithm when particles and impression marks are distributed on the sample surface. The measurements indicate that the nonlinearity was stable, and that the nonlinearity of the x axis and of forward scanning was much smaller than that of their counterparts. Finally, a detailed pitch measurement uncertainty evaluation model suitable for commercial AFMs is demonstrated, achieving a pitch uncertainty in the sub-nanometer range. The pitch uncertainty was reduced by about 10% by EEMD.
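The FFT stage of such a pitch evaluation can be sketched on a synthetic profile with assumed parameters; the EEMD decomposition and the paper's iterative FFT-FT refinement are omitted here:

```python
import numpy as np

# Synthetic AFM line profile of a 500 nm-pitch grating (assumed parameters).
pitch = 500.0                        # nm
dx = 2.0                             # sampling step, nm
x = np.arange(0.0, 20000.0, dx)      # 20 um scan line
rng = np.random.default_rng(0)
profile = np.cos(2 * np.pi * x / pitch) + 0.05 * rng.normal(size=x.size)

# FFT-based pitch estimate: locate the dominant spatial frequency.
spec = np.abs(np.fft.rfft(profile - profile.mean()))
freqs = np.fft.rfftfreq(profile.size, d=dx)
f_peak = freqs[np.argmax(spec)]
print(round(1.0 / f_peak, 1))        # estimated pitch in nm: 500.0
```

In practice, low-frequency drift and high-frequency noise bias this peak, which is what the EEMD preprocessing is designed to remove before the pitch is evaluated.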
NASA Astrophysics Data System (ADS)
Sakuma, Hiroki; Okamoto, Atsushi; Shibukawa, Atsushi; Goto, Yuta; Tomita, Akihisa
2016-02-01
We propose a spatial mode generation technology using spatial cross modulation (SCM) for mode division multiplexing (MDM). The best-known method for generating arbitrary complex amplitude fields is to display an off-axis computer-generated hologram (CGH) on a spatial light modulator (SLM). In this method, however, the desired complex amplitude field is obtained from the first-order diffraction light, which critically lowers the light utilization efficiency. In SCM, by contrast, the desired complex field is provided by the zeroth-order diffraction light. For this reason, our technology can generate spatial modes with high light utilization efficiency in addition to high accuracy. In this study, a numerical simulation was first performed to verify that SCM is applicable to spatial mode generation. Next, we compared our technology with a technology using an off-axis amplitude hologram, as a representative complex amplitude generation method, from the two viewpoints of coupling efficiency and light utilization efficiency. The simulation results showed that our technology can achieve considerably higher light utilization efficiency while maintaining coupling efficiency comparable to that of the off-axis amplitude hologram technology. Finally, we performed an experiment on spatial mode generation using SCM. The experimental results showed that our technology has great potential to realize spatial mode generation with high accuracy.
A Gene-Specific Method for Predicting Hemophilia-Causing Point Mutations
Hamasaki-Katagiri, Nobuko; Salari, Raheleh; Wu, Andrew; Qi, Yini; Schiller, Tal; Filiberto, Amanda C.; Schisterman, Enrique F.; Komar, Anton A.; Przytycka, Teresa M.; Kimchi-Sarfaty, Chava
2014-01-01
A fundamental goal of medical genetics is the accurate prediction of genotype–phenotype correlations. As an approach to develop more accurate in silico tools for prediction of disease-causing mutations of structural proteins, we present a gene- and disease-specific prediction tool based on a large systematic analysis of missense mutations from hemophilia A (HA) patients. Our HA-specific prediction tool, HApredictor, showed disease prediction accuracy comparable to other publicly available prediction software. In contrast to those methods, its performance is not limited to non-synonymous mutations. Given the role of synonymous mutations in disease and drug codon optimization, we propose that utilizing a gene- and disease-specific method can be highly useful to make functional predictions possible even for synonymous mutations. Incorporating computational metrics at both nucleotide and amino acid levels along with multiple protein sequence/structure alignment significantly improved the predictive performance of our tool. HApredictor is freely available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Przytycka/HA_Predict/index.htm. PMID:23920358
Potential Method of Predicting Coronal Mass Ejection
NASA Astrophysics Data System (ADS)
Imholt, Timothy
2001-10-01
Coronal Mass Ejections (CMEs) may be described as blasts of gas and highly charged solar mass fragments ejected into space. These ejections, when directed toward Earth, have many different effects on terrestrial systems, ranging from the Aurora Borealis to changes in wireless communication. The importance of early prediction of these solar events cannot be overlooked. Several models are currently accepted and utilized to predict these events; however, earlier prediction of both the event and the location on the Sun where it occurs would allow earlier warnings as to when they will affect man-made systems. A better prediction could perhaps be achieved by utilizing low angular resolution radio telescope arrays to catalog data from the Sun at different radio frequencies on a regular basis. Once these data are cataloged, a better predictor for these CMEs could be found. We propose a model that allows a prediction lead time that appears to be longer than 24 hours.
Potential Method of Predicting Coronal Mass Ejection
NASA Astrophysics Data System (ADS)
Imholt, Timothy; Roberts, J. A.; Scott, J. B.; University Of North Texas Team
2000-10-01
Coronal Mass Ejections (CMEs) may be described as blasts of gas and highly charged solar mass fragments ejected into space. These ejections, when directed toward Earth, have many different effects on terrestrial systems, ranging from the Aurora Borealis to changes in wireless communications. The importance of early prediction of these solar events cannot be overlooked. Several models are currently accepted and utilized to predict these events; however, earlier prediction of both the event and the location on the Sun where it occurs would allow earlier warnings as to when they will affect man-made systems. A better prediction could perhaps be achieved by utilizing low angular resolution radio telescope arrays to catalog data from the Sun at different radio frequencies on a regular basis. Once these data are cataloged, a better predictor for these CMEs could be found. We propose a model that allows a prediction lead time that appears to be longer than 24 hours.
Mayes, Janice M; Mouraviev, Vladimir; Sun, Leon; Tsivian, Matvey; Madden, John F; Polascik, Thomas J
2011-01-01
We evaluate the reliability of routine sextant prostate biopsy for detecting unilateral lesions. A total of 365 men with complete records, including all clinical and pathologic variables, who underwent a preoperative sextant biopsy and subsequent radical prostatectomy (RP) for clinically localized prostate cancer at our medical center between January 1996 and December 2006, were identified. When the sextant biopsy detects unilateral disease, according to the RP results, the negative predictive value (NPV) is high (91%), with a low false negative rate (9%). However, the sextant biopsy has a positive predictive value (PPV) of 28%, with a high false positive rate (72%). Therefore, a routine sextant prostate biopsy cannot provide reliable, accurate information about the unilaterality of tumor lesion(s).
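The predictive values quoted above follow from standard confusion-matrix arithmetic; the counts below are hypothetical (chosen per 100 biopsy calls of each kind), and only the resulting rates come from the reported results:

```python
# Hypothetical counts per 100 biopsy calls of each kind; only the rates
# (28%, 72%, 91%, 9%) come from the reported results.
tp, fp = 28, 72   # biopsy calls the lesion unilateral: true / false positives
tn, fn = 91, 9    # biopsy calls the disease bilateral: true / false negatives

ppv = tp / (tp + fp)   # positive predictive value = 0.28
npv = tn / (tn + fn)   # negative predictive value = 0.91
print(ppv, npv)
```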
Integrated method for combustion stability prediction
NASA Astrophysics Data System (ADS)
Yu, Y. C.; O'Hara, L.; Smith, R. J.; Anderson, W. E.; Merkle, C. L.
2011-10-01
Major obstacles to overcoming combustion instability include the absence of a mechanistic, a priori prediction capability, and the difficulty of studying instability in the laboratory owing to the perceived need for testing at full-scale pressure and geometry to ensure that the important processes are maintained. A hierarchical approach to combustion instability is described that combines experiment, analysis, and high-fidelity computation to develop combustion response submodels that can be used in engineering-level design analysis. The paper provides an illustrative example of how these elements are used to develop a prediction of growth rates in model rocket combustors that generate spontaneous longitudinal combustion instabilities.
An Accurate Calibration Method Based on Velocity in a Rotational Inertial Navigation System.
Zhang, Qian; Wang, Lei; Liu, Zengjun; Feng, Peide
2015-07-28
Rotation modulation is an effective method to enhance the accuracy of an inertial navigation system (INS) by modulating the gyroscope drifts and accelerometer bias errors into periodically varying components. A typical rotational INS (RINS) rotates the inertial measurement unit (IMU) about the vertical axis, so that the horizontal sensors' errors are modulated; however, the azimuth angle error is closely related to the vertical gyro drift, which should also be modulated effectively. In this paper, a new rotation strategy for a dual-axis RINS is proposed in which the drifts of all three gyros can be modulated. Experimental results from a real dual-axis RINS demonstrate that the maximum azimuth angle error is decreased from 0.04° to less than 0.01° over 1 h. Importantly, however, the change of rotation strategy introduces additional errors in the velocity, which is unacceptable in a high-precision INS. The paper therefore studies the underlying cause of the horizontal velocity errors in detail, and a corresponding new calibration method is designed. Experimental results show that after calibration and compensation, the fluctuations and steps in the velocity curve disappear and the velocity precision is improved.
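The modulation principle can be sketched numerically: a constant gyro drift carried through a rotation about the vertical axis averages to zero in its horizontal components over one full revolution, while the vertical component is untouched, which is why a single vertical rotation axis leaves the vertical gyro drift unmodulated. The drift values below are hypothetical:

```python
import numpy as np

# Constant gyro drifts in the body frame (hypothetical values, deg/h).
drift_body = np.array([0.01, 0.02, 0.015])
theta = np.linspace(0.0, 2 * np.pi, 3600, endpoint=False)

def nav_frame_drift(t):
    """Drift seen in the navigation frame when the IMU has rotated by angle t
    about the vertical (z) axis."""
    c, s = np.cos(t), np.sin(t)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ drift_body

# Average drift over one full rotation: horizontal components cancel,
# the vertical component survives unmodulated.
avg = np.mean([nav_frame_drift(t) for t in theta], axis=0)
print(np.round(avg, 6))
```

Modulating the third component as well is what motivates the dual-axis rotation strategy of the paper.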
Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros
2014-01-01
When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601
An Accurate Calibration Method Based on Velocity in a Rotational Inertial Navigation System
Zhang, Qian; Wang, Lei; Liu, Zengjun; Feng, Peide
2015-01-01
Rotation modulation is an effective method to enhance the accuracy of an inertial navigation system (INS) by modulating the gyroscope drifts and accelerometer bias errors into periodically varying components. A typical rotational INS (RINS) rotates the inertial measurement unit (IMU) about the vertical axis, so that the horizontal sensors' errors are modulated; however, the azimuth angle error is closely related to the vertical gyro drift, which should also be modulated effectively. In this paper, a new rotation strategy for a dual-axis RINS is proposed in which the drifts of all three gyros can be modulated. Experimental results from a real dual-axis RINS demonstrate that the maximum azimuth angle error is decreased from 0.04° to less than 0.01° over 1 h. Importantly, however, the change of rotation strategy introduces additional errors in the velocity, which is unacceptable in a high-precision INS. The paper therefore studies the underlying cause of the horizontal velocity errors in detail, and a corresponding new calibration method is designed. Experimental results show that after calibration and compensation, the fluctuations and steps in the velocity curve disappear and the velocity precision is improved. PMID:26225983
A novel method for more accurately mapping the surface temperature of ultrasonic transducers.
Axell, Richard G; Hopper, Richard H; Jarritt, Peter H; Oxley, Chris H
2011-10-01
This paper introduces a novel method for measuring the surface temperature of ultrasound transducer membranes and compares it with two standard measurement techniques. The surface temperature rise was measured as defined in IEC Standard 60601-2-37. The measurement techniques were (i) a thermocouple, (ii) a thermal camera and (iii) a novel infra-red (IR) "micro-sensor." Peak transducer surface temperature measurements taken with the thermocouple and the thermal camera were −3.7 ± 0.7 (95% CI)°C and −4.3 ± 1.8 (95% CI)°C relative to the IEC Standard limits, i.e., within those limits. Measurements taken with the novel IR micro-sensor exceeded the limits by 3.3 ± 0.9 (95% CI)°C. The discrepancy between our novel method and the standard techniques could have direct patient safety implications, because the IR micro-sensor measurements were beyond the set limits. The spatial resolution of the measurement technique is not well defined in the IEC Standard, and this has to be taken into consideration when selecting the measurement technique used to determine the maximum surface temperature.
A method for the accurate and smooth approximation of standard thermodynamic functions
NASA Astrophysics Data System (ADS)
Coufal, O.
2013-01-01
A method is proposed for calculating approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions: in contrast to the approximations used hitherto, the approximation functions are continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented in the SmoothSTF program, written in C++, which is part of this paper.
Program summary
Program title: SmoothSTF
Catalogue identifier: AENH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3807
No. of bytes in distributed program, including test data, etc.: 131965
Distribution format: tar.gz
Programming language: C++
Computer: Any computer with gcc version 4.3.2 compiler
Operating s
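As a minimal illustration of fitting a smooth approximation within a single phase interval, the sketch below uses an ordinary least-squares polynomial fit (this is not the SmoothSTF algorithm, and the heat-capacity data are synthetic):

```python
import numpy as np

# Synthetic heat-capacity data on one phase interval (no phase transition),
# loosely shaped like a Cp(T) curve in J/(mol K); values are illustrative.
T = np.linspace(300.0, 1000.0, 15)            # temperature grid, K
cp = 25.0 + 0.012 * T - 1.2e-6 * T**2

# A low-degree polynomial is continuous and smooth within the interval;
# SmoothSTF additionally enforces consistency at interval boundaries.
coeffs = np.polyfit(T, cp, 2)
cp_fit = np.polyval(coeffs, 650.0)            # evaluate mid-interval
print(round(cp_fit, 3))                       # 32.293, matching the data
```

Across a phase transformation the function is genuinely discontinuous, so each interval between transformations gets its own approximation.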